00:00:00.001 Started by upstream project "autotest-spdk-v24.01-LTS-vs-dpdk-v23.11" build number 622
00:00:00.001 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3288
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.001 Started by timer
00:00:00.001 Started by timer
00:00:00.001 Started by timer
00:00:00.001 Started by timer
00:00:00.040 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.041 The recommended git tool is: git
00:00:00.041 using credential 00000000-0000-0000-0000-000000000002
00:00:00.043 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.057 Fetching changes from the remote Git repository
00:00:00.064 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.084 Using shallow fetch with depth 1
00:00:00.084 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.084 > git --version # timeout=10
00:00:00.109 > git --version # 'git version 2.39.2'
00:00:00.109 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.139 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.139 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:03.464 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:03.476 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:03.487 Checking out Revision 6b67f5fa1cb27c9c410cb5dac6df31d28ba79422 (FETCH_HEAD)
00:00:03.488 > git config core.sparsecheckout # timeout=10
00:00:03.499 > git read-tree -mu HEAD # timeout=10
00:00:03.514 > git checkout -f 6b67f5fa1cb27c9c410cb5dac6df31d28ba79422 # timeout=5
00:00:03.534 Commit message: "doc: add chapter about running CI Vagrant images on dev-systems"
00:00:03.534 > git rev-list --no-walk 6b67f5fa1cb27c9c410cb5dac6df31d28ba79422 # timeout=10
00:00:03.640 [Pipeline] Start of Pipeline
00:00:03.657 [Pipeline] library
00:00:03.659 Loading library shm_lib@master
00:00:03.659 Library shm_lib@master is cached. Copying from home.
00:00:03.677 [Pipeline] node
00:00:03.695 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:03.696 [Pipeline] {
00:00:03.707 [Pipeline] catchError
00:00:03.708 [Pipeline] {
00:00:03.722 [Pipeline] wrap
00:00:03.729 [Pipeline] {
00:00:03.736 [Pipeline] stage
00:00:03.737 [Pipeline] { (Prologue)
00:00:03.929 [Pipeline] sh
00:00:04.209 + logger -p user.info -t JENKINS-CI
00:00:04.227 [Pipeline] echo
00:00:04.229 Node: GP11
00:00:04.237 [Pipeline] sh
00:00:04.537 [Pipeline] setCustomBuildProperty
00:00:04.546 [Pipeline] echo
00:00:04.547 Cleanup processes
00:00:04.550 [Pipeline] sh
00:00:04.829 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:04.829 3568789 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:04.841 [Pipeline] sh
00:00:05.126 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:05.126 ++ awk '{print $1}'
00:00:05.126 ++ grep -v 'sudo pgrep'
00:00:05.126 + sudo kill -9
00:00:05.126 + true
00:00:05.139 [Pipeline] cleanWs
00:00:05.147 [WS-CLEANUP] Deleting project workspace...
00:00:05.147 [WS-CLEANUP] Deferred wipeout is used...
00:00:05.153 [WS-CLEANUP] done
00:00:05.156 [Pipeline] setCustomBuildProperty
00:00:05.166 [Pipeline] sh
00:00:05.446 + sudo git config --global --replace-all safe.directory '*'
00:00:05.519 [Pipeline] httpRequest
00:00:05.556 [Pipeline] echo
00:00:05.558 Sorcerer 10.211.164.101 is alive
00:00:05.566 [Pipeline] httpRequest
00:00:05.570 HttpMethod: GET
00:00:05.571 URL: http://10.211.164.101/packages/jbp_6b67f5fa1cb27c9c410cb5dac6df31d28ba79422.tar.gz
00:00:05.571 Sending request to url: http://10.211.164.101/packages/jbp_6b67f5fa1cb27c9c410cb5dac6df31d28ba79422.tar.gz
00:00:05.586 Response Code: HTTP/1.1 200 OK
00:00:05.586 Success: Status code 200 is in the accepted range: 200,404
00:00:05.586 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_6b67f5fa1cb27c9c410cb5dac6df31d28ba79422.tar.gz
00:00:07.189 [Pipeline] sh
00:00:07.469 + tar --no-same-owner -xf jbp_6b67f5fa1cb27c9c410cb5dac6df31d28ba79422.tar.gz
00:00:07.484 [Pipeline] httpRequest
00:00:07.513 [Pipeline] echo
00:00:07.514 Sorcerer 10.211.164.101 is alive
00:00:07.520 [Pipeline] httpRequest
00:00:07.524 HttpMethod: GET
00:00:07.525 URL: http://10.211.164.101/packages/spdk_dbef7efacb6f3438cd0fe1344a67946669fb1419.tar.gz
00:00:07.526 Sending request to url: http://10.211.164.101/packages/spdk_dbef7efacb6f3438cd0fe1344a67946669fb1419.tar.gz
00:00:07.541 Response Code: HTTP/1.1 200 OK
00:00:07.542 Success: Status code 200 is in the accepted range: 200,404
00:00:07.542 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_dbef7efacb6f3438cd0fe1344a67946669fb1419.tar.gz
00:01:23.036 [Pipeline] sh
00:01:23.323 + tar --no-same-owner -xf spdk_dbef7efacb6f3438cd0fe1344a67946669fb1419.tar.gz
00:01:26.619 [Pipeline] sh
00:01:26.905 + git -C spdk log --oneline -n5
00:01:26.905 dbef7efac test: fix dpdk builds on ubuntu24
00:01:26.905 4b94202c6 lib/event: Bug fix for framework_set_scheduler
00:01:26.905 507e9ba07 nvme: add lock_depth for ctrlr_lock
00:01:26.905 62fda7b5f nvme: check pthread_mutex_destroy() return value
00:01:26.905 e03c164a1 nvme: add nvme_ctrlr_lock
00:01:26.924 [Pipeline] withCredentials
00:01:26.935 > git --version # timeout=10
00:01:26.948 > git --version # 'git version 2.39.2'
00:01:26.967 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS
00:01:26.970 [Pipeline] {
00:01:26.979 [Pipeline] retry
00:01:26.981 [Pipeline] {
00:01:26.998 [Pipeline] sh
00:01:27.280 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11
00:01:27.863 [Pipeline] }
00:01:27.887 [Pipeline] // retry
00:01:27.894 [Pipeline] }
00:01:27.916 [Pipeline] // withCredentials
00:01:27.927 [Pipeline] httpRequest
00:01:27.945 [Pipeline] echo
00:01:27.947 Sorcerer 10.211.164.101 is alive
00:01:27.958 [Pipeline] httpRequest
00:01:27.963 HttpMethod: GET
00:01:27.964 URL: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:01:27.964 Sending request to url: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:01:27.967 Response Code: HTTP/1.1 200 OK
00:01:27.967 Success: Status code 200 is in the accepted range: 200,404
00:01:27.968 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:01:31.384 [Pipeline] sh
00:01:31.668 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:01:33.586 [Pipeline] sh
00:01:33.871 + git -C dpdk log --oneline -n5
00:01:33.872 eeb0605f11 version: 23.11.0
00:01:33.872 238778122a doc: update release notes for 23.11
00:01:33.872 46aa6b3cfc doc: fix description of RSS features
00:01:33.872 dd88f51a57 devtools: forbid DPDK API in cnxk base driver
00:01:33.872 7e421ae345 devtools: support skipping forbid rule check
00:01:33.883 [Pipeline] }
00:01:33.899 [Pipeline] // stage
00:01:33.911 [Pipeline] stage
00:01:33.914 [Pipeline] { (Prepare)
00:01:33.935 [Pipeline] writeFile
00:01:33.952 [Pipeline] sh
00:01:34.237 + logger -p user.info -t JENKINS-CI
00:01:34.250 [Pipeline] sh
00:01:34.535 + logger -p user.info -t JENKINS-CI
00:01:34.548 [Pipeline] sh
00:01:34.832 + cat autorun-spdk.conf
00:01:34.832 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:34.832 SPDK_TEST_NVMF=1
00:01:34.832 SPDK_TEST_NVME_CLI=1
00:01:34.832 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:34.832 SPDK_TEST_NVMF_NICS=e810
00:01:34.832 SPDK_TEST_VFIOUSER=1
00:01:34.832 SPDK_RUN_UBSAN=1
00:01:34.832 NET_TYPE=phy
00:01:34.832 SPDK_TEST_NATIVE_DPDK=v23.11
00:01:34.832 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:34.840 RUN_NIGHTLY=1
00:01:34.845 [Pipeline] readFile
00:01:34.871 [Pipeline] withEnv
00:01:34.874 [Pipeline] {
00:01:34.889 [Pipeline] sh
00:01:35.179 + set -ex
00:01:35.179 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:01:35.179 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:35.179 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:35.179 ++ SPDK_TEST_NVMF=1
00:01:35.179 ++ SPDK_TEST_NVME_CLI=1
00:01:35.179 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:35.179 ++ SPDK_TEST_NVMF_NICS=e810
00:01:35.179 ++ SPDK_TEST_VFIOUSER=1
00:01:35.179 ++ SPDK_RUN_UBSAN=1
00:01:35.179 ++ NET_TYPE=phy
00:01:35.179 ++ SPDK_TEST_NATIVE_DPDK=v23.11
00:01:35.179 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:35.179 ++ RUN_NIGHTLY=1
00:01:35.179 + case $SPDK_TEST_NVMF_NICS in
00:01:35.179 + DRIVERS=ice
00:01:35.179 + [[ tcp == \r\d\m\a ]]
00:01:35.179 + [[ -n ice ]]
00:01:35.179 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:01:35.179 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:35.179 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:01:35.179 rmmod: ERROR: Module irdma is not currently loaded
00:01:35.179 rmmod: ERROR: Module i40iw is not currently loaded
00:01:35.179 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:35.179 + true
00:01:35.179 + for D in $DRIVERS
00:01:35.179 + sudo modprobe ice
00:01:35.179 + exit 0
00:01:35.189 [Pipeline] }
00:01:35.207 [Pipeline] // withEnv
00:01:35.212 [Pipeline] }
00:01:35.228 [Pipeline] // stage
00:01:35.238 [Pipeline] catchError
00:01:35.239 [Pipeline] {
00:01:35.254 [Pipeline] timeout
00:01:35.254 Timeout set to expire in 50 min
00:01:35.256 [Pipeline] {
00:01:35.272 [Pipeline] stage
00:01:35.274 [Pipeline] { (Tests)
00:01:35.290 [Pipeline] sh
00:01:35.575 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:35.575 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:35.575 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:35.575 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:01:35.575 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:35.575 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:35.575 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:01:35.575 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:35.575 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:35.575 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:35.575 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:01:35.575 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:35.575 + source /etc/os-release
00:01:35.575 ++ NAME='Fedora Linux'
00:01:35.575 ++ VERSION='38 (Cloud Edition)'
00:01:35.575 ++ ID=fedora
00:01:35.575 ++ VERSION_ID=38
00:01:35.575 ++ VERSION_CODENAME=
00:01:35.575 ++ PLATFORM_ID=platform:f38
00:01:35.575 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:01:35.575 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:35.575 ++ LOGO=fedora-logo-icon
00:01:35.575 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:01:35.575 ++ HOME_URL=https://fedoraproject.org/
00:01:35.575 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:01:35.575 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:35.575 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:35.575 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:35.575 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:01:35.575 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:35.575 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:01:35.575 ++ SUPPORT_END=2024-05-14
00:01:35.575 ++ VARIANT='Cloud Edition'
00:01:35.575 ++ VARIANT_ID=cloud
00:01:35.575 + uname -a
00:01:35.575 Linux spdk-gp-11 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:01:35.575 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:36.512 Hugepages
00:01:36.512 node hugesize free / total
00:01:36.512 node0 1048576kB 0 / 0
00:01:36.512 node0 2048kB 0 / 0
00:01:36.512 node1 1048576kB 0 / 0
00:01:36.512 node1 2048kB 0 / 0
00:01:36.512
00:01:36.512 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:36.512 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - -
00:01:36.512 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - -
00:01:36.512 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - -
00:01:36.512 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - -
00:01:36.512 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - -
00:01:36.512 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - -
00:01:36.512 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - -
00:01:36.512 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - -
00:01:36.512 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - -
00:01:36.512 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - -
00:01:36.512 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - -
00:01:36.512 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - -
00:01:36.512 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - -
00:01:36.512 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - -
00:01:36.512 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - -
00:01:36.512 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - -
00:01:36.512 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:01:36.512 + rm -f /tmp/spdk-ld-path
00:01:36.512 + source autorun-spdk.conf
00:01:36.512 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:36.512 ++ SPDK_TEST_NVMF=1
00:01:36.512 ++ SPDK_TEST_NVME_CLI=1
00:01:36.512 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:36.512 ++ SPDK_TEST_NVMF_NICS=e810
00:01:36.512 ++ SPDK_TEST_VFIOUSER=1
00:01:36.512 ++ SPDK_RUN_UBSAN=1
00:01:36.512 ++ NET_TYPE=phy
00:01:36.512 ++ SPDK_TEST_NATIVE_DPDK=v23.11
00:01:36.512 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:36.512 ++ RUN_NIGHTLY=1
00:01:36.512 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:36.512 + [[ -n '' ]]
00:01:36.512 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:36.512 + for M in /var/spdk/build-*-manifest.txt
00:01:36.512 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:36.512 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:36.512 + for M in /var/spdk/build-*-manifest.txt
00:01:36.512 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:36.512 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:36.512 ++ uname
00:01:36.512 + [[ Linux == \L\i\n\u\x ]]
00:01:36.512 + sudo dmesg -T
00:01:36.512 + sudo dmesg --clear
00:01:36.512 + dmesg_pid=3569485
00:01:36.512 + [[ Fedora Linux == FreeBSD ]]
00:01:36.512 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:36.512 + sudo dmesg -Tw
00:01:36.512 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:36.512 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:36.512 + [[ -x /usr/src/fio-static/fio ]]
00:01:36.512 + export FIO_BIN=/usr/src/fio-static/fio
00:01:36.512 + FIO_BIN=/usr/src/fio-static/fio
00:01:36.512 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:36.512 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:36.512 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:36.512 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:36.512 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:36.512 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:36.512 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:36.512 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:36.512 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:36.512 Test configuration:
00:01:36.512 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:36.512 SPDK_TEST_NVMF=1
00:01:36.512 SPDK_TEST_NVME_CLI=1
00:01:36.513 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:36.513 SPDK_TEST_NVMF_NICS=e810
00:01:36.513 SPDK_TEST_VFIOUSER=1
00:01:36.513 SPDK_RUN_UBSAN=1
00:01:36.513 NET_TYPE=phy
00:01:36.513 SPDK_TEST_NATIVE_DPDK=v23.11
00:01:36.513 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:36.774 RUN_NIGHTLY=1
01:22:49 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:01:36.774 01:22:49 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:36.774 01:22:49 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:36.774 01:22:49 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:36.774 01:22:49 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:36.774 01:22:49 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:36.774 01:22:49 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:36.774 01:22:49 -- paths/export.sh@5 -- $ export PATH
00:01:36.774 01:22:49 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:36.774 01:22:49 -- common/autobuild_common.sh@437 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:01:36.774 01:22:49 -- common/autobuild_common.sh@438 -- $ date +%s
00:01:36.774 01:22:49 -- common/autobuild_common.sh@438 -- $ mktemp -dt spdk_1721690569.XXXXXX
00:01:36.774 01:22:49 -- common/autobuild_common.sh@438 -- $ SPDK_WORKSPACE=/tmp/spdk_1721690569.6fYlEu
00:01:36.774 01:22:49 -- common/autobuild_common.sh@440 -- $ [[ -n '' ]]
00:01:36.774 01:22:49 -- common/autobuild_common.sh@444 -- $ '[' -n v23.11 ']'
00:01:36.774 01:22:49 -- common/autobuild_common.sh@445 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:36.774 01:22:49 -- common/autobuild_common.sh@445 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk'
00:01:36.774 01:22:49 -- common/autobuild_common.sh@451 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:01:36.774 01:22:49 -- common/autobuild_common.sh@453 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:01:36.774 01:22:49 -- common/autobuild_common.sh@454 -- $ get_config_params
00:01:36.774 01:22:49 -- common/autotest_common.sh@387 -- $ xtrace_disable
00:01:36.774 01:22:49 -- common/autotest_common.sh@10 -- $ set +x
00:01:36.774 01:22:49 -- common/autobuild_common.sh@454 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build'
00:01:36.774 01:22:49 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:36.774 01:22:49 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:36.774 01:22:49 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:36.774 01:22:49 -- spdk/autobuild.sh@16 -- $ date -u
00:01:36.774 Mon Jul 22 11:22:49 PM UTC 2024
00:01:36.774 01:22:49 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:36.774 LTS-60-gdbef7efac
00:01:36.774 01:22:49 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:01:36.774 01:22:49 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:36.774 01:22:49 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:36.774 01:22:49 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']'
00:01:36.774 01:22:49 -- common/autotest_common.sh@1083 -- $ xtrace_disable
00:01:36.774 01:22:49 -- common/autotest_common.sh@10 -- $ set +x
00:01:36.774 ************************************
00:01:36.774 START TEST ubsan
00:01:36.774 ************************************
00:01:36.774 01:22:49 -- common/autotest_common.sh@1104 -- $ echo 'using ubsan'
00:01:36.774 using ubsan
00:01:36.774
00:01:36.774 real 0m0.000s
00:01:36.774 user 0m0.000s
00:01:36.774 sys 0m0.000s
00:01:36.774 01:22:49 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:01:36.774 01:22:49 -- common/autotest_common.sh@10 -- $ set +x
00:01:36.774 ************************************
00:01:36.774 END TEST ubsan
00:01:36.774 ************************************
00:01:36.774 01:22:49 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']'
00:01:36.774 01:22:49 -- spdk/autobuild.sh@28 -- $ build_native_dpdk
00:01:36.774 01:22:49 -- common/autobuild_common.sh@430 -- $ run_test build_native_dpdk _build_native_dpdk
00:01:36.774 01:22:49 -- common/autotest_common.sh@1077 -- $ '[' 2 -le 1 ']'
00:01:36.774 01:22:49 -- common/autotest_common.sh@1083 -- $ xtrace_disable
00:01:36.774 01:22:49 -- common/autotest_common.sh@10 -- $ set +x
00:01:36.774 ************************************
00:01:36.774 START TEST build_native_dpdk
00:01:36.774 ************************************
00:01:36.774 01:22:49 -- common/autotest_common.sh@1104 -- $ _build_native_dpdk
00:01:36.774 01:22:49 -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir
00:01:36.774 01:22:49 -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir
00:01:36.774 01:22:49 -- common/autobuild_common.sh@50 -- $ local compiler_version
00:01:36.774 01:22:49 -- common/autobuild_common.sh@51 -- $ local compiler
00:01:36.774 01:22:49 -- common/autobuild_common.sh@52 -- $ local dpdk_kmods
00:01:36.774 01:22:49 -- common/autobuild_common.sh@53 -- $ local repo=dpdk
00:01:36.774 01:22:49 -- common/autobuild_common.sh@55 -- $ compiler=gcc
00:01:36.774 01:22:49 -- common/autobuild_common.sh@61 -- $ export CC=gcc
00:01:36.774 01:22:49 -- common/autobuild_common.sh@61 -- $ CC=gcc
00:01:36.774 01:22:49 -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]]
00:01:36.774 01:22:49 -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]]
00:01:36.774 01:22:49 -- common/autobuild_common.sh@68 -- $ gcc -dumpversion
00:01:36.774 01:22:49 -- common/autobuild_common.sh@68 -- $ compiler_version=13
00:01:36.774 01:22:49 -- common/autobuild_common.sh@69 -- $ compiler_version=13
00:01:36.774 01:22:49 -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:36.774 01:22:49 -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:36.774 01:22:49 -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
00:01:36.774 01:22:49 -- common/autobuild_common.sh@73 -- $ [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]]
00:01:36.774 01:22:49 -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:36.774 01:22:49 -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5
00:01:36.774 eeb0605f11 version: 23.11.0
00:01:36.774 238778122a doc: update release notes for 23.11
00:01:36.774 46aa6b3cfc doc: fix description of RSS features
00:01:36.774 dd88f51a57 devtools: forbid DPDK API in cnxk base driver
00:01:36.774 7e421ae345 devtools: support skipping forbid rule check
00:01:36.774 01:22:49 -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon'
00:01:36.774 01:22:49 -- common/autobuild_common.sh@86 -- $ dpdk_ldflags=
00:01:36.774 01:22:49 -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0
00:01:36.774 01:22:49 -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]]
00:01:36.774 01:22:49 -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]]
00:01:36.774 01:22:49 -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror'
00:01:36.774 01:22:49 -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]]
00:01:36.774 01:22:49 -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]]
00:01:36.774 01:22:49 -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow'
00:01:36.774 01:22:49 -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base")
00:01:36.774 01:22:49 -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n
00:01:36.774 01:22:49 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]]
00:01:36.774 01:22:49 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]]
00:01:36.774 01:22:49 -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]]
00:01:36.774 01:22:49 -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
00:01:36.774 01:22:49 -- common/autobuild_common.sh@168 -- $ uname -s
00:01:36.774 01:22:49 -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']'
00:01:36.774 01:22:49 -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0
00:01:36.774 01:22:49 -- scripts/common.sh@372 -- $ cmp_versions 23.11.0 '<' 21.11.0
00:01:36.774 01:22:49 -- scripts/common.sh@332 -- $ local ver1 ver1_l
00:01:36.774 01:22:49 -- scripts/common.sh@333 -- $ local ver2 ver2_l
00:01:36.774 01:22:49 -- scripts/common.sh@335 -- $ IFS=.-:
00:01:36.774 01:22:49 -- scripts/common.sh@335 -- $ read -ra ver1
00:01:36.774 01:22:49 -- scripts/common.sh@336 -- $ IFS=.-:
00:01:36.774 01:22:49 -- scripts/common.sh@336 -- $ read -ra ver2
00:01:36.774 01:22:49 -- scripts/common.sh@337 -- $ local 'op=<'
00:01:36.774 01:22:49 -- scripts/common.sh@339 -- $ ver1_l=3
00:01:36.774 01:22:49 -- scripts/common.sh@340 -- $ ver2_l=3
00:01:36.774 01:22:49 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v
00:01:36.774 01:22:49 -- scripts/common.sh@343 -- $ case "$op" in
00:01:36.774 01:22:49 -- scripts/common.sh@344 -- $ : 1
00:01:36.774 01:22:49 -- scripts/common.sh@363 -- $ (( v = 0 ))
00:01:36.774 01:22:49 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:01:36.774 01:22:49 -- scripts/common.sh@364 -- $ decimal 23
00:01:36.774 01:22:49 -- scripts/common.sh@352 -- $ local d=23
00:01:36.774 01:22:49 -- scripts/common.sh@353 -- $ [[ 23 =~ ^[0-9]+$ ]]
00:01:36.774 01:22:49 -- scripts/common.sh@354 -- $ echo 23
00:01:36.774 01:22:49 -- scripts/common.sh@364 -- $ ver1[v]=23
00:01:36.774 01:22:49 -- scripts/common.sh@365 -- $ decimal 21
00:01:36.774 01:22:49 -- scripts/common.sh@352 -- $ local d=21
00:01:36.774 01:22:49 -- scripts/common.sh@353 -- $ [[ 21 =~ ^[0-9]+$ ]]
00:01:36.774 01:22:49 -- scripts/common.sh@354 -- $ echo 21
00:01:36.774 01:22:49 -- scripts/common.sh@365 -- $ ver2[v]=21
00:01:36.775 01:22:49 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] ))
00:01:36.775 01:22:49 -- scripts/common.sh@366 -- $ return 1
00:01:36.775 01:22:49 -- common/autobuild_common.sh@173 -- $ patch -p1
00:01:36.775 patching file config/rte_config.h
00:01:36.775 Hunk #1 succeeded at 60 (offset 1 line).
00:01:36.775 01:22:49 -- common/autobuild_common.sh@176 -- $ lt 23.11.0 24.07.0
00:01:36.775 01:22:49 -- scripts/common.sh@372 -- $ cmp_versions 23.11.0 '<' 24.07.0
00:01:36.775 01:22:49 -- scripts/common.sh@332 -- $ local ver1 ver1_l
00:01:36.775 01:22:49 -- scripts/common.sh@333 -- $ local ver2 ver2_l
00:01:36.775 01:22:49 -- scripts/common.sh@335 -- $ IFS=.-:
00:01:36.775 01:22:49 -- scripts/common.sh@335 -- $ read -ra ver1
00:01:36.775 01:22:49 -- scripts/common.sh@336 -- $ IFS=.-:
00:01:36.775 01:22:49 -- scripts/common.sh@336 -- $ read -ra ver2
00:01:36.775 01:22:49 -- scripts/common.sh@337 -- $ local 'op=<'
00:01:36.775 01:22:49 -- scripts/common.sh@339 -- $ ver1_l=3
00:01:36.775 01:22:49 -- scripts/common.sh@340 -- $ ver2_l=3
00:01:36.775 01:22:49 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v
00:01:36.775 01:22:49 -- scripts/common.sh@343 -- $ case "$op" in
00:01:36.775 01:22:49 -- scripts/common.sh@344 -- $ : 1
00:01:36.775 01:22:49 -- scripts/common.sh@363 -- $ (( v = 0 ))
00:01:36.775 01:22:49 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:01:36.775 01:22:49 -- scripts/common.sh@364 -- $ decimal 23
00:01:36.775 01:22:49 -- scripts/common.sh@352 -- $ local d=23
00:01:36.775 01:22:49 -- scripts/common.sh@353 -- $ [[ 23 =~ ^[0-9]+$ ]]
00:01:36.775 01:22:49 -- scripts/common.sh@354 -- $ echo 23
00:01:36.775 01:22:49 -- scripts/common.sh@364 -- $ ver1[v]=23
00:01:36.775 01:22:49 -- scripts/common.sh@365 -- $ decimal 24
00:01:36.775 01:22:49 -- scripts/common.sh@352 -- $ local d=24
00:01:36.775 01:22:49 -- scripts/common.sh@353 -- $ [[ 24 =~ ^[0-9]+$ ]]
00:01:36.775 01:22:49 -- scripts/common.sh@354 -- $ echo 24
00:01:36.775 01:22:49 -- scripts/common.sh@365 -- $ ver2[v]=24
00:01:36.775 01:22:49 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] ))
00:01:36.775 01:22:49 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] ))
00:01:36.775 01:22:49 -- scripts/common.sh@367 -- $ return 0
00:01:36.775 01:22:49 -- common/autobuild_common.sh@177 -- $ patch -p1
00:01:36.775 patching file lib/pcapng/rte_pcapng.c
00:01:36.775 01:22:49 -- common/autobuild_common.sh@180 -- $ dpdk_kmods=false
00:01:36.775 01:22:49 -- common/autobuild_common.sh@181 -- $ uname -s
00:01:36.775 01:22:49 -- common/autobuild_common.sh@181 -- $ '[' Linux = FreeBSD ']'
00:01:36.775 01:22:49 -- common/autobuild_common.sh@185 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base
00:01:36.775 01:22:49 -- common/autobuild_common.sh@185 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
00:01:40.977 The Meson build system
00:01:40.977 Version: 1.3.1
00:01:40.977 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
00:01:40.977 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp
00:01:40.977 Build type: native build
00:01:40.977 Program cat found: YES (/usr/bin/cat)
00:01:40.977 Project name: DPDK
00:01:40.977 Project version: 23.11.0
00:01:40.977 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:01:40.977 C linker for the host machine: gcc ld.bfd 2.39-16
00:01:40.977 Host machine cpu family: x86_64
00:01:40.977 Host machine cpu: x86_64
00:01:40.977 Message: ## Building in Developer Mode ##
00:01:40.977 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:40.977 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh)
00:01:40.977 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh)
00:01:40.977 Program python3 found: YES (/usr/bin/python3)
00:01:40.977 Program cat found: YES (/usr/bin/cat)
00:01:40.977 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead.
00:01:40.977 Compiler for C supports arguments -march=native: YES 00:01:40.977 Checking for size of "void *" : 8 00:01:40.977 Checking for size of "void *" : 8 (cached) 00:01:40.977 Library m found: YES 00:01:40.977 Library numa found: YES 00:01:40.977 Has header "numaif.h" : YES 00:01:40.977 Library fdt found: NO 00:01:40.977 Library execinfo found: NO 00:01:40.977 Has header "execinfo.h" : YES 00:01:40.977 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:40.977 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:40.977 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:40.977 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:40.977 Run-time dependency openssl found: YES 3.0.9 00:01:40.977 Run-time dependency libpcap found: YES 1.10.4 00:01:40.977 Has header "pcap.h" with dependency libpcap: YES 00:01:40.977 Compiler for C supports arguments -Wcast-qual: YES 00:01:40.977 Compiler for C supports arguments -Wdeprecated: YES 00:01:40.977 Compiler for C supports arguments -Wformat: YES 00:01:40.977 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:40.977 Compiler for C supports arguments -Wformat-security: NO 00:01:40.977 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:40.977 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:40.977 Compiler for C supports arguments -Wnested-externs: YES 00:01:40.977 Compiler for C supports arguments -Wold-style-definition: YES 00:01:40.977 Compiler for C supports arguments -Wpointer-arith: YES 00:01:40.977 Compiler for C supports arguments -Wsign-compare: YES 00:01:40.977 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:40.977 Compiler for C supports arguments -Wundef: YES 00:01:40.977 Compiler for C supports arguments -Wwrite-strings: YES 00:01:40.977 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:40.977 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:40.977 Compiler for C 
supports arguments -Wno-missing-field-initializers: YES 00:01:40.977 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:40.977 Program objdump found: YES (/usr/bin/objdump) 00:01:40.977 Compiler for C supports arguments -mavx512f: YES 00:01:40.977 Checking if "AVX512 checking" compiles: YES 00:01:40.977 Fetching value of define "__SSE4_2__" : 1 00:01:40.977 Fetching value of define "__AES__" : 1 00:01:40.977 Fetching value of define "__AVX__" : 1 00:01:40.977 Fetching value of define "__AVX2__" : (undefined) 00:01:40.977 Fetching value of define "__AVX512BW__" : (undefined) 00:01:40.977 Fetching value of define "__AVX512CD__" : (undefined) 00:01:40.977 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:40.977 Fetching value of define "__AVX512F__" : (undefined) 00:01:40.977 Fetching value of define "__AVX512VL__" : (undefined) 00:01:40.977 Fetching value of define "__PCLMUL__" : 1 00:01:40.977 Fetching value of define "__RDRND__" : 1 00:01:40.977 Fetching value of define "__RDSEED__" : (undefined) 00:01:40.977 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:40.977 Fetching value of define "__znver1__" : (undefined) 00:01:40.977 Fetching value of define "__znver2__" : (undefined) 00:01:40.977 Fetching value of define "__znver3__" : (undefined) 00:01:40.977 Fetching value of define "__znver4__" : (undefined) 00:01:40.977 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:40.977 Message: lib/log: Defining dependency "log" 00:01:40.977 Message: lib/kvargs: Defining dependency "kvargs" 00:01:40.977 Message: lib/telemetry: Defining dependency "telemetry" 00:01:40.977 Checking for function "getentropy" : NO 00:01:40.977 Message: lib/eal: Defining dependency "eal" 00:01:40.977 Message: lib/ring: Defining dependency "ring" 00:01:40.977 Message: lib/rcu: Defining dependency "rcu" 00:01:40.977 Message: lib/mempool: Defining dependency "mempool" 00:01:40.977 Message: lib/mbuf: Defining dependency "mbuf" 00:01:40.977 
Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:40.977 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:40.977 Compiler for C supports arguments -mpclmul: YES 00:01:40.977 Compiler for C supports arguments -maes: YES 00:01:40.978 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:40.978 Compiler for C supports arguments -mavx512bw: YES 00:01:40.978 Compiler for C supports arguments -mavx512dq: YES 00:01:40.978 Compiler for C supports arguments -mavx512vl: YES 00:01:40.978 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:40.978 Compiler for C supports arguments -mavx2: YES 00:01:40.978 Compiler for C supports arguments -mavx: YES 00:01:40.978 Message: lib/net: Defining dependency "net" 00:01:40.978 Message: lib/meter: Defining dependency "meter" 00:01:40.978 Message: lib/ethdev: Defining dependency "ethdev" 00:01:40.978 Message: lib/pci: Defining dependency "pci" 00:01:40.978 Message: lib/cmdline: Defining dependency "cmdline" 00:01:40.978 Message: lib/metrics: Defining dependency "metrics" 00:01:40.978 Message: lib/hash: Defining dependency "hash" 00:01:40.978 Message: lib/timer: Defining dependency "timer" 00:01:40.978 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:40.978 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:01:40.978 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:01:40.978 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:01:40.978 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:01:40.978 Message: lib/acl: Defining dependency "acl" 00:01:40.978 Message: lib/bbdev: Defining dependency "bbdev" 00:01:40.978 Message: lib/bitratestats: Defining dependency "bitratestats" 00:01:40.978 Run-time dependency libelf found: YES 0.190 00:01:40.978 Message: lib/bpf: Defining dependency "bpf" 00:01:40.978 Message: lib/cfgfile: Defining dependency "cfgfile" 00:01:40.978 Message: lib/compressdev: Defining 
dependency "compressdev" 00:01:40.978 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:40.978 Message: lib/distributor: Defining dependency "distributor" 00:01:40.978 Message: lib/dmadev: Defining dependency "dmadev" 00:01:40.978 Message: lib/efd: Defining dependency "efd" 00:01:40.978 Message: lib/eventdev: Defining dependency "eventdev" 00:01:40.978 Message: lib/dispatcher: Defining dependency "dispatcher" 00:01:40.978 Message: lib/gpudev: Defining dependency "gpudev" 00:01:40.978 Message: lib/gro: Defining dependency "gro" 00:01:40.978 Message: lib/gso: Defining dependency "gso" 00:01:40.978 Message: lib/ip_frag: Defining dependency "ip_frag" 00:01:40.978 Message: lib/jobstats: Defining dependency "jobstats" 00:01:40.978 Message: lib/latencystats: Defining dependency "latencystats" 00:01:40.978 Message: lib/lpm: Defining dependency "lpm" 00:01:40.978 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:40.978 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:40.978 Fetching value of define "__AVX512IFMA__" : (undefined) 00:01:40.978 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:01:40.978 Message: lib/member: Defining dependency "member" 00:01:40.978 Message: lib/pcapng: Defining dependency "pcapng" 00:01:40.978 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:40.978 Message: lib/power: Defining dependency "power" 00:01:40.978 Message: lib/rawdev: Defining dependency "rawdev" 00:01:40.978 Message: lib/regexdev: Defining dependency "regexdev" 00:01:40.978 Message: lib/mldev: Defining dependency "mldev" 00:01:40.978 Message: lib/rib: Defining dependency "rib" 00:01:40.978 Message: lib/reorder: Defining dependency "reorder" 00:01:40.978 Message: lib/sched: Defining dependency "sched" 00:01:40.978 Message: lib/security: Defining dependency "security" 00:01:40.978 Message: lib/stack: Defining dependency "stack" 00:01:40.978 Has header "linux/userfaultfd.h" : YES 00:01:40.978 Has 
header "linux/vduse.h" : YES 00:01:40.978 Message: lib/vhost: Defining dependency "vhost" 00:01:40.978 Message: lib/ipsec: Defining dependency "ipsec" 00:01:40.978 Message: lib/pdcp: Defining dependency "pdcp" 00:01:40.978 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:40.978 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:40.978 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:01:40.978 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:40.978 Message: lib/fib: Defining dependency "fib" 00:01:40.978 Message: lib/port: Defining dependency "port" 00:01:40.978 Message: lib/pdump: Defining dependency "pdump" 00:01:40.978 Message: lib/table: Defining dependency "table" 00:01:40.978 Message: lib/pipeline: Defining dependency "pipeline" 00:01:40.978 Message: lib/graph: Defining dependency "graph" 00:01:40.978 Message: lib/node: Defining dependency "node" 00:01:42.364 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:42.364 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:42.364 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:42.364 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:42.364 Compiler for C supports arguments -Wno-sign-compare: YES 00:01:42.364 Compiler for C supports arguments -Wno-unused-value: YES 00:01:42.364 Compiler for C supports arguments -Wno-format: YES 00:01:42.364 Compiler for C supports arguments -Wno-format-security: YES 00:01:42.364 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:01:42.364 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:42.364 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:01:42.364 Compiler for C supports arguments -Wno-unused-parameter: YES 00:01:42.364 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:42.364 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:42.364 Compiler for C supports 
arguments -mavx512bw: YES (cached) 00:01:42.364 Compiler for C supports arguments -march=skylake-avx512: YES 00:01:42.364 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:01:42.364 Has header "sys/epoll.h" : YES 00:01:42.364 Program doxygen found: YES (/usr/bin/doxygen) 00:01:42.364 Configuring doxy-api-html.conf using configuration 00:01:42.364 Configuring doxy-api-man.conf using configuration 00:01:42.364 Program mandb found: YES (/usr/bin/mandb) 00:01:42.364 Program sphinx-build found: NO 00:01:42.364 Configuring rte_build_config.h using configuration 00:01:42.364 Message: 00:01:42.364 ================= 00:01:42.364 Applications Enabled 00:01:42.364 ================= 00:01:42.364 00:01:42.364 apps: 00:01:42.364 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:01:42.364 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:01:42.364 test-pmd, test-regex, test-sad, test-security-perf, 00:01:42.364 00:01:42.364 Message: 00:01:42.364 ================= 00:01:42.364 Libraries Enabled 00:01:42.364 ================= 00:01:42.364 00:01:42.364 libs: 00:01:42.364 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:42.364 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:01:42.364 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:01:42.364 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:01:42.364 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:01:42.364 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:01:42.364 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:01:42.364 00:01:42.364 00:01:42.364 Message: 00:01:42.364 =============== 00:01:42.364 Drivers Enabled 00:01:42.364 =============== 00:01:42.364 00:01:42.364 common: 00:01:42.364 00:01:42.364 bus: 00:01:42.364 pci, vdev, 00:01:42.364 mempool: 00:01:42.364 ring, 00:01:42.364 dma: 00:01:42.364 
00:01:42.364 net: 00:01:42.364 i40e, 00:01:42.364 raw: 00:01:42.364 00:01:42.364 crypto: 00:01:42.364 00:01:42.364 compress: 00:01:42.364 00:01:42.364 regex: 00:01:42.364 00:01:42.364 ml: 00:01:42.364 00:01:42.364 vdpa: 00:01:42.364 00:01:42.364 event: 00:01:42.364 00:01:42.364 baseband: 00:01:42.364 00:01:42.364 gpu: 00:01:42.364 00:01:42.364 00:01:42.364 Message: 00:01:42.364 ================= 00:01:42.364 Content Skipped 00:01:42.364 ================= 00:01:42.364 00:01:42.364 apps: 00:01:42.364 00:01:42.364 libs: 00:01:42.364 00:01:42.364 drivers: 00:01:42.364 common/cpt: not in enabled drivers build config 00:01:42.364 common/dpaax: not in enabled drivers build config 00:01:42.364 common/iavf: not in enabled drivers build config 00:01:42.364 common/idpf: not in enabled drivers build config 00:01:42.364 common/mvep: not in enabled drivers build config 00:01:42.364 common/octeontx: not in enabled drivers build config 00:01:42.364 bus/auxiliary: not in enabled drivers build config 00:01:42.364 bus/cdx: not in enabled drivers build config 00:01:42.364 bus/dpaa: not in enabled drivers build config 00:01:42.364 bus/fslmc: not in enabled drivers build config 00:01:42.364 bus/ifpga: not in enabled drivers build config 00:01:42.364 bus/platform: not in enabled drivers build config 00:01:42.364 bus/vmbus: not in enabled drivers build config 00:01:42.364 common/cnxk: not in enabled drivers build config 00:01:42.364 common/mlx5: not in enabled drivers build config 00:01:42.364 common/nfp: not in enabled drivers build config 00:01:42.364 common/qat: not in enabled drivers build config 00:01:42.364 common/sfc_efx: not in enabled drivers build config 00:01:42.364 mempool/bucket: not in enabled drivers build config 00:01:42.364 mempool/cnxk: not in enabled drivers build config 00:01:42.364 mempool/dpaa: not in enabled drivers build config 00:01:42.364 mempool/dpaa2: not in enabled drivers build config 00:01:42.364 mempool/octeontx: not in enabled drivers build config 
00:01:42.364 mempool/stack: not in enabled drivers build config 00:01:42.364 dma/cnxk: not in enabled drivers build config 00:01:42.364 dma/dpaa: not in enabled drivers build config 00:01:42.364 dma/dpaa2: not in enabled drivers build config 00:01:42.364 dma/hisilicon: not in enabled drivers build config 00:01:42.364 dma/idxd: not in enabled drivers build config 00:01:42.364 dma/ioat: not in enabled drivers build config 00:01:42.364 dma/skeleton: not in enabled drivers build config 00:01:42.364 net/af_packet: not in enabled drivers build config 00:01:42.364 net/af_xdp: not in enabled drivers build config 00:01:42.364 net/ark: not in enabled drivers build config 00:01:42.364 net/atlantic: not in enabled drivers build config 00:01:42.364 net/avp: not in enabled drivers build config 00:01:42.364 net/axgbe: not in enabled drivers build config 00:01:42.364 net/bnx2x: not in enabled drivers build config 00:01:42.364 net/bnxt: not in enabled drivers build config 00:01:42.364 net/bonding: not in enabled drivers build config 00:01:42.364 net/cnxk: not in enabled drivers build config 00:01:42.365 net/cpfl: not in enabled drivers build config 00:01:42.365 net/cxgbe: not in enabled drivers build config 00:01:42.365 net/dpaa: not in enabled drivers build config 00:01:42.365 net/dpaa2: not in enabled drivers build config 00:01:42.365 net/e1000: not in enabled drivers build config 00:01:42.365 net/ena: not in enabled drivers build config 00:01:42.365 net/enetc: not in enabled drivers build config 00:01:42.365 net/enetfec: not in enabled drivers build config 00:01:42.365 net/enic: not in enabled drivers build config 00:01:42.365 net/failsafe: not in enabled drivers build config 00:01:42.365 net/fm10k: not in enabled drivers build config 00:01:42.365 net/gve: not in enabled drivers build config 00:01:42.365 net/hinic: not in enabled drivers build config 00:01:42.365 net/hns3: not in enabled drivers build config 00:01:42.365 net/iavf: not in enabled drivers build config 00:01:42.365 
net/ice: not in enabled drivers build config 00:01:42.365 net/idpf: not in enabled drivers build config 00:01:42.365 net/igc: not in enabled drivers build config 00:01:42.365 net/ionic: not in enabled drivers build config 00:01:42.365 net/ipn3ke: not in enabled drivers build config 00:01:42.365 net/ixgbe: not in enabled drivers build config 00:01:42.365 net/mana: not in enabled drivers build config 00:01:42.365 net/memif: not in enabled drivers build config 00:01:42.365 net/mlx4: not in enabled drivers build config 00:01:42.365 net/mlx5: not in enabled drivers build config 00:01:42.365 net/mvneta: not in enabled drivers build config 00:01:42.365 net/mvpp2: not in enabled drivers build config 00:01:42.365 net/netvsc: not in enabled drivers build config 00:01:42.365 net/nfb: not in enabled drivers build config 00:01:42.365 net/nfp: not in enabled drivers build config 00:01:42.365 net/ngbe: not in enabled drivers build config 00:01:42.365 net/null: not in enabled drivers build config 00:01:42.365 net/octeontx: not in enabled drivers build config 00:01:42.365 net/octeon_ep: not in enabled drivers build config 00:01:42.365 net/pcap: not in enabled drivers build config 00:01:42.365 net/pfe: not in enabled drivers build config 00:01:42.365 net/qede: not in enabled drivers build config 00:01:42.365 net/ring: not in enabled drivers build config 00:01:42.365 net/sfc: not in enabled drivers build config 00:01:42.365 net/softnic: not in enabled drivers build config 00:01:42.365 net/tap: not in enabled drivers build config 00:01:42.365 net/thunderx: not in enabled drivers build config 00:01:42.365 net/txgbe: not in enabled drivers build config 00:01:42.365 net/vdev_netvsc: not in enabled drivers build config 00:01:42.365 net/vhost: not in enabled drivers build config 00:01:42.365 net/virtio: not in enabled drivers build config 00:01:42.365 net/vmxnet3: not in enabled drivers build config 00:01:42.365 raw/cnxk_bphy: not in enabled drivers build config 00:01:42.365 raw/cnxk_gpio: 
not in enabled drivers build config 00:01:42.365 raw/dpaa2_cmdif: not in enabled drivers build config 00:01:42.365 raw/ifpga: not in enabled drivers build config 00:01:42.365 raw/ntb: not in enabled drivers build config 00:01:42.365 raw/skeleton: not in enabled drivers build config 00:01:42.365 crypto/armv8: not in enabled drivers build config 00:01:42.365 crypto/bcmfs: not in enabled drivers build config 00:01:42.365 crypto/caam_jr: not in enabled drivers build config 00:01:42.365 crypto/ccp: not in enabled drivers build config 00:01:42.365 crypto/cnxk: not in enabled drivers build config 00:01:42.365 crypto/dpaa_sec: not in enabled drivers build config 00:01:42.365 crypto/dpaa2_sec: not in enabled drivers build config 00:01:42.365 crypto/ipsec_mb: not in enabled drivers build config 00:01:42.365 crypto/mlx5: not in enabled drivers build config 00:01:42.365 crypto/mvsam: not in enabled drivers build config 00:01:42.365 crypto/nitrox: not in enabled drivers build config 00:01:42.365 crypto/null: not in enabled drivers build config 00:01:42.365 crypto/octeontx: not in enabled drivers build config 00:01:42.365 crypto/openssl: not in enabled drivers build config 00:01:42.365 crypto/scheduler: not in enabled drivers build config 00:01:42.365 crypto/uadk: not in enabled drivers build config 00:01:42.365 crypto/virtio: not in enabled drivers build config 00:01:42.365 compress/isal: not in enabled drivers build config 00:01:42.365 compress/mlx5: not in enabled drivers build config 00:01:42.365 compress/octeontx: not in enabled drivers build config 00:01:42.365 compress/zlib: not in enabled drivers build config 00:01:42.365 regex/mlx5: not in enabled drivers build config 00:01:42.365 regex/cn9k: not in enabled drivers build config 00:01:42.365 ml/cnxk: not in enabled drivers build config 00:01:42.365 vdpa/ifc: not in enabled drivers build config 00:01:42.365 vdpa/mlx5: not in enabled drivers build config 00:01:42.365 vdpa/nfp: not in enabled drivers build config 
00:01:42.365 vdpa/sfc: not in enabled drivers build config 00:01:42.365 event/cnxk: not in enabled drivers build config 00:01:42.365 event/dlb2: not in enabled drivers build config 00:01:42.365 event/dpaa: not in enabled drivers build config 00:01:42.365 event/dpaa2: not in enabled drivers build config 00:01:42.365 event/dsw: not in enabled drivers build config 00:01:42.365 event/opdl: not in enabled drivers build config 00:01:42.365 event/skeleton: not in enabled drivers build config 00:01:42.365 event/sw: not in enabled drivers build config 00:01:42.365 event/octeontx: not in enabled drivers build config 00:01:42.365 baseband/acc: not in enabled drivers build config 00:01:42.365 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:01:42.365 baseband/fpga_lte_fec: not in enabled drivers build config 00:01:42.365 baseband/la12xx: not in enabled drivers build config 00:01:42.365 baseband/null: not in enabled drivers build config 00:01:42.365 baseband/turbo_sw: not in enabled drivers build config 00:01:42.365 gpu/cuda: not in enabled drivers build config 00:01:42.365 00:01:42.365 00:01:42.365 Build targets in project: 220 00:01:42.365 00:01:42.365 DPDK 23.11.0 00:01:42.365 00:01:42.365 User defined options 00:01:42.365 libdir : lib 00:01:42.365 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:42.365 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:01:42.365 c_link_args : 00:01:42.365 enable_docs : false 00:01:42.365 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:42.365 enable_kmods : false 00:01:42.365 machine : native 00:01:42.365 tests : false 00:01:42.365 00:01:42.365 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:42.365 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
00:01:42.365 01:22:55 -- common/autobuild_common.sh@189 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 00:01:42.365 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:01:42.365 [1/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:42.365 [2/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:42.365 [3/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:42.365 [4/710] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:42.365 [5/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:42.365 [6/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:42.365 [7/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:42.365 [8/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:42.365 [9/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:42.365 [10/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:42.365 [11/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:42.365 [12/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:42.365 [13/710] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:42.365 [14/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:42.365 [15/710] Linking static target lib/librte_kvargs.a 00:01:42.626 [16/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:42.626 [17/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:42.626 [18/710] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:42.626 [19/710] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:42.626 [20/710] Linking static target lib/librte_log.a 00:01:42.626 [21/710] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:42.889 [22/710] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.469 [23/710] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.469 [24/710] Linking target lib/librte_log.so.24.0 00:01:43.469 [25/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:43.469 [26/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:43.469 [27/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:43.469 [28/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:43.469 [29/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:43.469 [30/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:43.469 [31/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:43.469 [32/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:43.469 [33/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:43.469 [34/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:43.469 [35/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:43.469 [36/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:43.469 [37/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:43.469 [38/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:43.469 [39/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:43.469 [40/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:43.469 [41/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:43.469 [42/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:43.469 [43/710] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:43.469 [44/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:43.469 [45/710] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:01:43.469 [46/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:43.469 [47/710] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:43.730 [48/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:43.730 [49/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:43.730 [50/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:43.730 [51/710] Linking target lib/librte_kvargs.so.24.0 00:01:43.730 [52/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:43.730 [53/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:43.730 [54/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:43.730 [55/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:43.730 [56/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:43.730 [57/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:43.730 [58/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:43.730 [59/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:43.730 [60/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:43.730 [61/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:43.730 [62/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:43.730 [63/710] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:43.730 [64/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:43.990 [65/710] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:43.990 [66/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:44.252 [67/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:44.252 [68/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:44.252 [69/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:44.252 [70/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:44.252 [71/710] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:44.252 [72/710] Linking static target lib/librte_pci.a 00:01:44.252 [73/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:44.252 [74/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:44.252 [75/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:44.515 [76/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:44.515 [77/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:44.515 [78/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:44.515 [79/710] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.515 [80/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:44.515 [81/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:44.515 [82/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:44.515 [83/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:44.515 [84/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:44.515 [85/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:44.515 [86/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:44.515 [87/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:44.776 
[88/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:44.776 [89/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:44.776 [90/710] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:44.776 [91/710] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:44.776 [92/710] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:44.776 [93/710] Linking static target lib/librte_ring.a 00:01:44.776 [94/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:44.776 [95/710] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:44.776 [96/710] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:44.776 [97/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:44.776 [98/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:44.776 [99/710] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:44.776 [100/710] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:44.776 [101/710] Linking static target lib/librte_meter.a 00:01:45.044 [102/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:45.044 [103/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:45.044 [104/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:45.044 [105/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:45.044 [106/710] Linking static target lib/librte_telemetry.a 00:01:45.044 [107/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:45.044 [108/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:45.044 [109/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:45.044 [110/710] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:45.044 [111/710] Compiling C object 
lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:45.044 [112/710] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:45.044 [113/710] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.044 [114/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:45.044 [115/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:45.308 [116/710] Linking static target lib/librte_eal.a 00:01:45.308 [117/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:45.308 [118/710] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.308 [119/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:45.308 [120/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:45.308 [121/710] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:45.308 [122/710] Linking static target lib/librte_net.a 00:01:45.308 [123/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:45.308 [124/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:45.308 [125/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:45.569 [126/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:45.569 [127/710] Linking static target lib/librte_cmdline.a 00:01:45.569 [128/710] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.569 [129/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:45.569 [130/710] Linking static target lib/librte_mempool.a 00:01:45.569 [131/710] Linking target lib/librte_telemetry.so.24.0 00:01:45.569 [132/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:45.569 [133/710] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:01:45.832 [134/710] Linking static target lib/librte_cfgfile.a 
00:01:45.832 [135/710] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.832 [136/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:01:45.832 [137/710] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:45.832 [138/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:01:45.832 [139/710] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:45.832 [140/710] Linking static target lib/librte_metrics.a 00:01:45.832 [141/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:45.832 [142/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:45.832 [143/710] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:01:46.092 [144/710] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:01:46.092 [145/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:01:46.092 [146/710] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:01:46.092 [147/710] Linking static target lib/librte_bitratestats.a 00:01:46.092 [148/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:01:46.093 [149/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:01:46.093 [150/710] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:46.357 [151/710] Linking static target lib/librte_rcu.a 00:01:46.357 [152/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:01:46.357 [153/710] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.357 [154/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:46.357 [155/710] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:01:46.357 [156/710] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.357 [157/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:01:46.357 [158/710] 
Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:46.357 [159/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:46.357 [160/710] Linking static target lib/librte_timer.a 00:01:46.620 [161/710] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.620 [162/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:01:46.620 [163/710] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.620 [164/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:46.620 [165/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:01:46.620 [166/710] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.620 [167/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:01:46.881 [168/710] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.881 [169/710] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:01:46.881 [170/710] Linking static target lib/librte_bbdev.a 00:01:46.881 [171/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:46.881 [172/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:01:46.881 [173/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:46.881 [174/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:46.881 [175/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:46.881 [176/710] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.144 [177/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:47.144 [178/710] Linking static target lib/librte_compressdev.a 00:01:47.144 [179/710] Compiling C object 
lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:01:47.144 [180/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:01:47.408 [181/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:01:47.408 [182/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:01:47.408 [183/710] Linking static target lib/librte_distributor.a 00:01:47.408 [184/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:01:47.408 [185/710] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:47.408 [186/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:01:47.670 [187/710] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.670 [188/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:01:47.670 [189/710] Linking static target lib/librte_bpf.a 00:01:47.670 [190/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:47.670 [191/710] Linking static target lib/librte_dmadev.a 00:01:47.933 [192/710] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.933 [193/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:01:47.933 [194/710] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.933 [195/710] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:01:47.933 [196/710] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:01:47.933 [197/710] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:01:47.933 [198/710] Linking static target lib/librte_dispatcher.a 00:01:47.933 [199/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:01:47.933 [200/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:01:47.933 [201/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:01:47.933 
[202/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:01:48.195 [203/710] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:48.195 [204/710] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:01:48.195 [205/710] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:01:48.195 [206/710] Linking static target lib/librte_gpudev.a 00:01:48.195 [207/710] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:48.195 [208/710] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:01:48.195 [209/710] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:01:48.195 [210/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:01:48.195 [211/710] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.195 [212/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:01:48.195 [213/710] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:48.195 [214/710] Linking static target lib/librte_gro.a 00:01:48.195 [215/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:01:48.458 [216/710] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.458 [217/710] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:01:48.458 [218/710] Linking static target lib/librte_jobstats.a 00:01:48.458 [219/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:01:48.458 [220/710] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:01:48.724 [221/710] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.724 [222/710] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.724 [223/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:01:48.988 [224/710] Compiling C object 
lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:01:48.988 [225/710] Linking static target lib/librte_latencystats.a 00:01:48.988 [226/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:01:48.988 [227/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:01:48.988 [228/710] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.988 [229/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:01:48.988 [230/710] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:01:48.988 [231/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:01:48.988 [232/710] Linking static target lib/member/libsketch_avx512_tmp.a 00:01:48.988 [233/710] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:01:49.252 [234/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:01:49.252 [235/710] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:01:49.252 [236/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:01:49.252 [237/710] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.252 [238/710] Linking static target lib/librte_ip_frag.a 00:01:49.252 [239/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:01:49.252 [240/710] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:49.252 [241/710] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:49.514 [242/710] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.514 [243/710] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:49.514 [244/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:01:49.514 [245/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 
00:01:49.776 [246/710] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.776 [247/710] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:01:49.776 [248/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:01:49.776 [249/710] Linking static target lib/librte_gso.a 00:01:49.776 [250/710] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:01:49.776 [251/710] Linking static target lib/librte_regexdev.a 00:01:49.777 [252/710] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:49.777 [253/710] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:01:49.777 [254/710] Linking static target lib/librte_rawdev.a 00:01:49.777 [255/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:01:49.777 [256/710] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:01:49.777 [257/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:01:50.037 [258/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:01:50.037 [259/710] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.037 [260/710] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:01:50.037 [261/710] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:01:50.037 [262/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:01:50.037 [263/710] Linking static target lib/librte_mldev.a 00:01:50.302 [264/710] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:01:50.302 [265/710] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:01:50.302 [266/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:01:50.302 [267/710] Linking static target lib/librte_pcapng.a 00:01:50.302 [268/710] Linking static target lib/librte_efd.a 00:01:50.302 [269/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 
00:01:50.302 [270/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:01:50.302 [271/710] Linking static target lib/librte_stack.a 00:01:50.568 [272/710] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:50.568 [273/710] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:50.568 [274/710] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:01:50.568 [275/710] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:01:50.568 [276/710] Linking static target lib/acl/libavx2_tmp.a 00:01:50.568 [277/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:01:50.568 [278/710] Linking static target lib/librte_lpm.a 00:01:50.568 [279/710] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:50.568 [280/710] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.568 [281/710] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.568 [282/710] Linking static target lib/librte_hash.a 00:01:50.568 [283/710] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.568 [284/710] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:50.568 [285/710] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:50.568 [286/710] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.568 [287/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:01:50.851 [288/710] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:50.851 [289/710] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:50.851 [290/710] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:50.851 [291/710] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:01:50.851 [292/710] Linking static target lib/librte_power.a 
00:01:50.851 [293/710] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:50.851 [294/710] Linking static target lib/librte_security.a 00:01:50.851 [295/710] Linking static target lib/acl/libavx512_tmp.a 00:01:50.851 [296/710] Linking static target lib/librte_reorder.a 00:01:50.851 [297/710] Linking static target lib/librte_acl.a 00:01:50.851 [298/710] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.851 [299/710] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:51.117 [300/710] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.117 [301/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:01:51.117 [302/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:51.117 [303/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:51.379 [304/710] Linking static target lib/librte_mbuf.a 00:01:51.379 [305/710] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:51.379 [306/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:01:51.379 [307/710] Linking static target lib/librte_rib.a 00:01:51.379 [308/710] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:51.379 [309/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:01:51.379 [310/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:01:51.379 [311/710] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.379 [312/710] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.379 [313/710] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.379 [314/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:01:51.643 [315/710] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.643 [316/710] Compiling 
C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:01:51.643 [317/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:01:51.643 [318/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:01:51.643 [319/710] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:01:51.643 [320/710] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:01:51.905 [321/710] Linking static target lib/fib/libtrie_avx512_tmp.a 00:01:51.905 [322/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:01:51.905 [323/710] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:01:51.905 [324/710] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:01:51.905 [325/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:01:51.905 [326/710] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.905 [327/710] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.905 [328/710] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.168 [329/710] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.168 [330/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:01:52.168 [331/710] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:01:52.431 [332/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:52.431 [333/710] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:01:52.431 [334/710] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:01:52.431 [335/710] Linking static target lib/librte_member.a 00:01:52.697 [336/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:01:52.697 [337/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:01:52.697 [338/710] Compiling C object 
lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:52.697 [339/710] Linking static target lib/librte_eventdev.a 00:01:52.697 [340/710] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:01:52.697 [341/710] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:01:52.957 [342/710] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:01:52.957 [343/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:52.957 [344/710] Linking static target lib/librte_cryptodev.a 00:01:52.957 [345/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:01:52.957 [346/710] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:01:52.957 [347/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:01:52.957 [348/710] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:01:52.957 [349/710] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:01:52.957 [350/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:01:52.957 [351/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:01:52.957 [352/710] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.223 [353/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:01:53.223 [354/710] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:01:53.223 [355/710] Linking static target lib/librte_sched.a 00:01:53.223 [356/710] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:01:53.223 [357/710] Linking static target lib/librte_fib.a 00:01:53.223 [358/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:53.223 [359/710] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:01:53.223 [360/710] Linking static target lib/librte_ethdev.a 00:01:53.223 [361/710] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:01:53.223 [362/710] Compiling C 
object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:01:53.487 [363/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:01:53.487 [364/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:01:53.488 [365/710] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:01:53.488 [366/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:01:53.488 [367/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:53.748 [368/710] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.748 [369/710] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:01:53.748 [370/710] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.748 [371/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:01:53.748 [372/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:01:53.748 [373/710] Compiling C object lib/librte_node.a.p/node_null.c.o 00:01:54.012 [374/710] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:01:54.012 [375/710] Linking static target lib/librte_pdump.a 00:01:54.012 [376/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:01:54.012 [377/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:01:54.276 [378/710] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:01:54.276 [379/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:01:54.276 [380/710] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:01:54.276 [381/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:54.276 [382/710] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:01:54.276 [383/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:01:54.276 [384/710] Compiling C object 
lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:01:54.276 [385/710] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:01:54.276 [386/710] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:01:54.276 [387/710] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:01:54.276 [388/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:54.276 [389/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:01:54.544 [390/710] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.544 [391/710] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:01:54.544 [392/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:01:54.544 [393/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:01:54.803 [394/710] Linking static target lib/librte_ipsec.a 00:01:54.803 [395/710] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.803 [396/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:01:54.803 [397/710] Linking static target lib/librte_table.a 00:01:54.803 [398/710] Compiling C object lib/librte_node.a.p/node_log.c.o 00:01:54.803 [399/710] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:01:55.074 [400/710] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:01:55.074 [401/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:01:55.074 [402/710] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.333 [403/710] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:01:55.333 [404/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:55.597 [405/710] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:01:55.597 [406/710] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 
00:01:55.597 [407/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:01:55.597 [408/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:55.597 [409/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:55.597 [410/710] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.597 [411/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:55.597 [412/710] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:55.597 [413/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:55.863 [414/710] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:01:55.863 [415/710] Linking target lib/librte_eal.so.24.0 00:01:55.863 [416/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:01:55.863 [417/710] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.863 [418/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:01:55.863 [419/710] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.863 [420/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:55.863 [421/710] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:56.125 [422/710] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:01:56.125 [423/710] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:56.125 [424/710] Linking target lib/librte_ring.so.24.0 00:01:56.125 [425/710] Linking target lib/librte_meter.so.24.0 00:01:56.125 [426/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:01:56.387 [427/710] Linking target lib/librte_pci.so.24.0 00:01:56.387 [428/710] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:01:56.387 [429/710] Compiling C object 
lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:01:56.387 [430/710] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:01:56.387 [431/710] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:01:56.387 [432/710] Linking target lib/librte_timer.so.24.0 00:01:56.387 [433/710] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:56.387 [434/710] Linking target lib/librte_acl.so.24.0 00:01:56.387 [435/710] Linking target lib/librte_rcu.so.24.0 00:01:56.387 [436/710] Linking target lib/librte_cfgfile.so.24.0 00:01:56.387 [437/710] Linking target lib/librte_mempool.so.24.0 00:01:56.387 [438/710] Linking target lib/librte_dmadev.so.24.0 00:01:56.387 [439/710] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:01:56.650 [440/710] Linking target lib/librte_jobstats.so.24.0 00:01:56.650 [441/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:01:56.650 [442/710] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:01:56.650 [443/710] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:01:56.650 [444/710] Linking static target lib/librte_port.a 00:01:56.650 [445/710] Linking static target lib/librte_graph.a 00:01:56.650 [446/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:01:56.650 [447/710] Linking target lib/librte_rawdev.so.24.0 00:01:56.650 [448/710] Linking target lib/librte_stack.so.24.0 00:01:56.650 [449/710] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:01:56.650 [450/710] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:01:56.650 [451/710] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:01:56.650 [452/710] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:56.650 [453/710] Generating symbol file 
lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:01:56.650 [454/710] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:56.650 [455/710] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:56.650 [456/710] Linking static target drivers/librte_bus_pci.a 00:01:56.650 [457/710] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:56.650 [458/710] Linking static target drivers/librte_bus_vdev.a 00:01:56.650 [459/710] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:01:56.650 [460/710] Linking target lib/librte_mbuf.so.24.0 00:01:56.650 [461/710] Linking target lib/librte_rib.so.24.0 00:01:56.917 [462/710] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:56.917 [463/710] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:56.917 [464/710] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:01:56.917 [465/710] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:01:56.917 [466/710] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:01:57.186 [467/710] Linking target lib/librte_fib.so.24.0 00:01:57.186 [468/710] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:01:57.187 [469/710] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.187 [470/710] Linking target lib/librte_net.so.24.0 00:01:57.187 [471/710] Linking target lib/librte_bbdev.so.24.0 00:01:57.187 [472/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:01:57.187 [473/710] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:01:57.187 [474/710] Linking target lib/librte_compressdev.so.24.0 00:01:57.187 [475/710] Linking target lib/librte_gpudev.so.24.0 00:01:57.187 [476/710] Linking target lib/librte_cryptodev.so.24.0 
00:01:57.187 [477/710] Linking target lib/librte_distributor.so.24.0
00:01:57.187 [478/710] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o
00:01:57.187 [479/710] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:01:57.187 [480/710] Linking target lib/librte_mldev.so.24.0
00:01:57.187 [481/710] Linking target lib/librte_regexdev.so.24.0
00:01:57.187 [482/710] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:01:57.187 [483/710] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:01:57.187 [484/710] Linking target lib/librte_reorder.so.24.0
00:01:57.187 [485/710] Linking target lib/librte_sched.so.24.0
00:01:57.187 [486/710] Linking static target drivers/librte_mempool_ring.a
00:01:57.187 [487/710] Linking target drivers/librte_bus_vdev.so.24.0
00:01:57.449 [488/710] Linking target drivers/librte_mempool_ring.so.24.0
00:01:57.449 [489/710] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols
00:01:57.449 [490/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o
00:01:57.449 [491/710] Compiling C object app/dpdk-graph.p/graph_graph.c.o
00:01:57.449 [492/710] Linking target lib/librte_hash.so.24.0
00:01:57.449 [493/710] Linking target lib/librte_cmdline.so.24.0
00:01:57.449 [494/710] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols
00:01:57.449 [495/710] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output)
00:01:57.449 [496/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o
00:01:57.449 [497/710] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output)
00:01:57.449 [498/710] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols
00:01:57.449 [499/710] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols
00:01:57.449 [500/710] Linking target lib/librte_security.so.24.0
00:01:57.449 [501/710] Linking target drivers/librte_bus_pci.so.24.0
00:01:57.449 [502/710] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols
00:01:57.714 [503/710] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o
00:01:57.714 [504/710] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o
00:01:57.714 [505/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o
00:01:57.714 [506/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o
00:01:57.714 [507/710] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output)
00:01:57.714 [508/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o
00:01:57.714 [509/710] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols
00:01:57.714 [510/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o
00:01:57.714 [511/710] Linking target lib/librte_efd.so.24.0
00:01:57.714 [512/710] Linking target lib/librte_lpm.so.24.0
00:01:57.714 [513/710] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols
00:01:57.714 [514/710] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols
00:01:57.714 [515/710] Linking target lib/librte_member.so.24.0
00:01:57.976 [516/710] Linking target lib/librte_ipsec.so.24.0
00:01:57.976 [517/710] Compiling C object app/dpdk-graph.p/graph_mempool.c.o
00:01:57.976 [518/710] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o
00:01:57.976 [519/710] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o
00:01:57.976 [520/710] Compiling C object app/dpdk-graph.p/graph_main.c.o
00:01:57.976 [521/710] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols
00:01:57.976 [522/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o
00:01:57.976 [523/710] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols
00:01:58.239 [524/710] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o
00:01:58.506 [525/710] Compiling C object app/dpdk-graph.p/graph_utils.c.o
00:01:58.506 [526/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o
00:01:58.506 [527/710] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o
00:01:58.771 [528/710] Compiling C object app/dpdk-graph.p/graph_neigh.c.o
00:01:58.771 [529/710] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o
00:01:58.771 [530/710] Linking static target drivers/net/i40e/libi40e_avx512_lib.a
00:01:58.771 [531/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o
00:01:58.771 [532/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o
00:01:59.032 [533/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o
00:01:59.032 [534/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o
00:01:59.032 [535/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o
00:01:59.032 [536/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o
00:01:59.304 [537/710] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o
00:01:59.304 [538/710] Linking static target drivers/net/i40e/libi40e_avx2_lib.a
00:01:59.304 [539/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o
00:01:59.304 [540/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o
00:01:59.304 [541/710] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o
00:01:59.304 [542/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o
00:01:59.565 [543/710] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o
00:01:59.565 [544/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o
00:01:59.831 [545/710] Compiling C object app/dpdk-pdump.p/pdump_main.c.o
00:01:59.831 [546/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o
00:01:59.831 [547/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o
00:01:59.831 [548/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o
00:02:00.092 [549/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o
00:02:00.092 [550/710] Linking static target drivers/net/i40e/base/libi40e_base.a
00:02:00.092 [551/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o
00:02:00.092 [552/710] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o
00:02:00.092 [553/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o
00:02:00.092 [554/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o
00:02:00.092 [555/710] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o
00:02:00.092 [556/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o
00:02:00.092 [557/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o
00:02:00.358 [558/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o
00:02:00.358 [559/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o
00:02:00.619 [560/710] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:00.619 [561/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o
00:02:00.619 [562/710] Linking target lib/librte_ethdev.so.24.0
00:02:00.883 [563/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o
00:02:00.883 [564/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o
00:02:00.883 [565/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o
00:02:00.883 [566/710] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols
00:02:00.883 [567/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o
00:02:01.150 [568/710] Linking target lib/librte_metrics.so.24.0
00:02:01.150 [569/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o
00:02:01.150 [570/710] Linking target lib/librte_bpf.so.24.0
00:02:01.150 [571/710] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o
00:02:01.150 [572/710] Linking target lib/librte_eventdev.so.24.0
00:02:01.150 [573/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o
00:02:01.150 [574/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o
00:02:01.150 [575/710] Linking target lib/librte_gro.so.24.0
00:02:01.150 [576/710] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols
00:02:01.414 [577/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o
00:02:01.414 [578/710] Linking target lib/librte_gso.so.24.0
00:02:01.414 [579/710] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols
00:02:01.414 [580/710] Linking target lib/librte_ip_frag.so.24.0
00:02:01.414 [581/710] Linking target lib/librte_pcapng.so.24.0
00:02:01.414 [582/710] Linking target lib/librte_bitratestats.so.24.0
00:02:01.414 [583/710] Linking target lib/librte_latencystats.so.24.0
00:02:01.414 [584/710] Linking target lib/librte_power.so.24.0
00:02:01.414 [585/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o
00:02:01.414 [586/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o
00:02:01.414 [587/710] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols
00:02:01.414 [588/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o
00:02:01.414 [589/710] Linking target lib/librte_dispatcher.so.24.0
00:02:01.414 [590/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o
00:02:01.414 [591/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o
00:02:01.414 [592/710] Linking static target lib/librte_pdcp.a
00:02:01.685 [593/710] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols
00:02:01.685 [594/710] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols
00:02:01.685 [595/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o
00:02:01.685 [596/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o
00:02:01.685 [597/710] Linking target lib/librte_pdump.so.24.0
00:02:01.685 [598/710] Linking target lib/librte_port.so.24.0
00:02:01.685 [599/710] Linking target lib/librte_graph.so.24.0
00:02:01.685 [600/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o
00:02:01.685 [601/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o
00:02:01.685 [602/710] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o
00:02:01.953 [603/710] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols
00:02:01.953 [604/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o
00:02:01.953 [605/710] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols
00:02:01.953 [606/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o
00:02:01.953 [607/710] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o
00:02:01.953 [608/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o
00:02:01.953 [609/710] Linking target lib/librte_table.so.24.0
00:02:01.953 [610/710] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output)
00:02:02.215 [611/710] Linking target lib/librte_pdcp.so.24.0
00:02:02.215 [612/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o
00:02:02.215 [613/710] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols
00:02:02.215 [614/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o
00:02:02.215 [615/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o
00:02:02.215 [616/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o
00:02:02.477 [617/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o
00:02:02.477 [618/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o
00:02:02.742 [619/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o
00:02:02.742 [620/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o
00:02:02.742 [621/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o
00:02:02.742 [622/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o
00:02:02.742 [623/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o
00:02:03.004 [624/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o
00:02:03.004 [625/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o
00:02:03.004 [626/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o
00:02:03.004 [627/710] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o
00:02:03.004 [628/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o
00:02:03.265 [629/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o
00:02:03.265 [630/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o
00:02:03.265 [631/710] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o
00:02:03.524 [632/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o
00:02:03.524 [633/710] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o
00:02:03.524 [634/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o
00:02:03.524 [635/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o
00:02:03.524 [636/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o
00:02:03.524 [637/710] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o
00:02:03.784 [638/710] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o
00:02:03.784 [639/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o
00:02:03.784 [640/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o
00:02:03.784 [641/710] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o
00:02:03.784 [642/710] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o
00:02:04.043 [643/710] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o
00:02:04.043 [644/710] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o
00:02:04.303 [645/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o
00:02:04.303 [646/710] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o
00:02:04.303 [647/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o
00:02:04.303 [648/710] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o
00:02:04.303 [649/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o
00:02:04.562 [650/710] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o
00:02:04.562 [651/710] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o
00:02:04.562 [652/710] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o
00:02:04.562 [653/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o
00:02:04.562 [654/710] Linking static target drivers/libtmp_rte_net_i40e.a
00:02:04.562 [655/710] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o
00:02:04.851 [656/710] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o
00:02:04.851 [657/710] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o
00:02:05.116 [658/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o
00:02:05.116 [659/710] Generating drivers/rte_net_i40e.pmd.c with a custom command
00:02:05.116 [660/710] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o
00:02:05.116 [661/710] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o
00:02:05.116 [662/710] Linking static target drivers/librte_net_i40e.a
00:02:05.374 [663/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o
00:02:05.375 [664/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o
00:02:05.633 [665/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o
00:02:05.633 [666/710] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output)
00:02:05.633 [667/710] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o
00:02:05.633 [668/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o
00:02:05.633 [669/710] Linking target drivers/librte_net_i40e.so.24.0
00:02:05.892 [670/710] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o
00:02:06.458 [671/710] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o
00:02:06.458 [672/710] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o
00:02:06.458 [673/710] Linking static target lib/librte_node.a
00:02:06.717 [674/710] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output)
00:02:06.717 [675/710] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o
00:02:06.975 [676/710] Linking target lib/librte_node.so.24.0
00:02:07.907 [677/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o
00:02:07.907 [678/710] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o
00:02:07.907 [679/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o
00:02:09.806 [680/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o
00:02:10.064 [681/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o
00:02:15.329 [682/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o
00:02:47.396 [683/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o
00:02:47.396 [684/710] Linking static target lib/librte_vhost.a
00:02:47.396 [685/710] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output)
00:02:47.396 [686/710] Linking target lib/librte_vhost.so.24.0
00:03:02.303 [687/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o
00:03:02.303 [688/710] Linking static target lib/librte_pipeline.a
00:03:02.561 [689/710] Linking target app/dpdk-dumpcap
00:03:02.561 [690/710] Linking target app/dpdk-test-acl
00:03:02.561 [691/710] Linking target app/dpdk-proc-info
00:03:02.561 [692/710] Linking target app/dpdk-pdump
00:03:02.561 [693/710] Linking target app/dpdk-test-regex
00:03:02.561 [694/710] Linking target app/dpdk-test-sad
00:03:02.561 [695/710] Linking target app/dpdk-test-bbdev
00:03:02.561 [696/710] Linking target app/dpdk-test-gpudev
00:03:02.561 [697/710] Linking target app/dpdk-test-dma-perf
00:03:02.561 [698/710] Linking target app/dpdk-test-fib
00:03:02.561 [699/710] Linking target app/dpdk-test-cmdline
00:03:02.561 [700/710] Linking target app/dpdk-test-pipeline
00:03:02.561 [701/710] Linking target app/dpdk-test-flow-perf
00:03:02.561 [702/710] Linking target app/dpdk-test-compress-perf
00:03:02.561 [703/710] Linking target app/dpdk-test-security-perf
00:03:02.561 [704/710] Linking target app/dpdk-test-crypto-perf
00:03:02.561 [705/710] Linking target app/dpdk-graph
00:03:02.561 [706/710] Linking target app/dpdk-test-mldev
00:03:02.561 [707/710] Linking target app/dpdk-test-eventdev
00:03:02.561 [708/710] Linking target app/dpdk-testpmd
00:03:04.463 [709/710] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output)
00:03:04.463 [710/710] Linking target lib/librte_pipeline.so.24.0
00:03:04.463 01:24:17 -- common/autobuild_common.sh@190 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 install
00:03:04.721 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp'
00:03:04.721 [0/1] Installing files.
00:03:04.984 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples
00:03:04.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:04.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:04.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:04.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:04.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:04.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:04.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:04.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:04.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:04.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:04.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:04.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:04.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:04.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:04.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:04.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:04.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:04.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:04.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:04.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:04.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:04.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:04.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:04.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:04.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:04.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:04.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:04.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:04.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:04.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:04.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:04.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:04.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:04.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb
00:03:04.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb
00:03:04.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering
00:03:04.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering
00:03:04.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering
00:03:04.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:04.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:04.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:04.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:04.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:04.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:04.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:04.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:04.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:04.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:04.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline
00:03:04.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline
00:03:04.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline
00:03:04.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline
00:03:04.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline
00:03:04.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline
00:03:04.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common
00:03:04.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon
00:03:04.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec
00:03:04.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse
00:03:04.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient
00:03:04.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient
00:03:04.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld
00:03:04.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld
00:03:04.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd
00:03:04.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd
00:03:04.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks
00:03:04.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks
00:03:04.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:04.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:04.984 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:04.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:04.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:04.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:04.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:04.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:04.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:04.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:04.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:04.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:04.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:04.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:04.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:04.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:03:04.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:03:04.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:03:04.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:03:04.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:03:04.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:03:04.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:03:04.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:03:04.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:03:04.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:03:04.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:03:04.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:03:04.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:03:04.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:03:04.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:03:04.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:03:04.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:03:04.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:03:04.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:03:04.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:03:04.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat
00:03:04.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat
00:03:04.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat
00:03:04.985 Installing
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:04.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:04.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:04.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:04.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:04.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:04.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:04.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:04.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:04.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:04.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:04.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:04.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:04.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:04.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:04.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:04.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:04.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:04.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:04.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:04.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:04.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:04.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:04.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:04.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:04.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:04.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:04.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:04.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:04.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:04.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:04.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:04.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:04.985 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:04.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:04.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:04.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:03:04.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:03:04.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:03:04.986 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:03:04.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:03:04.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:03:04.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:04.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:04.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:04.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:04.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:04.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:04.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/commands.list to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:04.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:04.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:04.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:04.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:04.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:04.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:04.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:04.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:04.986 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:04.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:04.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:04.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:04.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:04.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:04.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:04.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:04.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:04.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:04.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:04.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:04.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:03:04.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:03:04.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:04.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:04.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:04.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:04.986 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:04.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:04.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:04.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:04.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:03:04.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:03:04.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:04.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:04.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:03:04.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:03:04.986 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:03:04.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:03:04.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:04.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:04.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:04.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:04.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:04.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:04.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:04.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:04.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:04.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:04.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:04.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:04.986 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:04.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:04.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:04.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:04.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:04.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:04.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:04.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:04.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:04.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:04.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:04.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:04.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:04.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:04.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:04.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:04.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:04.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:04.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:04.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:04.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:04.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:04.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:04.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:04.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:04.987 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:04.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:04.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:04.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:04.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:04.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:04.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:04.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:04.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:04.987 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:04.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:04.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:04.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:04.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:04.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:04.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:04.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:04.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:04.987 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:04.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:04.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:04.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:04.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:04.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:04.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:03:04.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:04.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:04.987 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:04.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:04.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:04.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:04.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:04.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:04.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:04.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:04.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:04.987 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:04.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:04.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:04.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:04.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:04.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:04.987 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:04.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:04.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:04.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:04.988 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:04.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:04.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:04.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:04.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:04.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:04.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:04.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:04.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:04.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:04.988 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:04.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:04.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:04.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:04.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:04.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:04.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:04.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:04.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:04.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:04.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:04.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:04.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:04.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:04.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:04.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:04.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:04.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:04.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:03:04.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:03:04.988 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:04.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:04.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:04.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:04.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:03:04.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:03:04.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:04.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:04.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:04.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:04.988 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:04.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:04.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:04.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:04.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:04.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:04.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:04.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:04.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:04.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:04.988 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:04.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:04.988 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:04.989 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:04.989 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:04.989 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:04.989 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:04.989 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:04.989 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:04.989 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.cli to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:04.989 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:04.989 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:04.989 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:04.989 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:04.989 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:04.989 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:04.989 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:04.989 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:04.989 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:04.989 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:04.989 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:04.989 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:04.989 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:04.989 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:04.989 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:04.989 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:04.989 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:04.989 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:04.989 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:04.989 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:04.989 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:04.989 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:04.989 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec_sa.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:04.989 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:04.989 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:04.989 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:04.989 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:04.989 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:04.989 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:04.989 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:04.989 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:04.989 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:04.989 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:04.989 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:03:04.989 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:04.989 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:04.989 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:04.989 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:04.989 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:04.989 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:04.989 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:04.989 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:03:04.989 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:03:04.989 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:03:04.989 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:04.989 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:04.989 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:03:04.989 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:03:04.989 Installing lib/librte_log.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:04.989 Installing lib/librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:04.989 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:04.989 Installing lib/librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:04.989 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:04.989 Installing lib/librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:04.989 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:04.989 Installing lib/librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:04.989 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:04.989 Installing lib/librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:04.989 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:04.989 Installing lib/librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:04.989 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:04.989 Installing lib/librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:04.989 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:04.989 Installing lib/librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:04.989 Installing lib/librte_net.a to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:04.989 Installing lib/librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:04.989 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:04.989 Installing lib/librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:04.989 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:04.989 Installing lib/librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:04.990 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:04.990 Installing lib/librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:04.990 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:04.990 Installing lib/librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:04.990 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:04.990 Installing lib/librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:04.990 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:04.990 Installing lib/librte_hash.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:04.990 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:04.990 Installing lib/librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:04.990 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:04.990 Installing lib/librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:04.990 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:04.990 Installing lib/librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:04.990 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:04.990 Installing lib/librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:04.990 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:04.990 Installing lib/librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:04.990 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:04.990 Installing lib/librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:04.990 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:04.990 Installing lib/librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:04.990 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:04.990 Installing lib/librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:04.990 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:04.990 Installing lib/librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:04.990 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:04.990 Installing lib/librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:04.990 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:04.990 Installing lib/librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:04.990 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:04.990 Installing lib/librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:04.990 Installing lib/librte_dispatcher.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:04.990 Installing lib/librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:04.990 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:04.990 Installing lib/librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:04.990 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:04.990 Installing lib/librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:04.990 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:04.990 Installing lib/librte_gso.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:04.990 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:04.990 Installing lib/librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:04.990 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:04.990 Installing lib/librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:04.990 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:04.990 Installing lib/librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:04.990 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:04.990 Installing lib/librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:04.990 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:04.990 Installing lib/librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:04.990 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:04.990 Installing lib/librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:04.990 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:04.990 Installing lib/librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:04.990 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:04.990 Installing lib/librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:04.990 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:04.990 Installing lib/librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:04.990 Installing lib/librte_mldev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:04.990 Installing lib/librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:04.990 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:04.990 Installing lib/librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:04.990 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:05.560 Installing lib/librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:05.561 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:05.561 Installing lib/librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:05.561 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:05.561 Installing lib/librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:05.561 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:05.561 Installing lib/librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:05.561 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:05.561 Installing lib/librte_vhost.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:05.561 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:05.561 Installing lib/librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:05.561 Installing lib/librte_pdcp.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:05.561 Installing lib/librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:05.561 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:05.561 Installing lib/librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:05.561 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:05.561 Installing lib/librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:05.561 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:05.561 Installing lib/librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:05.561 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:05.561 Installing lib/librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:05.561 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:05.561 Installing lib/librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:05.561 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:05.561 Installing lib/librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:05.561 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:05.561 Installing lib/librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:05.561 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:05.561 Installing drivers/librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0
00:03:05.561 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:05.561 Installing drivers/librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0
00:03:05.561 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:05.561 Installing drivers/librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0
00:03:05.561 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:05.561 Installing drivers/librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0
00:03:05.561 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:05.561 Installing app/dpdk-graph to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:05.561 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:05.561 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:05.561 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:05.561 Installing app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:05.561 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:05.561 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:05.561 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:05.561 Installing app/dpdk-test-dma-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:05.561 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:05.561 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:05.561 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:05.561 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:05.561 Installing app/dpdk-test-mldev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:05.561 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:05.561 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:05.561 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:05.561 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:05.561 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:05.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/log/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:05.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:05.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:05.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:05.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:05.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:05.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:05.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:05.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:05.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:05.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:05.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:05.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.561 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lock_annotations.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_stdatomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_dtls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.562 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_pdcp_hdr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:05.563 Installing
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.563 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_dma_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.563 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dispatcher/rte_dispatcher.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.563 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.564 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:03:05.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.564 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.564 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:03:05.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.564 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.565 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.565 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.565 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.565 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.565 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.565 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.565 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ipsec.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.565 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.565 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.565 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.565 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.565 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.565 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.565 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_rtc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.565 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.565 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.565 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.565 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip6_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.565 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_udp4_input_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.565 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.565 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.565 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.565 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/dpdk-cmdline-gen.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:05.565 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:05.565 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:05.565 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:05.565 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:05.565 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-rss-flows.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:05.565 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:05.565 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:03:05.565 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:03:05.565 Installing symlink pointing to librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so.24 00:03:05.565 Installing symlink pointing to librte_log.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so 00:03:05.565 Installing symlink pointing to librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.24 00:03:05.565 Installing symlink pointing to librte_kvargs.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:03:05.565 Installing symlink pointing to librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.24 00:03:05.565 Installing symlink pointing to librte_telemetry.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:03:05.565 Installing symlink pointing to librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.24 00:03:05.565 Installing symlink pointing to librte_eal.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:03:05.565 Installing symlink pointing to librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.24 00:03:05.565 Installing symlink pointing to librte_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:03:05.565 Installing symlink pointing to librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.24 00:03:05.565 Installing symlink pointing to librte_rcu.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:03:05.565 
Installing symlink pointing to librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.24 00:03:05.565 Installing symlink pointing to librte_mempool.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:03:05.565 Installing symlink pointing to librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.24 00:03:05.565 Installing symlink pointing to librte_mbuf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:03:05.565 Installing symlink pointing to librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.24 00:03:05.565 Installing symlink pointing to librte_net.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:03:05.565 Installing symlink pointing to librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.24 00:03:05.565 Installing symlink pointing to librte_meter.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:03:05.565 Installing symlink pointing to librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.24 00:03:05.565 Installing symlink pointing to librte_ethdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:03:05.565 Installing symlink pointing to librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.24 00:03:05.565 Installing symlink pointing to librte_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:03:05.565 Installing symlink pointing to librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.24 00:03:05.565 Installing symlink pointing to librte_cmdline.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:03:05.565 Installing symlink pointing to librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.24 00:03:05.565 Installing symlink pointing to librte_metrics.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:03:05.565 Installing symlink pointing to librte_hash.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.24 00:03:05.565 Installing symlink pointing to librte_hash.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:03:05.565 Installing symlink pointing to librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.24 00:03:05.565 Installing symlink pointing to librte_timer.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:03:05.565 Installing symlink pointing to librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.24 00:03:05.565 Installing symlink pointing to librte_acl.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:03:05.565 Installing symlink pointing to librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.24 00:03:05.565 Installing symlink pointing to librte_bbdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:03:05.565 Installing symlink pointing to librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.24 00:03:05.565 Installing symlink pointing to librte_bitratestats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:03:05.565 Installing symlink pointing to librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.24 00:03:05.566 
Installing symlink pointing to librte_bpf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:03:05.566 Installing symlink pointing to librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.24 00:03:05.566 Installing symlink pointing to librte_cfgfile.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:03:05.566 Installing symlink pointing to librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.24 00:03:05.566 Installing symlink pointing to librte_compressdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:03:05.566 Installing symlink pointing to librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.24 00:03:05.566 Installing symlink pointing to librte_cryptodev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:03:05.566 Installing symlink pointing to librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.24 00:03:05.566 Installing symlink pointing to librte_distributor.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:03:05.566 Installing symlink pointing to librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.24 00:03:05.566 Installing symlink pointing to librte_dmadev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:03:05.566 Installing symlink pointing to librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.24 00:03:05.566 Installing symlink pointing to librte_efd.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:03:05.566 Installing symlink pointing to librte_eventdev.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.24 00:03:05.566 Installing symlink pointing to librte_eventdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:03:05.566 Installing symlink pointing to librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so.24 00:03:05.566 Installing symlink pointing to librte_dispatcher.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so 00:03:05.566 Installing symlink pointing to librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.24 00:03:05.566 Installing symlink pointing to librte_gpudev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:03:05.566 Installing symlink pointing to librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.24 00:03:05.566 Installing symlink pointing to librte_gro.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:03:05.566 Installing symlink pointing to librte_gso.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.24 00:03:05.566 Installing symlink pointing to librte_gso.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:03:05.566 Installing symlink pointing to librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.24 00:03:05.566 Installing symlink pointing to librte_ip_frag.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:03:05.566 Installing symlink pointing to librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.24 00:03:05.566 Installing symlink pointing to librte_jobstats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 
00:03:05.566 Installing symlink pointing to librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.24 00:03:05.566 Installing symlink pointing to librte_latencystats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:03:05.566 Installing symlink pointing to librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.24 00:03:05.566 Installing symlink pointing to librte_lpm.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:03:05.566 Installing symlink pointing to librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.24 00:03:05.566 Installing symlink pointing to librte_member.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:03:05.566 Installing symlink pointing to librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.24 00:03:05.566 Installing symlink pointing to librte_pcapng.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:03:05.566 Installing symlink pointing to librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.24 00:03:05.566 Installing symlink pointing to librte_power.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:03:05.825 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:03:05.825 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:03:05.825 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:03:05.825 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:03:05.825 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:03:05.825 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:03:05.825 './librte_mempool_ring.so' -> 
'dpdk/pmds-24.0/librte_mempool_ring.so' 00:03:05.825 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:03:05.825 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:03:05.825 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:03:05.825 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:03:05.825 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:03:05.825 Installing symlink pointing to librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.24 00:03:05.825 Installing symlink pointing to librte_rawdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:03:05.825 Installing symlink pointing to librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.24 00:03:05.825 Installing symlink pointing to librte_regexdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:03:05.825 Installing symlink pointing to librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so.24 00:03:05.825 Installing symlink pointing to librte_mldev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so 00:03:05.826 Installing symlink pointing to librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.24 00:03:05.826 Installing symlink pointing to librte_rib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:03:05.826 Installing symlink pointing to librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.24 00:03:05.826 Installing symlink pointing to librte_reorder.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:03:05.826 Installing symlink pointing to librte_sched.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.24 00:03:05.826 Installing symlink pointing to librte_sched.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:03:05.826 Installing symlink pointing to librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.24 00:03:05.826 Installing symlink pointing to librte_security.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:03:05.826 Installing symlink pointing to librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.24 00:03:05.826 Installing symlink pointing to librte_stack.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:03:05.826 Installing symlink pointing to librte_vhost.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.24 00:03:05.826 Installing symlink pointing to librte_vhost.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:03:05.826 Installing symlink pointing to librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.24 00:03:05.826 Installing symlink pointing to librte_ipsec.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:03:05.826 Installing symlink pointing to librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so.24 00:03:05.826 Installing symlink pointing to librte_pdcp.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so 00:03:05.826 Installing symlink pointing to librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.24 00:03:05.826 Installing symlink pointing to librte_fib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:03:05.826 Installing symlink pointing to 
librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.24 00:03:05.826 Installing symlink pointing to librte_port.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:03:05.826 Installing symlink pointing to librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.24 00:03:05.826 Installing symlink pointing to librte_pdump.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:03:05.826 Installing symlink pointing to librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.24 00:03:05.826 Installing symlink pointing to librte_table.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:03:05.826 Installing symlink pointing to librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.24 00:03:05.826 Installing symlink pointing to librte_pipeline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:03:05.826 Installing symlink pointing to librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.24 00:03:05.826 Installing symlink pointing to librte_graph.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:03:05.826 Installing symlink pointing to librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.24 00:03:05.826 Installing symlink pointing to librte_node.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:03:05.826 Installing symlink pointing to librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:03:05.826 Installing symlink pointing to librte_bus_pci.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:03:05.826 Installing symlink pointing to librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:03:05.826 Installing symlink pointing to librte_bus_vdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:03:05.826 Installing symlink pointing to librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:03:05.826 Installing symlink pointing to librte_mempool_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:03:05.826 Installing symlink pointing to librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 00:03:05.826 Installing symlink pointing to librte_net_i40e.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:03:05.826 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:03:05.826 01:24:18 -- common/autobuild_common.sh@192 -- $ uname -s 00:03:05.826 01:24:18 -- common/autobuild_common.sh@192 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:05.826 01:24:18 -- common/autobuild_common.sh@203 -- $ cat 00:03:05.826 01:24:18 -- common/autobuild_common.sh@208 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:05.826 00:03:05.826 real 1m29.010s 00:03:05.826 user 17m56.302s 00:03:05.826 sys 2m6.024s 00:03:05.826 01:24:18 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:03:05.826 01:24:18 -- common/autotest_common.sh@10 -- $ set +x 00:03:05.826 ************************************ 00:03:05.826 END TEST build_native_dpdk 00:03:05.826 ************************************ 
00:03:05.826 01:24:18 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:05.826 01:24:18 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:05.826 01:24:18 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:05.826 01:24:18 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:05.826 01:24:18 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:05.826 01:24:18 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:05.826 01:24:18 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:05.826 01:24:18 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:03:05.826 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 00:03:05.826 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:05.826 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:06.084 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:03:06.343 Using 'verbs' RDMA provider 00:03:16.569 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/isa-l/spdk-isal.log)...done. 00:03:26.539 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:03:26.539 Creating mk/config.mk...done. 00:03:26.539 Creating mk/cc.flags.mk...done. 00:03:26.539 Type 'make' to build. 
00:03:26.539 01:24:38 -- spdk/autobuild.sh@69 -- $ run_test make make -j48 00:03:26.539 01:24:38 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:03:26.539 01:24:38 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:03:26.539 01:24:38 -- common/autotest_common.sh@10 -- $ set +x 00:03:26.539 ************************************ 00:03:26.539 START TEST make 00:03:26.539 ************************************ 00:03:26.539 01:24:38 -- common/autotest_common.sh@1104 -- $ make -j48 00:03:26.539 make[1]: Nothing to be done for 'all'. 00:03:26.802 The Meson build system 00:03:26.802 Version: 1.3.1 00:03:26.802 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:03:26.802 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:26.802 Build type: native build 00:03:26.802 Project name: libvfio-user 00:03:26.802 Project version: 0.0.1 00:03:26.802 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:03:26.802 C linker for the host machine: gcc ld.bfd 2.39-16 00:03:26.802 Host machine cpu family: x86_64 00:03:26.802 Host machine cpu: x86_64 00:03:26.802 Run-time dependency threads found: YES 00:03:26.802 Library dl found: YES 00:03:26.802 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:03:26.802 Run-time dependency json-c found: YES 0.17 00:03:26.802 Run-time dependency cmocka found: YES 1.1.7 00:03:26.802 Program pytest-3 found: NO 00:03:26.802 Program flake8 found: NO 00:03:26.802 Program misspell-fixer found: NO 00:03:26.802 Program restructuredtext-lint found: NO 00:03:26.802 Program valgrind found: YES (/usr/bin/valgrind) 00:03:26.802 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:26.802 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:26.802 Compiler for C supports arguments -Wwrite-strings: YES 00:03:26.802 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses 
feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:03:26.802 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:03:26.802 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:03:26.802 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:03:26.802 Build targets in project: 8 00:03:26.802 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:03:26.802 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:03:26.802 00:03:26.802 libvfio-user 0.0.1 00:03:26.802 00:03:26.802 User defined options 00:03:26.802 buildtype : debug 00:03:26.802 default_library: shared 00:03:26.802 libdir : /usr/local/lib 00:03:26.802 00:03:26.802 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:27.754 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:27.754 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:03:27.754 [2/37] Compiling C object samples/null.p/null.c.o 00:03:27.754 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:03:27.754 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:03:27.754 [5/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:03:27.754 [6/37] Compiling C object samples/lspci.p/lspci.c.o 00:03:27.754 [7/37] Compiling C object test/unit_tests.p/mocks.c.o 00:03:27.754 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:03:27.754 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:03:27.754 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:03:27.754 [11/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:03:27.754 [12/37] 
Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:03:28.018 [13/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:03:28.018 [14/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:03:28.018 [15/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:03:28.018 [16/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:03:28.018 [17/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:03:28.018 [18/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:03:28.018 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:03:28.018 [20/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:03:28.018 [21/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:03:28.018 [22/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:03:28.018 [23/37] Compiling C object samples/server.p/server.c.o 00:03:28.018 [24/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:03:28.018 [25/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:03:28.018 [26/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:03:28.018 [27/37] Compiling C object samples/client.p/client.c.o 00:03:28.018 [28/37] Linking target samples/client 00:03:28.018 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:03:28.285 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:03:28.285 [31/37] Linking target test/unit_tests 00:03:28.285 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:03:28.285 [33/37] Linking target samples/gpio-pci-idio-16 00:03:28.285 [34/37] Linking target samples/null 00:03:28.285 [35/37] Linking target samples/shadow_ioeventfd_server 00:03:28.285 [36/37] Linking target samples/lspci 00:03:28.285 [37/37] Linking target samples/server 00:03:28.285 INFO: autodetecting backend as ninja 00:03:28.285 INFO: calculating backend command to run: /usr/local/bin/ninja -C 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:28.552 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:29.126 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:29.126 ninja: no work to do. 00:03:41.391 CC lib/log/log.o 00:03:41.391 CC lib/log/log_flags.o 00:03:41.391 CC lib/log/log_deprecated.o 00:03:41.391 CC lib/ut_mock/mock.o 00:03:41.391 CC lib/ut/ut.o 00:03:41.391 LIB libspdk_ut_mock.a 00:03:41.391 LIB libspdk_log.a 00:03:41.391 SO libspdk_ut_mock.so.5.0 00:03:41.391 LIB libspdk_ut.a 00:03:41.391 SO libspdk_log.so.6.1 00:03:41.391 SO libspdk_ut.so.1.0 00:03:41.391 SYMLINK libspdk_ut_mock.so 00:03:41.391 SYMLINK libspdk_ut.so 00:03:41.391 SYMLINK libspdk_log.so 00:03:41.391 CC lib/dma/dma.o 00:03:41.391 CC lib/ioat/ioat.o 00:03:41.391 CC lib/util/base64.o 00:03:41.391 CC lib/util/bit_array.o 00:03:41.391 CC lib/util/cpuset.o 00:03:41.391 CXX lib/trace_parser/trace.o 00:03:41.391 CC lib/util/crc16.o 00:03:41.391 CC lib/util/crc32.o 00:03:41.391 CC lib/util/crc32c.o 00:03:41.391 CC lib/util/crc32_ieee.o 00:03:41.391 CC lib/util/crc64.o 00:03:41.391 CC lib/util/dif.o 00:03:41.391 CC lib/util/fd.o 00:03:41.391 CC lib/util/file.o 00:03:41.391 CC lib/util/hexlify.o 00:03:41.391 CC lib/util/iov.o 00:03:41.391 CC lib/util/math.o 00:03:41.391 CC lib/util/pipe.o 00:03:41.391 CC lib/util/strerror_tls.o 00:03:41.391 CC lib/util/string.o 00:03:41.391 CC lib/util/uuid.o 00:03:41.391 CC lib/util/fd_group.o 00:03:41.391 CC lib/util/xor.o 00:03:41.391 CC lib/util/zipf.o 00:03:41.391 CC lib/vfio_user/host/vfio_user_pci.o 00:03:41.391 CC lib/vfio_user/host/vfio_user.o 00:03:41.391 LIB libspdk_dma.a 00:03:41.391 SO libspdk_dma.so.3.0 00:03:41.391 SYMLINK libspdk_dma.so 00:03:41.391 LIB libspdk_ioat.a 00:03:41.391 SO libspdk_ioat.so.6.0 
00:03:41.391 LIB libspdk_vfio_user.a 00:03:41.391 SO libspdk_vfio_user.so.4.0 00:03:41.391 SYMLINK libspdk_ioat.so 00:03:41.391 SYMLINK libspdk_vfio_user.so 00:03:41.391 LIB libspdk_util.a 00:03:41.391 SO libspdk_util.so.8.0 00:03:41.391 SYMLINK libspdk_util.so 00:03:41.391 CC lib/rdma/common.o 00:03:41.392 CC lib/idxd/idxd.o 00:03:41.392 CC lib/vmd/vmd.o 00:03:41.392 CC lib/conf/conf.o 00:03:41.392 CC lib/rdma/rdma_verbs.o 00:03:41.392 CC lib/idxd/idxd_user.o 00:03:41.392 CC lib/vmd/led.o 00:03:41.392 CC lib/env_dpdk/env.o 00:03:41.392 CC lib/env_dpdk/memory.o 00:03:41.392 CC lib/idxd/idxd_kernel.o 00:03:41.392 CC lib/json/json_parse.o 00:03:41.392 CC lib/env_dpdk/pci.o 00:03:41.392 CC lib/json/json_util.o 00:03:41.392 CC lib/env_dpdk/init.o 00:03:41.392 CC lib/json/json_write.o 00:03:41.392 CC lib/env_dpdk/threads.o 00:03:41.392 CC lib/env_dpdk/pci_ioat.o 00:03:41.392 CC lib/env_dpdk/pci_virtio.o 00:03:41.392 CC lib/env_dpdk/pci_vmd.o 00:03:41.392 CC lib/env_dpdk/pci_idxd.o 00:03:41.392 CC lib/env_dpdk/pci_event.o 00:03:41.392 CC lib/env_dpdk/sigbus_handler.o 00:03:41.392 CC lib/env_dpdk/pci_dpdk.o 00:03:41.392 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:41.392 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:41.651 LIB libspdk_trace_parser.a 00:03:41.651 SO libspdk_trace_parser.so.4.0 00:03:41.651 LIB libspdk_conf.a 00:03:41.651 SYMLINK libspdk_trace_parser.so 00:03:41.651 SO libspdk_conf.so.5.0 00:03:41.909 LIB libspdk_rdma.a 00:03:41.909 SYMLINK libspdk_conf.so 00:03:41.909 SO libspdk_rdma.so.5.0 00:03:41.909 LIB libspdk_json.a 00:03:41.909 SYMLINK libspdk_rdma.so 00:03:41.909 SO libspdk_json.so.5.1 00:03:41.909 SYMLINK libspdk_json.so 00:03:42.168 LIB libspdk_idxd.a 00:03:42.168 CC lib/jsonrpc/jsonrpc_server.o 00:03:42.168 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:42.168 CC lib/jsonrpc/jsonrpc_client.o 00:03:42.168 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:42.168 SO libspdk_idxd.so.11.0 00:03:42.168 LIB libspdk_vmd.a 00:03:42.168 SYMLINK libspdk_idxd.so 00:03:42.168 SO 
libspdk_vmd.so.5.0 00:03:42.168 SYMLINK libspdk_vmd.so 00:03:42.427 LIB libspdk_jsonrpc.a 00:03:42.427 SO libspdk_jsonrpc.so.5.1 00:03:42.427 SYMLINK libspdk_jsonrpc.so 00:03:42.427 CC lib/rpc/rpc.o 00:03:42.685 LIB libspdk_rpc.a 00:03:42.685 SO libspdk_rpc.so.5.0 00:03:42.685 SYMLINK libspdk_rpc.so 00:03:42.944 CC lib/sock/sock.o 00:03:42.944 CC lib/sock/sock_rpc.o 00:03:42.944 CC lib/trace/trace.o 00:03:42.944 CC lib/trace/trace_flags.o 00:03:42.944 CC lib/trace/trace_rpc.o 00:03:42.944 CC lib/notify/notify.o 00:03:42.944 CC lib/notify/notify_rpc.o 00:03:43.202 LIB libspdk_notify.a 00:03:43.202 SO libspdk_notify.so.5.0 00:03:43.202 LIB libspdk_trace.a 00:03:43.202 SYMLINK libspdk_notify.so 00:03:43.202 SO libspdk_trace.so.9.0 00:03:43.202 SYMLINK libspdk_trace.so 00:03:43.202 LIB libspdk_sock.a 00:03:43.202 SO libspdk_sock.so.8.0 00:03:43.462 CC lib/thread/thread.o 00:03:43.462 CC lib/thread/iobuf.o 00:03:43.462 SYMLINK libspdk_sock.so 00:03:43.462 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:43.462 CC lib/nvme/nvme_ctrlr.o 00:03:43.462 CC lib/nvme/nvme_fabric.o 00:03:43.462 CC lib/nvme/nvme_ns_cmd.o 00:03:43.462 CC lib/nvme/nvme_ns.o 00:03:43.462 CC lib/nvme/nvme_pcie_common.o 00:03:43.462 CC lib/nvme/nvme_pcie.o 00:03:43.462 CC lib/nvme/nvme_qpair.o 00:03:43.462 CC lib/nvme/nvme.o 00:03:43.462 CC lib/nvme/nvme_quirks.o 00:03:43.462 CC lib/nvme/nvme_transport.o 00:03:43.462 CC lib/nvme/nvme_discovery.o 00:03:43.462 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:43.462 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:43.462 CC lib/nvme/nvme_tcp.o 00:03:43.462 CC lib/nvme/nvme_opal.o 00:03:43.462 CC lib/nvme/nvme_io_msg.o 00:03:43.462 CC lib/nvme/nvme_poll_group.o 00:03:43.462 CC lib/nvme/nvme_zns.o 00:03:43.462 CC lib/nvme/nvme_cuse.o 00:03:43.462 CC lib/nvme/nvme_rdma.o 00:03:43.462 CC lib/nvme/nvme_vfio_user.o 00:03:43.462 LIB libspdk_env_dpdk.a 00:03:43.721 SO libspdk_env_dpdk.so.13.0 00:03:43.979 SYMLINK libspdk_env_dpdk.so 00:03:44.915 LIB libspdk_thread.a 00:03:44.915 SO 
libspdk_thread.so.9.0 00:03:44.915 SYMLINK libspdk_thread.so 00:03:45.174 CC lib/init/json_config.o 00:03:45.174 CC lib/vfu_tgt/tgt_endpoint.o 00:03:45.174 CC lib/virtio/virtio.o 00:03:45.174 CC lib/blob/blobstore.o 00:03:45.174 CC lib/accel/accel.o 00:03:45.174 CC lib/init/subsystem.o 00:03:45.174 CC lib/vfu_tgt/tgt_rpc.o 00:03:45.174 CC lib/virtio/virtio_vhost_user.o 00:03:45.174 CC lib/init/subsystem_rpc.o 00:03:45.174 CC lib/blob/request.o 00:03:45.174 CC lib/virtio/virtio_vfio_user.o 00:03:45.174 CC lib/accel/accel_rpc.o 00:03:45.174 CC lib/blob/zeroes.o 00:03:45.174 CC lib/init/rpc.o 00:03:45.174 CC lib/virtio/virtio_pci.o 00:03:45.174 CC lib/accel/accel_sw.o 00:03:45.174 CC lib/blob/blob_bs_dev.o 00:03:45.432 LIB libspdk_init.a 00:03:45.432 SO libspdk_init.so.4.0 00:03:45.432 LIB libspdk_virtio.a 00:03:45.432 SYMLINK libspdk_init.so 00:03:45.432 LIB libspdk_vfu_tgt.a 00:03:45.432 SO libspdk_virtio.so.6.0 00:03:45.432 SO libspdk_vfu_tgt.so.2.0 00:03:45.691 SYMLINK libspdk_vfu_tgt.so 00:03:45.691 SYMLINK libspdk_virtio.so 00:03:45.691 CC lib/event/app.o 00:03:45.691 CC lib/event/reactor.o 00:03:45.691 CC lib/event/log_rpc.o 00:03:45.691 CC lib/event/app_rpc.o 00:03:45.691 CC lib/event/scheduler_static.o 00:03:45.691 LIB libspdk_nvme.a 00:03:45.949 SO libspdk_nvme.so.12.0 00:03:45.949 LIB libspdk_event.a 00:03:45.949 SO libspdk_event.so.12.0 00:03:46.207 SYMLINK libspdk_event.so 00:03:46.207 SYMLINK libspdk_nvme.so 00:03:46.207 LIB libspdk_accel.a 00:03:46.207 SO libspdk_accel.so.14.0 00:03:46.207 SYMLINK libspdk_accel.so 00:03:46.465 CC lib/bdev/bdev.o 00:03:46.465 CC lib/bdev/bdev_rpc.o 00:03:46.465 CC lib/bdev/bdev_zone.o 00:03:46.465 CC lib/bdev/part.o 00:03:46.465 CC lib/bdev/scsi_nvme.o 00:03:47.840 LIB libspdk_blob.a 00:03:47.840 SO libspdk_blob.so.10.1 00:03:48.098 SYMLINK libspdk_blob.so 00:03:48.098 CC lib/blobfs/blobfs.o 00:03:48.098 CC lib/blobfs/tree.o 00:03:48.098 CC lib/lvol/lvol.o 00:03:49.032 LIB libspdk_lvol.a 00:03:49.032 SO 
libspdk_lvol.so.9.1 00:03:49.032 LIB libspdk_blobfs.a 00:03:49.032 SYMLINK libspdk_lvol.so 00:03:49.032 SO libspdk_blobfs.so.9.0 00:03:49.032 SYMLINK libspdk_blobfs.so 00:03:49.291 LIB libspdk_bdev.a 00:03:49.291 SO libspdk_bdev.so.14.0 00:03:49.291 SYMLINK libspdk_bdev.so 00:03:49.571 CC lib/ublk/ublk.o 00:03:49.571 CC lib/nbd/nbd.o 00:03:49.571 CC lib/nvmf/ctrlr.o 00:03:49.571 CC lib/ublk/ublk_rpc.o 00:03:49.571 CC lib/nbd/nbd_rpc.o 00:03:49.571 CC lib/scsi/dev.o 00:03:49.571 CC lib/nvmf/ctrlr_discovery.o 00:03:49.571 CC lib/scsi/lun.o 00:03:49.571 CC lib/ftl/ftl_core.o 00:03:49.571 CC lib/nvmf/ctrlr_bdev.o 00:03:49.571 CC lib/ftl/ftl_init.o 00:03:49.571 CC lib/scsi/port.o 00:03:49.571 CC lib/nvmf/subsystem.o 00:03:49.571 CC lib/ftl/ftl_layout.o 00:03:49.571 CC lib/scsi/scsi.o 00:03:49.571 CC lib/nvmf/nvmf.o 00:03:49.571 CC lib/ftl/ftl_debug.o 00:03:49.571 CC lib/scsi/scsi_bdev.o 00:03:49.571 CC lib/nvmf/nvmf_rpc.o 00:03:49.572 CC lib/ftl/ftl_io.o 00:03:49.572 CC lib/scsi/scsi_pr.o 00:03:49.572 CC lib/scsi/scsi_rpc.o 00:03:49.572 CC lib/nvmf/transport.o 00:03:49.572 CC lib/scsi/task.o 00:03:49.572 CC lib/ftl/ftl_sb.o 00:03:49.572 CC lib/nvmf/vfio_user.o 00:03:49.572 CC lib/nvmf/tcp.o 00:03:49.572 CC lib/ftl/ftl_l2p_flat.o 00:03:49.572 CC lib/ftl/ftl_l2p.o 00:03:49.572 CC lib/nvmf/rdma.o 00:03:49.572 CC lib/ftl/ftl_nv_cache.o 00:03:49.572 CC lib/ftl/ftl_band_ops.o 00:03:49.572 CC lib/ftl/ftl_band.o 00:03:49.572 CC lib/ftl/ftl_rq.o 00:03:49.572 CC lib/ftl/ftl_writer.o 00:03:49.572 CC lib/ftl/ftl_reloc.o 00:03:49.572 CC lib/ftl/ftl_l2p_cache.o 00:03:49.572 CC lib/ftl/ftl_p2l.o 00:03:49.572 CC lib/ftl/mngt/ftl_mngt.o 00:03:49.572 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:49.572 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:49.572 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:49.572 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:49.572 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:49.572 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:49.572 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:49.572 CC 
lib/ftl/mngt/ftl_mngt_band.o 00:03:49.572 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:49.831 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:49.831 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:49.831 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:49.831 CC lib/ftl/utils/ftl_conf.o 00:03:49.831 CC lib/ftl/utils/ftl_md.o 00:03:49.831 CC lib/ftl/utils/ftl_mempool.o 00:03:49.831 CC lib/ftl/utils/ftl_bitmap.o 00:03:49.831 CC lib/ftl/utils/ftl_property.o 00:03:49.831 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:49.831 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:49.831 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:49.831 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:49.831 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:49.831 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:49.831 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:49.831 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:49.831 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:49.831 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:49.831 CC lib/ftl/base/ftl_base_dev.o 00:03:50.090 CC lib/ftl/base/ftl_base_bdev.o 00:03:50.090 CC lib/ftl/ftl_trace.o 00:03:50.347 LIB libspdk_nbd.a 00:03:50.347 SO libspdk_nbd.so.6.0 00:03:50.347 LIB libspdk_scsi.a 00:03:50.347 SYMLINK libspdk_nbd.so 00:03:50.347 SO libspdk_scsi.so.8.0 00:03:50.347 LIB libspdk_ublk.a 00:03:50.347 SYMLINK libspdk_scsi.so 00:03:50.604 SO libspdk_ublk.so.2.0 00:03:50.604 SYMLINK libspdk_ublk.so 00:03:50.604 CC lib/vhost/vhost.o 00:03:50.604 CC lib/iscsi/conn.o 00:03:50.604 CC lib/vhost/vhost_rpc.o 00:03:50.604 CC lib/iscsi/init_grp.o 00:03:50.604 CC lib/vhost/vhost_scsi.o 00:03:50.604 CC lib/iscsi/iscsi.o 00:03:50.604 CC lib/iscsi/md5.o 00:03:50.604 CC lib/vhost/vhost_blk.o 00:03:50.604 CC lib/iscsi/param.o 00:03:50.604 CC lib/vhost/rte_vhost_user.o 00:03:50.604 CC lib/iscsi/portal_grp.o 00:03:50.604 CC lib/iscsi/tgt_node.o 00:03:50.604 CC lib/iscsi/iscsi_subsystem.o 00:03:50.604 CC lib/iscsi/iscsi_rpc.o 00:03:50.604 CC lib/iscsi/task.o 00:03:50.863 LIB libspdk_ftl.a 00:03:50.863 SO libspdk_ftl.so.8.0 00:03:51.429 SYMLINK 
libspdk_ftl.so 00:03:51.687 LIB libspdk_vhost.a 00:03:51.687 SO libspdk_vhost.so.7.1 00:03:51.945 SYMLINK libspdk_vhost.so 00:03:51.945 LIB libspdk_nvmf.a 00:03:51.945 LIB libspdk_iscsi.a 00:03:51.945 SO libspdk_nvmf.so.17.0 00:03:51.945 SO libspdk_iscsi.so.7.0 00:03:52.203 SYMLINK libspdk_nvmf.so 00:03:52.203 SYMLINK libspdk_iscsi.so 00:03:52.461 CC module/vfu_device/vfu_virtio.o 00:03:52.461 CC module/vfu_device/vfu_virtio_blk.o 00:03:52.461 CC module/vfu_device/vfu_virtio_scsi.o 00:03:52.461 CC module/vfu_device/vfu_virtio_rpc.o 00:03:52.461 CC module/env_dpdk/env_dpdk_rpc.o 00:03:52.461 CC module/blob/bdev/blob_bdev.o 00:03:52.461 CC module/accel/ioat/accel_ioat.o 00:03:52.461 CC module/accel/ioat/accel_ioat_rpc.o 00:03:52.461 CC module/accel/error/accel_error.o 00:03:52.461 CC module/accel/error/accel_error_rpc.o 00:03:52.461 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:52.461 CC module/accel/iaa/accel_iaa.o 00:03:52.461 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:52.461 CC module/scheduler/gscheduler/gscheduler.o 00:03:52.461 CC module/accel/iaa/accel_iaa_rpc.o 00:03:52.461 CC module/accel/dsa/accel_dsa.o 00:03:52.461 CC module/sock/posix/posix.o 00:03:52.461 CC module/accel/dsa/accel_dsa_rpc.o 00:03:52.461 LIB libspdk_env_dpdk_rpc.a 00:03:52.461 SO libspdk_env_dpdk_rpc.so.5.0 00:03:52.461 LIB libspdk_scheduler_gscheduler.a 00:03:52.461 SYMLINK libspdk_env_dpdk_rpc.so 00:03:52.461 LIB libspdk_scheduler_dpdk_governor.a 00:03:52.720 SO libspdk_scheduler_gscheduler.so.3.0 00:03:52.720 SO libspdk_scheduler_dpdk_governor.so.3.0 00:03:52.720 LIB libspdk_accel_error.a 00:03:52.720 LIB libspdk_accel_ioat.a 00:03:52.720 LIB libspdk_scheduler_dynamic.a 00:03:52.720 LIB libspdk_accel_iaa.a 00:03:52.720 SO libspdk_accel_error.so.1.0 00:03:52.720 SO libspdk_accel_ioat.so.5.0 00:03:52.720 SYMLINK libspdk_scheduler_gscheduler.so 00:03:52.720 SO libspdk_scheduler_dynamic.so.3.0 00:03:52.720 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:52.720 SO 
libspdk_accel_iaa.so.2.0 00:03:52.720 LIB libspdk_accel_dsa.a 00:03:52.720 SYMLINK libspdk_accel_ioat.so 00:03:52.720 SYMLINK libspdk_accel_error.so 00:03:52.720 LIB libspdk_blob_bdev.a 00:03:52.720 SYMLINK libspdk_scheduler_dynamic.so 00:03:52.720 SO libspdk_accel_dsa.so.4.0 00:03:52.720 SYMLINK libspdk_accel_iaa.so 00:03:52.720 SO libspdk_blob_bdev.so.10.1 00:03:52.720 SYMLINK libspdk_accel_dsa.so 00:03:52.720 SYMLINK libspdk_blob_bdev.so 00:03:52.981 CC module/blobfs/bdev/blobfs_bdev.o 00:03:52.981 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:52.981 CC module/bdev/split/vbdev_split.o 00:03:52.981 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:52.981 CC module/bdev/gpt/gpt.o 00:03:52.981 CC module/bdev/split/vbdev_split_rpc.o 00:03:52.981 CC module/bdev/gpt/vbdev_gpt.o 00:03:52.981 CC module/bdev/error/vbdev_error.o 00:03:52.981 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:52.981 CC module/bdev/lvol/vbdev_lvol.o 00:03:52.981 CC module/bdev/delay/vbdev_delay.o 00:03:52.981 CC module/bdev/error/vbdev_error_rpc.o 00:03:52.981 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:52.981 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:52.981 CC module/bdev/passthru/vbdev_passthru.o 00:03:52.981 CC module/bdev/null/bdev_null.o 00:03:52.981 CC module/bdev/malloc/bdev_malloc.o 00:03:52.981 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:52.981 CC module/bdev/null/bdev_null_rpc.o 00:03:52.981 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:52.981 CC module/bdev/ftl/bdev_ftl.o 00:03:52.981 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:52.981 CC module/bdev/raid/bdev_raid.o 00:03:52.981 CC module/bdev/raid/bdev_raid_rpc.o 00:03:52.981 CC module/bdev/raid/bdev_raid_sb.o 00:03:52.981 CC module/bdev/raid/raid0.o 00:03:52.981 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:52.981 CC module/bdev/iscsi/bdev_iscsi.o 00:03:52.981 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:52.981 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:52.981 CC module/bdev/nvme/bdev_nvme.o 00:03:52.981 CC 
module/bdev/raid/raid1.o 00:03:52.981 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:52.981 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:52.981 CC module/bdev/nvme/nvme_rpc.o 00:03:52.981 CC module/bdev/raid/concat.o 00:03:52.981 CC module/bdev/nvme/bdev_mdns_client.o 00:03:52.981 CC module/bdev/nvme/vbdev_opal.o 00:03:52.981 CC module/bdev/aio/bdev_aio.o 00:03:52.981 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:52.981 CC module/bdev/aio/bdev_aio_rpc.o 00:03:52.981 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:52.981 LIB libspdk_vfu_device.a 00:03:52.981 SO libspdk_vfu_device.so.2.0 00:03:53.276 SYMLINK libspdk_vfu_device.so 00:03:53.276 LIB libspdk_sock_posix.a 00:03:53.276 LIB libspdk_blobfs_bdev.a 00:03:53.276 SO libspdk_sock_posix.so.5.0 00:03:53.276 LIB libspdk_bdev_gpt.a 00:03:53.276 LIB libspdk_bdev_passthru.a 00:03:53.276 SO libspdk_bdev_gpt.so.5.0 00:03:53.276 SO libspdk_blobfs_bdev.so.5.0 00:03:53.276 SO libspdk_bdev_passthru.so.5.0 00:03:53.276 LIB libspdk_bdev_split.a 00:03:53.536 SYMLINK libspdk_sock_posix.so 00:03:53.536 SYMLINK libspdk_bdev_gpt.so 00:03:53.536 SYMLINK libspdk_blobfs_bdev.so 00:03:53.536 SO libspdk_bdev_split.so.5.0 00:03:53.536 LIB libspdk_bdev_error.a 00:03:53.536 SYMLINK libspdk_bdev_passthru.so 00:03:53.536 LIB libspdk_bdev_ftl.a 00:03:53.536 LIB libspdk_bdev_null.a 00:03:53.536 SO libspdk_bdev_error.so.5.0 00:03:53.536 SYMLINK libspdk_bdev_split.so 00:03:53.536 SO libspdk_bdev_null.so.5.0 00:03:53.536 SO libspdk_bdev_ftl.so.5.0 00:03:53.536 LIB libspdk_bdev_zone_block.a 00:03:53.536 LIB libspdk_bdev_malloc.a 00:03:53.536 SYMLINK libspdk_bdev_error.so 00:03:53.536 SO libspdk_bdev_zone_block.so.5.0 00:03:53.536 SYMLINK libspdk_bdev_null.so 00:03:53.536 SYMLINK libspdk_bdev_ftl.so 00:03:53.536 LIB libspdk_bdev_aio.a 00:03:53.536 SO libspdk_bdev_malloc.so.5.0 00:03:53.536 LIB libspdk_bdev_delay.a 00:03:53.536 SO libspdk_bdev_aio.so.5.0 00:03:53.536 LIB libspdk_bdev_iscsi.a 00:03:53.536 SO libspdk_bdev_delay.so.5.0 00:03:53.536 SYMLINK 
libspdk_bdev_zone_block.so 00:03:53.536 SO libspdk_bdev_iscsi.so.5.0 00:03:53.536 SYMLINK libspdk_bdev_malloc.so 00:03:53.536 SYMLINK libspdk_bdev_aio.so 00:03:53.536 SYMLINK libspdk_bdev_delay.so 00:03:53.536 SYMLINK libspdk_bdev_iscsi.so 00:03:53.536 LIB libspdk_bdev_virtio.a 00:03:53.793 SO libspdk_bdev_virtio.so.5.0 00:03:53.793 LIB libspdk_bdev_lvol.a 00:03:53.793 SO libspdk_bdev_lvol.so.5.0 00:03:53.793 SYMLINK libspdk_bdev_virtio.so 00:03:53.793 SYMLINK libspdk_bdev_lvol.so 00:03:54.050 LIB libspdk_bdev_raid.a 00:03:54.050 SO libspdk_bdev_raid.so.5.0 00:03:54.308 SYMLINK libspdk_bdev_raid.so 00:03:55.241 LIB libspdk_bdev_nvme.a 00:03:55.241 SO libspdk_bdev_nvme.so.6.0 00:03:55.241 SYMLINK libspdk_bdev_nvme.so 00:03:55.498 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:55.498 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:55.498 CC module/event/subsystems/iobuf/iobuf.o 00:03:55.498 CC module/event/subsystems/scheduler/scheduler.o 00:03:55.498 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:55.498 CC module/event/subsystems/vmd/vmd.o 00:03:55.498 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:55.498 CC module/event/subsystems/sock/sock.o 00:03:55.757 LIB libspdk_event_sock.a 00:03:55.757 LIB libspdk_event_vhost_blk.a 00:03:55.757 LIB libspdk_event_scheduler.a 00:03:55.757 LIB libspdk_event_vmd.a 00:03:55.757 LIB libspdk_event_vfu_tgt.a 00:03:55.757 SO libspdk_event_sock.so.4.0 00:03:55.757 LIB libspdk_event_iobuf.a 00:03:55.757 SO libspdk_event_scheduler.so.3.0 00:03:55.757 SO libspdk_event_vhost_blk.so.2.0 00:03:55.757 SO libspdk_event_vfu_tgt.so.2.0 00:03:55.757 SO libspdk_event_vmd.so.5.0 00:03:55.757 SO libspdk_event_iobuf.so.2.0 00:03:55.757 SYMLINK libspdk_event_sock.so 00:03:55.757 SYMLINK libspdk_event_vhost_blk.so 00:03:55.757 SYMLINK libspdk_event_scheduler.so 00:03:55.757 SYMLINK libspdk_event_vfu_tgt.so 00:03:55.757 SYMLINK libspdk_event_vmd.so 00:03:55.757 SYMLINK libspdk_event_iobuf.so 00:03:56.015 CC 
module/event/subsystems/accel/accel.o 00:03:56.015 LIB libspdk_event_accel.a 00:03:56.015 SO libspdk_event_accel.so.5.0 00:03:56.015 SYMLINK libspdk_event_accel.so 00:03:56.273 CC module/event/subsystems/bdev/bdev.o 00:03:56.532 LIB libspdk_event_bdev.a 00:03:56.532 SO libspdk_event_bdev.so.5.0 00:03:56.532 SYMLINK libspdk_event_bdev.so 00:03:56.532 CC module/event/subsystems/scsi/scsi.o 00:03:56.532 CC module/event/subsystems/nbd/nbd.o 00:03:56.532 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:56.532 CC module/event/subsystems/ublk/ublk.o 00:03:56.532 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:56.790 LIB libspdk_event_ublk.a 00:03:56.790 LIB libspdk_event_nbd.a 00:03:56.790 LIB libspdk_event_scsi.a 00:03:56.790 SO libspdk_event_ublk.so.2.0 00:03:56.790 SO libspdk_event_nbd.so.5.0 00:03:56.790 SO libspdk_event_scsi.so.5.0 00:03:56.790 SYMLINK libspdk_event_ublk.so 00:03:56.790 SYMLINK libspdk_event_nbd.so 00:03:56.790 SYMLINK libspdk_event_scsi.so 00:03:56.790 LIB libspdk_event_nvmf.a 00:03:56.790 SO libspdk_event_nvmf.so.5.0 00:03:57.047 SYMLINK libspdk_event_nvmf.so 00:03:57.047 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:57.047 CC module/event/subsystems/iscsi/iscsi.o 00:03:57.047 LIB libspdk_event_vhost_scsi.a 00:03:57.047 LIB libspdk_event_iscsi.a 00:03:57.047 SO libspdk_event_vhost_scsi.so.2.0 00:03:57.305 SO libspdk_event_iscsi.so.5.0 00:03:57.305 SYMLINK libspdk_event_vhost_scsi.so 00:03:57.305 SYMLINK libspdk_event_iscsi.so 00:03:57.305 SO libspdk.so.5.0 00:03:57.305 SYMLINK libspdk.so 00:03:57.567 CC app/trace_record/trace_record.o 00:03:57.567 CC test/rpc_client/rpc_client_test.o 00:03:57.567 TEST_HEADER include/spdk/accel.h 00:03:57.567 CC app/spdk_top/spdk_top.o 00:03:57.567 TEST_HEADER include/spdk/accel_module.h 00:03:57.567 CXX app/trace/trace.o 00:03:57.567 TEST_HEADER include/spdk/assert.h 00:03:57.567 CC app/spdk_nvme_identify/identify.o 00:03:57.567 CC app/spdk_lspci/spdk_lspci.o 00:03:57.567 CC app/spdk_nvme_perf/perf.o 
00:03:57.567 TEST_HEADER include/spdk/barrier.h 00:03:57.567 CC app/spdk_nvme_discover/discovery_aer.o 00:03:57.567 TEST_HEADER include/spdk/base64.h 00:03:57.567 TEST_HEADER include/spdk/bdev.h 00:03:57.567 TEST_HEADER include/spdk/bdev_module.h 00:03:57.567 TEST_HEADER include/spdk/bdev_zone.h 00:03:57.567 TEST_HEADER include/spdk/bit_array.h 00:03:57.567 TEST_HEADER include/spdk/bit_pool.h 00:03:57.567 TEST_HEADER include/spdk/blob_bdev.h 00:03:57.567 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:57.567 TEST_HEADER include/spdk/blobfs.h 00:03:57.567 TEST_HEADER include/spdk/blob.h 00:03:57.567 TEST_HEADER include/spdk/conf.h 00:03:57.567 TEST_HEADER include/spdk/config.h 00:03:57.567 TEST_HEADER include/spdk/cpuset.h 00:03:57.567 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:57.567 TEST_HEADER include/spdk/crc16.h 00:03:57.567 TEST_HEADER include/spdk/crc32.h 00:03:57.567 TEST_HEADER include/spdk/crc64.h 00:03:57.567 TEST_HEADER include/spdk/dif.h 00:03:57.567 CC app/spdk_dd/spdk_dd.o 00:03:57.567 TEST_HEADER include/spdk/dma.h 00:03:57.567 TEST_HEADER include/spdk/endian.h 00:03:57.567 TEST_HEADER include/spdk/env_dpdk.h 00:03:57.567 CC examples/ioat/perf/perf.o 00:03:57.567 CC app/nvmf_tgt/nvmf_main.o 00:03:57.567 CC examples/idxd/perf/perf.o 00:03:57.567 TEST_HEADER include/spdk/env.h 00:03:57.567 CC examples/accel/perf/accel_perf.o 00:03:57.567 CC app/iscsi_tgt/iscsi_tgt.o 00:03:57.567 TEST_HEADER include/spdk/event.h 00:03:57.567 CC test/event/event_perf/event_perf.o 00:03:57.567 TEST_HEADER include/spdk/fd_group.h 00:03:57.567 CC test/event/reactor_perf/reactor_perf.o 00:03:57.567 CC examples/nvme/hello_world/hello_world.o 00:03:57.567 CC examples/ioat/verify/verify.o 00:03:57.567 TEST_HEADER include/spdk/fd.h 00:03:57.567 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:57.567 CC test/event/reactor/reactor.o 00:03:57.567 TEST_HEADER include/spdk/file.h 00:03:57.567 CC examples/nvme/reconnect/reconnect.o 00:03:57.567 CC 
examples/nvme/arbitration/arbitration.o 00:03:57.567 CC test/nvme/aer/aer.o 00:03:57.567 CC examples/util/zipf/zipf.o 00:03:57.567 CC test/thread/poller_perf/poller_perf.o 00:03:57.567 CC app/vhost/vhost.o 00:03:57.567 TEST_HEADER include/spdk/ftl.h 00:03:57.567 CC examples/nvme/hotplug/hotplug.o 00:03:57.567 TEST_HEADER include/spdk/gpt_spec.h 00:03:57.567 CC examples/sock/hello_world/hello_sock.o 00:03:57.567 CC examples/vmd/lsvmd/lsvmd.o 00:03:57.567 TEST_HEADER include/spdk/hexlify.h 00:03:57.567 TEST_HEADER include/spdk/histogram_data.h 00:03:57.567 CC app/fio/nvme/fio_plugin.o 00:03:57.567 TEST_HEADER include/spdk/idxd.h 00:03:57.567 CC test/event/app_repeat/app_repeat.o 00:03:57.567 CC app/spdk_tgt/spdk_tgt.o 00:03:57.567 TEST_HEADER include/spdk/idxd_spec.h 00:03:57.567 TEST_HEADER include/spdk/init.h 00:03:57.567 TEST_HEADER include/spdk/ioat.h 00:03:57.567 TEST_HEADER include/spdk/ioat_spec.h 00:03:57.567 TEST_HEADER include/spdk/iscsi_spec.h 00:03:57.567 CC examples/blob/cli/blobcli.o 00:03:57.567 TEST_HEADER include/spdk/json.h 00:03:57.567 TEST_HEADER include/spdk/jsonrpc.h 00:03:57.567 CC test/accel/dif/dif.o 00:03:57.567 CC examples/nvmf/nvmf/nvmf.o 00:03:57.567 TEST_HEADER include/spdk/likely.h 00:03:57.567 CC examples/bdev/hello_world/hello_bdev.o 00:03:57.567 TEST_HEADER include/spdk/log.h 00:03:57.567 CC examples/thread/thread/thread_ex.o 00:03:57.567 CC test/blobfs/mkfs/mkfs.o 00:03:57.567 TEST_HEADER include/spdk/lvol.h 00:03:57.567 CC examples/bdev/bdevperf/bdevperf.o 00:03:57.567 CC examples/blob/hello_world/hello_blob.o 00:03:57.567 TEST_HEADER include/spdk/memory.h 00:03:57.567 TEST_HEADER include/spdk/mmio.h 00:03:57.567 TEST_HEADER include/spdk/nbd.h 00:03:57.567 CC test/dma/test_dma/test_dma.o 00:03:57.567 TEST_HEADER include/spdk/notify.h 00:03:57.567 CC test/app/bdev_svc/bdev_svc.o 00:03:57.567 CC test/env/mem_callbacks/mem_callbacks.o 00:03:57.567 CC test/event/scheduler/scheduler.o 00:03:57.567 TEST_HEADER include/spdk/nvme.h 
00:03:57.567 TEST_HEADER include/spdk/nvme_intel.h 00:03:57.567 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:57.567 CC test/bdev/bdevio/bdevio.o 00:03:57.567 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:57.567 TEST_HEADER include/spdk/nvme_spec.h 00:03:57.567 TEST_HEADER include/spdk/nvme_zns.h 00:03:57.567 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:57.567 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:57.567 CC test/lvol/esnap/esnap.o 00:03:57.567 TEST_HEADER include/spdk/nvmf.h 00:03:57.567 TEST_HEADER include/spdk/nvmf_spec.h 00:03:57.567 TEST_HEADER include/spdk/nvmf_transport.h 00:03:57.830 TEST_HEADER include/spdk/opal.h 00:03:57.830 TEST_HEADER include/spdk/opal_spec.h 00:03:57.830 TEST_HEADER include/spdk/pci_ids.h 00:03:57.830 TEST_HEADER include/spdk/pipe.h 00:03:57.830 TEST_HEADER include/spdk/queue.h 00:03:57.830 TEST_HEADER include/spdk/reduce.h 00:03:57.830 TEST_HEADER include/spdk/rpc.h 00:03:57.830 TEST_HEADER include/spdk/scheduler.h 00:03:57.830 TEST_HEADER include/spdk/scsi.h 00:03:57.830 TEST_HEADER include/spdk/scsi_spec.h 00:03:57.830 TEST_HEADER include/spdk/sock.h 00:03:57.830 TEST_HEADER include/spdk/stdinc.h 00:03:57.830 TEST_HEADER include/spdk/string.h 00:03:57.830 TEST_HEADER include/spdk/thread.h 00:03:57.830 TEST_HEADER include/spdk/trace.h 00:03:57.830 TEST_HEADER include/spdk/trace_parser.h 00:03:57.830 TEST_HEADER include/spdk/tree.h 00:03:57.830 TEST_HEADER include/spdk/ublk.h 00:03:57.830 TEST_HEADER include/spdk/util.h 00:03:57.830 TEST_HEADER include/spdk/uuid.h 00:03:57.830 TEST_HEADER include/spdk/version.h 00:03:57.830 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:57.830 LINK spdk_lspci 00:03:57.830 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:57.830 TEST_HEADER include/spdk/vhost.h 00:03:57.830 TEST_HEADER include/spdk/vmd.h 00:03:57.830 TEST_HEADER include/spdk/xor.h 00:03:57.830 TEST_HEADER include/spdk/zipf.h 00:03:57.830 CXX test/cpp_headers/accel.o 00:03:57.830 LINK rpc_client_test 00:03:57.830 LINK reactor_perf 
00:03:57.830 LINK lsvmd 00:03:57.830 LINK reactor 00:03:57.830 LINK event_perf 00:03:57.830 LINK poller_perf 00:03:57.830 LINK spdk_nvme_discover 00:03:57.830 LINK zipf 00:03:57.830 LINK interrupt_tgt 00:03:57.830 LINK app_repeat 00:03:57.830 LINK nvmf_tgt 00:03:57.830 LINK vhost 00:03:58.095 LINK spdk_trace_record 00:03:58.095 LINK ioat_perf 00:03:58.095 LINK iscsi_tgt 00:03:58.095 LINK verify 00:03:58.095 LINK spdk_tgt 00:03:58.095 LINK hello_world 00:03:58.095 LINK bdev_svc 00:03:58.095 LINK hotplug 00:03:58.095 LINK mkfs 00:03:58.095 LINK hello_sock 00:03:58.095 LINK scheduler 00:03:58.095 LINK hello_bdev 00:03:58.095 LINK aer 00:03:58.095 LINK hello_blob 00:03:58.095 LINK thread 00:03:58.095 CXX test/cpp_headers/accel_module.o 00:03:58.095 LINK idxd_perf 00:03:58.095 CC test/env/vtophys/vtophys.o 00:03:58.095 LINK nvmf 00:03:58.095 LINK arbitration 00:03:58.362 LINK reconnect 00:03:58.362 LINK spdk_dd 00:03:58.362 CC examples/vmd/led/led.o 00:03:58.362 CC test/nvme/reset/reset.o 00:03:58.362 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:58.362 CXX test/cpp_headers/assert.o 00:03:58.362 LINK spdk_trace 00:03:58.362 CXX test/cpp_headers/barrier.o 00:03:58.362 CC examples/nvme/abort/abort.o 00:03:58.362 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:58.362 CC test/env/memory/memory_ut.o 00:03:58.362 LINK dif 00:03:58.362 CC app/fio/bdev/fio_plugin.o 00:03:58.362 LINK test_dma 00:03:58.362 CC test/env/pci/pci_ut.o 00:03:58.362 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:58.362 CXX test/cpp_headers/base64.o 00:03:58.362 CXX test/cpp_headers/bdev.o 00:03:58.362 CC test/app/histogram_perf/histogram_perf.o 00:03:58.362 LINK bdevio 00:03:58.362 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:58.362 CXX test/cpp_headers/bdev_module.o 00:03:58.362 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:58.362 CC test/nvme/sgl/sgl.o 00:03:58.362 CC test/app/jsoncat/jsoncat.o 00:03:58.362 LINK accel_perf 00:03:58.626 LINK vtophys 00:03:58.626 LINK nvme_manage 
00:03:58.626 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:58.626 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:58.626 CXX test/cpp_headers/bdev_zone.o 00:03:58.626 CC test/app/stub/stub.o 00:03:58.626 LINK blobcli 00:03:58.626 CC test/nvme/overhead/overhead.o 00:03:58.626 LINK led 00:03:58.626 CC test/nvme/e2edp/nvme_dp.o 00:03:58.626 CC test/nvme/err_injection/err_injection.o 00:03:58.626 CC test/nvme/startup/startup.o 00:03:58.626 CC test/nvme/reserve/reserve.o 00:03:58.626 LINK cmb_copy 00:03:58.626 CC test/nvme/connect_stress/connect_stress.o 00:03:58.626 CC test/nvme/boot_partition/boot_partition.o 00:03:58.626 CXX test/cpp_headers/bit_array.o 00:03:58.626 LINK spdk_nvme 00:03:58.626 CC test/nvme/simple_copy/simple_copy.o 00:03:58.626 CXX test/cpp_headers/bit_pool.o 00:03:58.626 CXX test/cpp_headers/blob_bdev.o 00:03:58.626 LINK env_dpdk_post_init 00:03:58.626 LINK histogram_perf 00:03:58.626 CC test/nvme/compliance/nvme_compliance.o 00:03:58.626 CXX test/cpp_headers/blobfs_bdev.o 00:03:58.626 CC test/nvme/fused_ordering/fused_ordering.o 00:03:58.893 LINK jsoncat 00:03:58.893 CXX test/cpp_headers/blobfs.o 00:03:58.893 CXX test/cpp_headers/blob.o 00:03:58.893 CXX test/cpp_headers/conf.o 00:03:58.893 CXX test/cpp_headers/config.o 00:03:58.893 LINK pmr_persistence 00:03:58.893 LINK reset 00:03:58.893 CXX test/cpp_headers/cpuset.o 00:03:58.893 LINK mem_callbacks 00:03:58.893 CXX test/cpp_headers/crc16.o 00:03:58.893 CXX test/cpp_headers/crc32.o 00:03:58.893 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:58.893 CXX test/cpp_headers/crc64.o 00:03:58.893 CC test/nvme/fdp/fdp.o 00:03:58.893 CXX test/cpp_headers/dif.o 00:03:58.893 CC test/nvme/cuse/cuse.o 00:03:58.893 LINK stub 00:03:58.893 LINK startup 00:03:58.893 CXX test/cpp_headers/dma.o 00:03:58.893 CXX test/cpp_headers/endian.o 00:03:58.894 LINK spdk_nvme_perf 00:03:58.894 CXX test/cpp_headers/env_dpdk.o 00:03:58.894 LINK boot_partition 00:03:58.894 LINK err_injection 00:03:58.894 CXX 
test/cpp_headers/env.o 00:03:58.894 CXX test/cpp_headers/event.o 00:03:58.894 LINK sgl 00:03:58.894 LINK reserve 00:03:58.894 CXX test/cpp_headers/fd_group.o 00:03:58.894 LINK connect_stress 00:03:58.894 LINK abort 00:03:59.161 CXX test/cpp_headers/fd.o 00:03:59.161 LINK spdk_nvme_identify 00:03:59.161 LINK bdevperf 00:03:59.161 CXX test/cpp_headers/file.o 00:03:59.161 LINK spdk_top 00:03:59.161 LINK pci_ut 00:03:59.161 LINK simple_copy 00:03:59.161 CXX test/cpp_headers/ftl.o 00:03:59.161 CXX test/cpp_headers/gpt_spec.o 00:03:59.161 LINK nvme_dp 00:03:59.161 CXX test/cpp_headers/hexlify.o 00:03:59.161 CXX test/cpp_headers/histogram_data.o 00:03:59.161 CXX test/cpp_headers/idxd.o 00:03:59.161 CXX test/cpp_headers/idxd_spec.o 00:03:59.161 CXX test/cpp_headers/init.o 00:03:59.161 LINK overhead 00:03:59.161 LINK nvme_fuzz 00:03:59.161 CXX test/cpp_headers/ioat.o 00:03:59.161 CXX test/cpp_headers/ioat_spec.o 00:03:59.161 LINK fused_ordering 00:03:59.161 CXX test/cpp_headers/iscsi_spec.o 00:03:59.161 CXX test/cpp_headers/json.o 00:03:59.161 CXX test/cpp_headers/jsonrpc.o 00:03:59.161 CXX test/cpp_headers/likely.o 00:03:59.161 LINK doorbell_aers 00:03:59.161 CXX test/cpp_headers/log.o 00:03:59.161 CXX test/cpp_headers/lvol.o 00:03:59.425 CXX test/cpp_headers/memory.o 00:03:59.425 LINK spdk_bdev 00:03:59.425 CXX test/cpp_headers/mmio.o 00:03:59.425 CXX test/cpp_headers/nbd.o 00:03:59.425 CXX test/cpp_headers/notify.o 00:03:59.425 CXX test/cpp_headers/nvme.o 00:03:59.425 LINK vhost_fuzz 00:03:59.425 CXX test/cpp_headers/nvme_intel.o 00:03:59.425 CXX test/cpp_headers/nvme_ocssd.o 00:03:59.425 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:59.425 CXX test/cpp_headers/nvme_spec.o 00:03:59.425 CXX test/cpp_headers/nvme_zns.o 00:03:59.425 CXX test/cpp_headers/nvmf_cmd.o 00:03:59.425 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:59.425 CXX test/cpp_headers/nvmf.o 00:03:59.425 LINK nvme_compliance 00:03:59.425 CXX test/cpp_headers/nvmf_spec.o 00:03:59.425 CXX 
test/cpp_headers/nvmf_transport.o 00:03:59.425 CXX test/cpp_headers/opal.o 00:03:59.425 CXX test/cpp_headers/opal_spec.o 00:03:59.425 CXX test/cpp_headers/pci_ids.o 00:03:59.425 CXX test/cpp_headers/pipe.o 00:03:59.425 CXX test/cpp_headers/queue.o 00:03:59.425 CXX test/cpp_headers/reduce.o 00:03:59.425 CXX test/cpp_headers/rpc.o 00:03:59.425 CXX test/cpp_headers/scheduler.o 00:03:59.425 CXX test/cpp_headers/scsi.o 00:03:59.425 CXX test/cpp_headers/scsi_spec.o 00:03:59.425 CXX test/cpp_headers/sock.o 00:03:59.425 CXX test/cpp_headers/stdinc.o 00:03:59.425 CXX test/cpp_headers/string.o 00:03:59.425 CXX test/cpp_headers/thread.o 00:03:59.425 CXX test/cpp_headers/trace.o 00:03:59.425 LINK fdp 00:03:59.425 CXX test/cpp_headers/trace_parser.o 00:03:59.425 CXX test/cpp_headers/tree.o 00:03:59.425 CXX test/cpp_headers/ublk.o 00:03:59.425 CXX test/cpp_headers/util.o 00:03:59.425 CXX test/cpp_headers/uuid.o 00:03:59.425 CXX test/cpp_headers/version.o 00:03:59.683 CXX test/cpp_headers/vfio_user_pci.o 00:03:59.683 CXX test/cpp_headers/vfio_user_spec.o 00:03:59.683 CXX test/cpp_headers/vhost.o 00:03:59.683 CXX test/cpp_headers/vmd.o 00:03:59.683 CXX test/cpp_headers/xor.o 00:03:59.683 CXX test/cpp_headers/zipf.o 00:03:59.941 LINK memory_ut 00:04:00.506 LINK cuse 00:04:00.506 LINK iscsi_fuzz 00:04:03.037 LINK esnap 00:04:03.296 00:04:03.296 real 0m38.190s 00:04:03.296 user 7m16.719s 00:04:03.296 sys 1m36.966s 00:04:03.296 01:25:16 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:04:03.296 01:25:16 -- common/autotest_common.sh@10 -- $ set +x 00:04:03.296 ************************************ 00:04:03.296 END TEST make 00:04:03.296 ************************************ 00:04:03.296 01:25:16 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:03.296 01:25:16 -- nvmf/common.sh@7 -- # uname -s 00:04:03.296 01:25:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:03.296 01:25:16 -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:04:03.296 01:25:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:03.296 01:25:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:03.296 01:25:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:03.296 01:25:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:03.296 01:25:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:03.296 01:25:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:03.296 01:25:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:03.296 01:25:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:03.296 01:25:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:04:03.296 01:25:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:04:03.296 01:25:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:03.296 01:25:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:03.296 01:25:16 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:03.296 01:25:16 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:03.296 01:25:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:03.296 01:25:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:03.296 01:25:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:03.296 01:25:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:03.297 01:25:16 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:03.297 01:25:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:03.297 01:25:16 -- paths/export.sh@5 -- # export PATH 00:04:03.297 01:25:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:03.297 01:25:16 -- nvmf/common.sh@46 -- # : 0 00:04:03.297 01:25:16 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:03.297 01:25:16 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:03.297 01:25:16 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:03.297 01:25:16 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:03.297 01:25:16 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:03.297 01:25:16 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:03.297 01:25:16 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:03.297 01:25:16 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:03.297 01:25:16 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:03.297 01:25:16 -- spdk/autotest.sh@32 -- # uname -s 00:04:03.297 01:25:16 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:03.297 01:25:16 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:03.297 01:25:16 -- spdk/autotest.sh@34 -- # mkdir -p 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:04:03.297 01:25:16 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:04:03.297 01:25:16 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:04:03.297 01:25:16 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:03.297 01:25:16 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:03.297 01:25:16 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:03.297 01:25:16 -- spdk/autotest.sh@48 -- # udevadm_pid=3624596 00:04:03.297 01:25:16 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:03.297 01:25:16 -- spdk/autotest.sh@51 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:04:03.297 01:25:16 -- spdk/autotest.sh@54 -- # echo 3624598 00:04:03.297 01:25:16 -- spdk/autotest.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:04:03.297 01:25:16 -- spdk/autotest.sh@56 -- # echo 3624599 00:04:03.297 01:25:16 -- spdk/autotest.sh@58 -- # [[ ............................... 
!= QEMU ]] 00:04:03.297 01:25:16 -- spdk/autotest.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:04:03.297 01:25:16 -- spdk/autotest.sh@60 -- # echo 3624600 00:04:03.297 01:25:16 -- spdk/autotest.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l 00:04:03.297 01:25:16 -- spdk/autotest.sh@62 -- # echo 3624601 00:04:03.297 01:25:16 -- spdk/autotest.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l 00:04:03.297 01:25:16 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:03.297 01:25:16 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:04:03.297 01:25:16 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:03.297 01:25:16 -- common/autotest_common.sh@10 -- # set +x 00:04:03.297 01:25:16 -- spdk/autotest.sh@70 -- # create_test_list 00:04:03.297 01:25:16 -- common/autotest_common.sh@736 -- # xtrace_disable 00:04:03.297 01:25:16 -- common/autotest_common.sh@10 -- # set +x 00:04:03.297 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.bmc.pm.log 00:04:03.556 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pm.log 00:04:03.556 01:25:16 -- spdk/autotest.sh@72 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:04:03.556 01:25:16 -- spdk/autotest.sh@72 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:03.556 01:25:16 -- spdk/autotest.sh@72 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:03.556 01:25:16 -- spdk/autotest.sh@73 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:04:03.556 01:25:16 -- 
spdk/autotest.sh@74 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:03.556 01:25:16 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:04:03.556 01:25:16 -- common/autotest_common.sh@1440 -- # uname 00:04:03.556 01:25:16 -- common/autotest_common.sh@1440 -- # '[' Linux = FreeBSD ']' 00:04:03.556 01:25:16 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:04:03.556 01:25:16 -- common/autotest_common.sh@1460 -- # uname 00:04:03.556 01:25:16 -- common/autotest_common.sh@1460 -- # [[ Linux = FreeBSD ]] 00:04:03.556 01:25:16 -- spdk/autotest.sh@82 -- # grep CC_TYPE mk/cc.mk 00:04:03.556 01:25:16 -- spdk/autotest.sh@82 -- # CC_TYPE=CC_TYPE=gcc 00:04:03.556 01:25:16 -- spdk/autotest.sh@83 -- # hash lcov 00:04:03.556 01:25:16 -- spdk/autotest.sh@83 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:04:03.556 01:25:16 -- spdk/autotest.sh@91 -- # export 'LCOV_OPTS= 00:04:03.556 --rc lcov_branch_coverage=1 00:04:03.556 --rc lcov_function_coverage=1 00:04:03.556 --rc genhtml_branch_coverage=1 00:04:03.556 --rc genhtml_function_coverage=1 00:04:03.556 --rc genhtml_legend=1 00:04:03.556 --rc geninfo_all_blocks=1 00:04:03.556 ' 00:04:03.556 01:25:16 -- spdk/autotest.sh@91 -- # LCOV_OPTS=' 00:04:03.556 --rc lcov_branch_coverage=1 00:04:03.556 --rc lcov_function_coverage=1 00:04:03.556 --rc genhtml_branch_coverage=1 00:04:03.556 --rc genhtml_function_coverage=1 00:04:03.556 --rc genhtml_legend=1 00:04:03.556 --rc geninfo_all_blocks=1 00:04:03.556 ' 00:04:03.556 01:25:16 -- spdk/autotest.sh@92 -- # export 'LCOV=lcov 00:04:03.556 --rc lcov_branch_coverage=1 00:04:03.556 --rc lcov_function_coverage=1 00:04:03.556 --rc genhtml_branch_coverage=1 00:04:03.556 --rc genhtml_function_coverage=1 00:04:03.556 --rc genhtml_legend=1 00:04:03.556 --rc geninfo_all_blocks=1 00:04:03.556 --no-external' 00:04:03.556 01:25:16 -- spdk/autotest.sh@92 -- # LCOV='lcov 00:04:03.556 --rc lcov_branch_coverage=1 00:04:03.556 --rc lcov_function_coverage=1 00:04:03.556 --rc 
genhtml_branch_coverage=1 00:04:03.556 --rc genhtml_function_coverage=1 00:04:03.556 --rc genhtml_legend=1 00:04:03.556 --rc geninfo_all_blocks=1 00:04:03.556 --no-external' 00:04:03.556 01:25:16 -- spdk/autotest.sh@94 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:04:03.556 lcov: LCOV version 1.14 00:04:03.556 01:25:16 -- spdk/autotest.sh@96 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:04:06.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:04:06.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:04:06.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:04:06.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:04:06.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:04:06.848 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:04:33.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:33.401 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:04:33.401 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:33.401 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:04:33.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:04:33.401 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:04:33.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:04:33.401 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:04:33.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:04:33.401 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:04:33.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:04:33.401 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:04:33.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:04:33.401 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:04:33.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:04:33.401 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:04:33.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:04:33.401 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:04:33.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:04:33.401 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:04:33.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:04:33.401 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:04:33.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:04:33.401 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:04:33.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:04:33.401 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:04:33.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:04:33.401 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:04:33.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:04:33.401 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:04:33.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:04:33.401 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:04:33.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 
00:04:33.401 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:04:33.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:04:33.401 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:04:33.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:04:33.401 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:04:33.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:04:33.401 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:04:33.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:04:33.401 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:04:33.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:04:33.401 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:04:33.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:04:33.401 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:04:33.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:04:33.401 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:04:33.401 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:04:33.401 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:04:33.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:04:33.401 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:04:33.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:04:33.401 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:04:33.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:04:33.401 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:04:33.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:04:33.401 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:04:33.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:04:33.401 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:04:33.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:04:33.401 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:04:33.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:04:33.401 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:04:33.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:04:33.401 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:04:33.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:04:33.401 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:04:33.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:04:33.401 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:04:33.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:04:33.401 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:04:33.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:04:33.401 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:04:33.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:04:33.401 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:04:33.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:04:33.401 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:04:33.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 
00:04:33.401 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:04:33.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:04:33.401 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:04:33.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:04:33.401 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:04:33.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:04:33.401 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:04:33.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:04:33.401 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:04:33.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:04:33.401 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:04:33.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:04:33.401 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:04:33.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:04:33.401 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:04:33.401 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:04:33.401 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:04:33.402 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:04:33.402 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:04:33.402 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:04:33.402 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:04:33.402 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:04:33.402 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:04:33.402 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:04:33.402 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:04:33.402 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:04:33.402 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:04:33.402 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:04:33.402 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:04:33.402 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:04:33.402 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:04:33.402 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:04:33.402 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:04:33.402 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:04:33.402 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:04:33.402 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:04:33.402 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:04:33.402 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:04:33.402 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:04:33.402 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:04:33.402 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:04:33.402 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:04:33.402 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:04:33.402 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:04:33.402 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:04:33.402 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no 
functions found 00:04:33.402 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:04:33.402 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:04:33.402 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:04:33.402 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:04:33.402 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:04:33.402 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:04:33.402 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:04:33.402 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:04:33.402 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:04:33.402 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:04:33.402 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:04:33.402 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:04:33.402 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:04:33.402 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:04:33.402 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:04:33.402 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:04:33.402 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:04:33.402 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:04:33.402 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:04:33.402 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:04:33.402 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:04:33.402 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:04:33.402 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:04:33.402 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:04:33.402 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:04:33.402 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:04:33.402 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:04:33.402 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:04:33.402 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:04:33.402 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:04:33.402 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:04:33.402 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:04:33.402 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:04:33.402 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:04:33.402 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:04:33.402 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:04:33.402 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:04:33.402 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:04:33.402 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:04:33.402 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:04:33.402 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:04:33.402 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:04:33.402 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:04:33.402 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:04:33.402 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:04:33.402 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:04:33.402 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:04:35.930 01:25:48 -- spdk/autotest.sh@100 -- # timing_enter pre_cleanup 00:04:35.930 01:25:48 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:35.930 01:25:48 -- common/autotest_common.sh@10 -- # set +x 00:04:35.930 01:25:48 -- spdk/autotest.sh@102 -- # rm -f 00:04:35.930 01:25:48 -- spdk/autotest.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:36.498 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:04:36.498 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:04:36.498 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:04:36.757 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:04:36.757 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:04:36.757 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:04:36.757 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:04:36.757 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:04:36.757 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:04:36.757 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:04:36.757 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:04:36.757 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:04:36.757 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:04:36.757 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:04:36.757 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:04:36.757 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:04:36.757 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:04:36.757 01:25:49 -- spdk/autotest.sh@107 -- # get_zoned_devs 00:04:36.757 01:25:49 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:04:36.757 01:25:49 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:04:36.757 01:25:49 -- 
common/autotest_common.sh@1655 -- # local nvme bdf 00:04:36.757 01:25:49 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:36.757 01:25:49 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:04:36.757 01:25:49 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:04:36.757 01:25:49 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:36.757 01:25:49 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:36.757 01:25:49 -- spdk/autotest.sh@109 -- # (( 0 > 0 )) 00:04:36.757 01:25:49 -- spdk/autotest.sh@121 -- # ls /dev/nvme0n1 00:04:36.757 01:25:49 -- spdk/autotest.sh@121 -- # grep -v p 00:04:36.757 01:25:49 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:37.016 01:25:49 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:04:37.016 01:25:49 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme0n1 00:04:37.016 01:25:49 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:04:37.016 01:25:49 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:37.016 No valid GPT data, bailing 00:04:37.016 01:25:49 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:37.016 01:25:49 -- scripts/common.sh@393 -- # pt= 00:04:37.016 01:25:49 -- scripts/common.sh@394 -- # return 1 00:04:37.016 01:25:49 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:37.017 1+0 records in 00:04:37.017 1+0 records out 00:04:37.017 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00252822 s, 415 MB/s 00:04:37.017 01:25:49 -- spdk/autotest.sh@129 -- # sync 00:04:37.017 01:25:49 -- spdk/autotest.sh@131 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:37.017 01:25:49 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:37.017 01:25:49 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:38.917 01:25:51 -- spdk/autotest.sh@135 -- # uname 
-s 00:04:38.917 01:25:51 -- spdk/autotest.sh@135 -- # '[' Linux = Linux ']' 00:04:38.917 01:25:51 -- spdk/autotest.sh@136 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:38.917 01:25:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:38.917 01:25:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:38.917 01:25:51 -- common/autotest_common.sh@10 -- # set +x 00:04:38.917 ************************************ 00:04:38.917 START TEST setup.sh 00:04:38.917 ************************************ 00:04:38.917 01:25:51 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:38.917 * Looking for test storage... 00:04:38.917 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:38.917 01:25:51 -- setup/test-setup.sh@10 -- # uname -s 00:04:38.917 01:25:51 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:38.917 01:25:51 -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:38.917 01:25:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:38.917 01:25:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:38.917 01:25:51 -- common/autotest_common.sh@10 -- # set +x 00:04:38.917 ************************************ 00:04:38.917 START TEST acl 00:04:38.917 ************************************ 00:04:38.917 01:25:51 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:38.917 * Looking for test storage... 
00:04:38.917 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:38.917 01:25:51 -- setup/acl.sh@10 -- # get_zoned_devs 00:04:38.917 01:25:51 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:04:38.917 01:25:51 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:04:38.917 01:25:51 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:04:38.918 01:25:51 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:38.918 01:25:51 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:04:38.918 01:25:51 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:04:38.918 01:25:51 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:38.918 01:25:51 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:38.918 01:25:51 -- setup/acl.sh@12 -- # devs=() 00:04:38.918 01:25:51 -- setup/acl.sh@12 -- # declare -a devs 00:04:38.918 01:25:51 -- setup/acl.sh@13 -- # drivers=() 00:04:38.918 01:25:51 -- setup/acl.sh@13 -- # declare -A drivers 00:04:38.918 01:25:51 -- setup/acl.sh@51 -- # setup reset 00:04:38.918 01:25:51 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:38.918 01:25:51 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:40.295 01:25:53 -- setup/acl.sh@52 -- # collect_setup_devs 00:04:40.295 01:25:53 -- setup/acl.sh@16 -- # local dev driver 00:04:40.295 01:25:53 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:40.295 01:25:53 -- setup/acl.sh@15 -- # setup output status 00:04:40.295 01:25:53 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:40.295 01:25:53 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:41.228 Hugepages 00:04:41.228 node hugesize free / total 00:04:41.228 01:25:54 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:41.228 01:25:54 -- setup/acl.sh@19 -- # continue 00:04:41.228 01:25:54 -- 
setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:41.228 01:25:54 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:41.228 01:25:54 -- setup/acl.sh@19 -- # continue 00:04:41.228 01:25:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:41.228 01:25:54 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:41.228 01:25:54 -- setup/acl.sh@19 -- # continue 00:04:41.228 01:25:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:41.228 00:04:41.228 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:41.228 01:25:54 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:41.228 01:25:54 -- setup/acl.sh@19 -- # continue 00:04:41.228 01:25:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:41.228 01:25:54 -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:04:41.228 01:25:54 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:41.228 01:25:54 -- setup/acl.sh@20 -- # continue 00:04:41.228 01:25:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:41.228 01:25:54 -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:04:41.228 01:25:54 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:41.228 01:25:54 -- setup/acl.sh@20 -- # continue 00:04:41.228 01:25:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:41.228 01:25:54 -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:04:41.228 01:25:54 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:41.228 01:25:54 -- setup/acl.sh@20 -- # continue 00:04:41.228 01:25:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:41.228 01:25:54 -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:04:41.228 01:25:54 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:41.228 01:25:54 -- setup/acl.sh@20 -- # continue 00:04:41.228 01:25:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:41.228 01:25:54 -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:04:41.228 01:25:54 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:41.228 01:25:54 -- 
setup/acl.sh@20 -- # continue 00:04:41.228 01:25:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:41.228 01:25:54 -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:04:41.228 01:25:54 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:41.228 01:25:54 -- setup/acl.sh@20 -- # continue 00:04:41.228 01:25:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:41.228 01:25:54 -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:04:41.228 01:25:54 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:41.228 01:25:54 -- setup/acl.sh@20 -- # continue 00:04:41.228 01:25:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:41.228 01:25:54 -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:04:41.228 01:25:54 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:41.228 01:25:54 -- setup/acl.sh@20 -- # continue 00:04:41.228 01:25:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:41.228 01:25:54 -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:04:41.228 01:25:54 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:41.228 01:25:54 -- setup/acl.sh@20 -- # continue 00:04:41.228 01:25:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:41.228 01:25:54 -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:04:41.228 01:25:54 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:41.228 01:25:54 -- setup/acl.sh@20 -- # continue 00:04:41.228 01:25:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:41.228 01:25:54 -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:04:41.228 01:25:54 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:41.228 01:25:54 -- setup/acl.sh@20 -- # continue 00:04:41.228 01:25:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:41.228 01:25:54 -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:04:41.228 01:25:54 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:41.228 01:25:54 -- setup/acl.sh@20 -- # continue 00:04:41.228 01:25:54 -- setup/acl.sh@18 -- # read 
-r _ dev _ _ _ driver _ 00:04:41.228 01:25:54 -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:04:41.228 01:25:54 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:41.228 01:25:54 -- setup/acl.sh@20 -- # continue 00:04:41.228 01:25:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:41.228 01:25:54 -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:04:41.228 01:25:54 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:41.228 01:25:54 -- setup/acl.sh@20 -- # continue 00:04:41.228 01:25:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:41.228 01:25:54 -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:04:41.228 01:25:54 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:41.228 01:25:54 -- setup/acl.sh@20 -- # continue 00:04:41.228 01:25:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:41.228 01:25:54 -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:04:41.228 01:25:54 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:41.228 01:25:54 -- setup/acl.sh@20 -- # continue 00:04:41.228 01:25:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:41.228 01:25:54 -- setup/acl.sh@19 -- # [[ 0000:88:00.0 == *:*:*.* ]] 00:04:41.228 01:25:54 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:41.228 01:25:54 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:04:41.228 01:25:54 -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:41.228 01:25:54 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:41.228 01:25:54 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:41.228 01:25:54 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:04:41.228 01:25:54 -- setup/acl.sh@54 -- # run_test denied denied 00:04:41.228 01:25:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:41.228 01:25:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:41.228 01:25:54 -- common/autotest_common.sh@10 -- # set +x 00:04:41.228 ************************************ 00:04:41.228 START TEST denied 00:04:41.228 
************************************ 00:04:41.228 01:25:54 -- common/autotest_common.sh@1104 -- # denied 00:04:41.228 01:25:54 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:88:00.0' 00:04:41.228 01:25:54 -- setup/acl.sh@38 -- # setup output config 00:04:41.228 01:25:54 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:88:00.0' 00:04:41.228 01:25:54 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:41.228 01:25:54 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:42.601 0000:88:00.0 (8086 0a54): Skipping denied controller at 0000:88:00.0 00:04:42.601 01:25:55 -- setup/acl.sh@40 -- # verify 0000:88:00.0 00:04:42.601 01:25:55 -- setup/acl.sh@28 -- # local dev driver 00:04:42.601 01:25:55 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:42.601 01:25:55 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:88:00.0 ]] 00:04:42.601 01:25:55 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:88:00.0/driver 00:04:42.602 01:25:55 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:42.602 01:25:55 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:42.602 01:25:55 -- setup/acl.sh@41 -- # setup reset 00:04:42.602 01:25:55 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:42.602 01:25:55 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:45.130 00:04:45.130 real 0m3.712s 00:04:45.131 user 0m1.091s 00:04:45.131 sys 0m1.713s 00:04:45.131 01:25:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:45.131 01:25:57 -- common/autotest_common.sh@10 -- # set +x 00:04:45.131 ************************************ 00:04:45.131 END TEST denied 00:04:45.131 ************************************ 00:04:45.131 01:25:57 -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:45.131 01:25:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:45.131 01:25:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:45.131 
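The `denied` test above verifies driver binding by resolving `readlink -f /sys/bus/pci/devices/0000:88:00.0/driver` and comparing the result against `nvme`. That sysfs lookup can be sketched in isolation as follows (a hedged stand-alone sketch, not the actual `verify()` from `acl.sh`; the function name `pci_driver_of` is invented for illustration):

```shell
#!/bin/bash
# Resolve which kernel driver a PCI device is currently bound to by
# following the "driver" symlink under sysfs. The symlink target's
# basename is the driver name (e.g. nvme, vfio-pci, ioatdma).
pci_driver_of() {
    local bdf=$1
    local link=/sys/bus/pci/devices/$bdf/driver
    if [ -e "$link" ]; then
        basename "$(readlink -f "$link")"
    else
        # device absent or not bound to any driver
        echo "unbound"
    fi
}
```

With `PCI_BLOCKED=' 0000:88:00.0'` set, `setup.sh config` leaves that controller on its original driver, which is why the log then shows the device still bound to `nvme`; the later `allowed` test with `PCI_ALLOWED` instead rebinds it to `vfio-pci`.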
01:25:57 -- common/autotest_common.sh@10 -- # set +x 00:04:45.131 ************************************ 00:04:45.131 START TEST allowed 00:04:45.131 ************************************ 00:04:45.131 01:25:57 -- common/autotest_common.sh@1104 -- # allowed 00:04:45.131 01:25:57 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:88:00.0 00:04:45.131 01:25:57 -- setup/acl.sh@45 -- # setup output config 00:04:45.131 01:25:57 -- setup/acl.sh@46 -- # grep -E '0000:88:00.0 .*: nvme -> .*' 00:04:45.131 01:25:57 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:45.131 01:25:57 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:47.664 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:47.664 01:26:00 -- setup/acl.sh@47 -- # verify 00:04:47.664 01:26:00 -- setup/acl.sh@28 -- # local dev driver 00:04:47.664 01:26:00 -- setup/acl.sh@48 -- # setup reset 00:04:47.664 01:26:00 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:47.664 01:26:00 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:49.041 00:04:49.041 real 0m3.949s 00:04:49.041 user 0m1.018s 00:04:49.041 sys 0m1.766s 00:04:49.041 01:26:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:49.041 01:26:01 -- common/autotest_common.sh@10 -- # set +x 00:04:49.041 ************************************ 00:04:49.041 END TEST allowed 00:04:49.041 ************************************ 00:04:49.041 00:04:49.041 real 0m10.169s 00:04:49.041 user 0m3.138s 00:04:49.041 sys 0m5.048s 00:04:49.041 01:26:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:49.041 01:26:01 -- common/autotest_common.sh@10 -- # set +x 00:04:49.041 ************************************ 00:04:49.041 END TEST acl 00:04:49.041 ************************************ 00:04:49.041 01:26:01 -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:49.042 01:26:01 -- 
common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:49.042 01:26:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:49.042 01:26:01 -- common/autotest_common.sh@10 -- # set +x 00:04:49.042 ************************************ 00:04:49.042 START TEST hugepages 00:04:49.042 ************************************ 00:04:49.042 01:26:01 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:49.042 * Looking for test storage... 00:04:49.042 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:49.042 01:26:01 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:49.042 01:26:01 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:49.042 01:26:01 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:49.042 01:26:01 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:49.042 01:26:01 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:49.042 01:26:01 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:49.042 01:26:01 -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:49.042 01:26:01 -- setup/common.sh@18 -- # local node= 00:04:49.042 01:26:01 -- setup/common.sh@19 -- # local var val 00:04:49.042 01:26:01 -- setup/common.sh@20 -- # local mem_f mem 00:04:49.042 01:26:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:49.042 01:26:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:49.042 01:26:01 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:49.042 01:26:01 -- setup/common.sh@28 -- # mapfile -t mem 00:04:49.042 01:26:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:49.042 01:26:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.042 01:26:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.042 01:26:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 41233856 kB' 'MemAvailable: 44745400 kB' 'Buffers: 2704 kB' 'Cached: 12750140 kB' 
'SwapCached: 0 kB' 'Active: 9681672 kB' 'Inactive: 3508456 kB' 'Active(anon): 9286624 kB' 'Inactive(anon): 0 kB' 'Active(file): 395048 kB' 'Inactive(file): 3508456 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 440540 kB' 'Mapped: 202388 kB' 'Shmem: 8849340 kB' 'KReclaimable: 203324 kB' 'Slab: 587756 kB' 'SReclaimable: 203324 kB' 'SUnreclaim: 384432 kB' 'KernelStack: 12720 kB' 'PageTables: 8424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562304 kB' 'Committed_AS: 10442452 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196904 kB' 'VmallocChunk: 0 kB' 'Percpu: 37440 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 2383452 kB' 'DirectMap2M: 20604928 kB' 'DirectMap1G: 46137344 kB' 00:04:49.042 01:26:01 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.042 01:26:01 -- setup/common.sh@32 -- # continue 00:04:49.042 01:26:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.042 01:26:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.042 01:26:01 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.042 01:26:01 -- setup/common.sh@32 -- # continue 00:04:49.042 01:26:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.042 01:26:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.042 01:26:01 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.042 01:26:01 -- setup/common.sh@32 -- # continue 00:04:49.042 01:26:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.042 01:26:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.042 
01:26:01 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.042 01:26:01 -- setup/common.sh@32 -- # continue 00:04:49.042 01:26:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.042 01:26:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.042 01:26:01 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.042 01:26:01 -- setup/common.sh@32 -- # continue 00:04:49.042 01:26:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.042 01:26:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.042 01:26:01 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.042 01:26:01 -- setup/common.sh@32 -- # continue 00:04:49.042 01:26:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.042 01:26:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.042 01:26:01 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.042 01:26:01 -- setup/common.sh@32 -- # continue 00:04:49.042 01:26:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.042 01:26:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.042 01:26:01 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.042 01:26:01 -- setup/common.sh@32 -- # continue 00:04:49.042 01:26:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.042 01:26:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.042 01:26:01 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.042 01:26:01 -- setup/common.sh@32 -- # continue 00:04:49.042 01:26:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.042 01:26:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.042 01:26:01 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.042 01:26:01 -- setup/common.sh@32 -- # continue 00:04:49.042 01:26:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.042 01:26:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.042 01:26:01 -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.042 01:26:01 -- setup/common.sh@32 -- # continue 00:04:49.042 01:26:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.042 01:26:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.042 01:26:01 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.042 01:26:01 -- setup/common.sh@32 -- # continue 00:04:49.042 01:26:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.042 01:26:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.042 01:26:01 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.042 01:26:01 -- setup/common.sh@32 -- # continue 00:04:49.042 01:26:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.042 01:26:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.042 01:26:01 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.042 01:26:01 -- setup/common.sh@32 -- # continue 00:04:49.042 01:26:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.042 01:26:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.042 01:26:01 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.042 01:26:01 -- setup/common.sh@32 -- # continue 00:04:49.042 01:26:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.042 01:26:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.042 01:26:01 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.042 01:26:01 -- setup/common.sh@32 -- # continue 00:04:49.042 01:26:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.042 01:26:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.042 01:26:01 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.042 01:26:01 -- setup/common.sh@32 -- # continue 00:04:49.042 01:26:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.042 01:26:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.042 01:26:01 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.042 01:26:01 -- 
setup/common.sh@32 -- # continue 00:04:49.042 01:26:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.042 01:26:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.042 01:26:01 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.042 01:26:01 -- setup/common.sh@32 -- # continue 00:04:49.042 01:26:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.042 01:26:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.042 01:26:01 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.042 01:26:01 -- setup/common.sh@32 -- # continue 00:04:49.042 01:26:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.042 01:26:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.042 01:26:01 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.042 01:26:01 -- setup/common.sh@32 -- # continue 00:04:49.042 01:26:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.042 01:26:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.042 01:26:01 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.042 01:26:01 -- setup/common.sh@32 -- # continue 00:04:49.042 01:26:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.042 01:26:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.042 01:26:01 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.042 01:26:01 -- setup/common.sh@32 -- # continue 00:04:49.042 01:26:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.042 01:26:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.042 01:26:01 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.042 01:26:01 -- setup/common.sh@32 -- # continue 00:04:49.042 01:26:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.042 01:26:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.042 01:26:01 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.042 01:26:01 -- setup/common.sh@32 -- # continue 00:04:49.042 01:26:01 -- setup/common.sh@31 -- 
# IFS=': ' 00:04:49.042 01:26:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.042 01:26:01 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.042 01:26:01 -- setup/common.sh@32 -- # continue 00:04:49.042 01:26:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.042 01:26:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.042 01:26:01 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.042 01:26:01 -- setup/common.sh@32 -- # continue 00:04:49.042 01:26:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.042 01:26:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.042 01:26:01 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.042 01:26:01 -- setup/common.sh@32 -- # continue 00:04:49.042 01:26:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.042 01:26:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.043 01:26:01 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.043 01:26:01 -- setup/common.sh@32 -- # continue 00:04:49.043 01:26:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.043 01:26:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.043 01:26:01 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.043 01:26:01 -- setup/common.sh@32 -- # continue 00:04:49.043 01:26:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.043 01:26:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.043 01:26:01 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.043 01:26:01 -- setup/common.sh@32 -- # continue 00:04:49.043 01:26:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.043 01:26:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.043 01:26:01 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.043 01:26:01 -- setup/common.sh@32 -- # continue 00:04:49.043 01:26:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.043 01:26:01 -- setup/common.sh@31 
-- # read -r var val _ 00:04:49.043 01:26:01 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.043 01:26:01 -- setup/common.sh@32 -- # continue 00:04:49.043 01:26:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.043 01:26:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.043 01:26:01 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.043 01:26:01 -- setup/common.sh@32 -- # continue 00:04:49.043 01:26:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.043 01:26:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.043 01:26:01 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.043 01:26:01 -- setup/common.sh@32 -- # continue 00:04:49.043 01:26:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.043 01:26:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.043 01:26:01 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.043 01:26:01 -- setup/common.sh@32 -- # continue 00:04:49.043 01:26:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.043 01:26:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.043 01:26:01 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.043 01:26:01 -- setup/common.sh@32 -- # continue 00:04:49.043 01:26:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.043 01:26:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.043 01:26:01 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.043 01:26:01 -- setup/common.sh@32 -- # continue 00:04:49.043 01:26:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.043 01:26:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.043 01:26:01 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.043 01:26:01 -- setup/common.sh@32 -- # continue 00:04:49.043 01:26:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.043 01:26:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.043 01:26:01 -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.043 01:26:01 -- setup/common.sh@32 -- # continue 00:04:49.043 01:26:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.043 01:26:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.043 01:26:01 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.043 01:26:01 -- setup/common.sh@32 -- # continue 00:04:49.043 01:26:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.043 01:26:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.043 01:26:01 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.043 01:26:01 -- setup/common.sh@32 -- # continue 00:04:49.043 01:26:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.043 01:26:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.043 01:26:01 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.043 01:26:01 -- setup/common.sh@32 -- # continue 00:04:49.043 01:26:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.043 01:26:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.043 01:26:01 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.043 01:26:01 -- setup/common.sh@32 -- # continue 00:04:49.043 01:26:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.043 01:26:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.043 01:26:01 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.043 01:26:01 -- setup/common.sh@32 -- # continue 00:04:49.043 01:26:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.043 01:26:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.043 01:26:01 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.043 01:26:01 -- setup/common.sh@32 -- # continue 00:04:49.043 01:26:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.043 01:26:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.043 01:26:01 -- setup/common.sh@32 -- # [[ CmaFree == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.043 01:26:01 -- setup/common.sh@32 -- # continue 00:04:49.043 01:26:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.043 01:26:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.043 01:26:01 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.043 01:26:01 -- setup/common.sh@32 -- # continue 00:04:49.043 01:26:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.043 01:26:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.043 01:26:01 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.043 01:26:01 -- setup/common.sh@32 -- # continue 00:04:49.043 01:26:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.043 01:26:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.043 01:26:01 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.043 01:26:01 -- setup/common.sh@32 -- # continue 00:04:49.043 01:26:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.043 01:26:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.043 01:26:01 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.043 01:26:01 -- setup/common.sh@32 -- # continue 00:04:49.043 01:26:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.043 01:26:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.043 01:26:01 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.043 01:26:01 -- setup/common.sh@32 -- # continue 00:04:49.043 01:26:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.043 01:26:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.043 01:26:01 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:49.043 01:26:01 -- setup/common.sh@33 -- # echo 2048 00:04:49.043 01:26:01 -- setup/common.sh@33 -- # return 0 00:04:49.043 01:26:01 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:49.043 01:26:01 -- setup/hugepages.sh@17 -- # 
default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:49.043 01:26:01 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:49.043 01:26:01 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:49.043 01:26:01 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:49.043 01:26:01 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:49.043 01:26:01 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:49.043 01:26:01 -- setup/hugepages.sh@207 -- # get_nodes 00:04:49.043 01:26:01 -- setup/hugepages.sh@27 -- # local node 00:04:49.043 01:26:01 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:49.043 01:26:01 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:49.043 01:26:01 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:49.043 01:26:01 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:49.043 01:26:01 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:49.043 01:26:01 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:49.043 01:26:01 -- setup/hugepages.sh@208 -- # clear_hp 00:04:49.043 01:26:01 -- setup/hugepages.sh@37 -- # local node hp 00:04:49.043 01:26:01 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:49.043 01:26:01 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:49.043 01:26:01 -- setup/hugepages.sh@41 -- # echo 0 00:04:49.043 01:26:01 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:49.043 01:26:01 -- setup/hugepages.sh@41 -- # echo 0 00:04:49.043 01:26:01 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:49.043 01:26:01 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:49.043 01:26:01 -- setup/hugepages.sh@41 -- # echo 0 00:04:49.043 01:26:01 -- setup/hugepages.sh@40 -- # for hp in 
"/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:49.043 01:26:01 -- setup/hugepages.sh@41 -- # echo 0 00:04:49.043 01:26:01 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:49.043 01:26:01 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:49.043 01:26:01 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:49.043 01:26:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:49.043 01:26:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:49.043 01:26:01 -- common/autotest_common.sh@10 -- # set +x 00:04:49.043 ************************************ 00:04:49.043 START TEST default_setup 00:04:49.043 ************************************ 00:04:49.043 01:26:01 -- common/autotest_common.sh@1104 -- # default_setup 00:04:49.043 01:26:01 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:49.043 01:26:01 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:49.043 01:26:01 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:49.043 01:26:01 -- setup/hugepages.sh@51 -- # shift 00:04:49.043 01:26:01 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:49.043 01:26:01 -- setup/hugepages.sh@52 -- # local node_ids 00:04:49.043 01:26:01 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:49.043 01:26:01 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:49.043 01:26:01 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:49.043 01:26:01 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:49.043 01:26:01 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:49.043 01:26:01 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:49.043 01:26:01 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:49.043 01:26:01 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:49.043 01:26:01 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:49.043 01:26:01 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:49.043 01:26:01 -- setup/hugepages.sh@70 -- # for _no_nodes in 
"${user_nodes[@]}" 00:04:49.043 01:26:01 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:49.043 01:26:01 -- setup/hugepages.sh@73 -- # return 0 00:04:49.043 01:26:01 -- setup/hugepages.sh@137 -- # setup output 00:04:49.044 01:26:01 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:49.044 01:26:02 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:50.424 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:50.424 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:50.424 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:50.424 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:50.424 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:50.424 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:50.424 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:50.424 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:50.424 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:50.424 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:50.424 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:50.424 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:50.424 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:50.424 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:50.424 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:50.424 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:51.422 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:51.422 01:26:04 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:51.422 01:26:04 -- setup/hugepages.sh@89 -- # local node 00:04:51.422 01:26:04 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:51.422 01:26:04 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:51.422 01:26:04 -- setup/hugepages.sh@92 -- # local surp 00:04:51.422 01:26:04 -- setup/hugepages.sh@93 -- # local resv 00:04:51.422 01:26:04 -- setup/hugepages.sh@94 -- # local anon 00:04:51.422 01:26:04 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:51.422 01:26:04 -- 
setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:51.422 01:26:04 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:51.422 01:26:04 -- setup/common.sh@18 -- # local node= 00:04:51.422 01:26:04 -- setup/common.sh@19 -- # local var val 00:04:51.422 01:26:04 -- setup/common.sh@20 -- # local mem_f mem 00:04:51.422 01:26:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:51.422 01:26:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:51.422 01:26:04 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:51.422 01:26:04 -- setup/common.sh@28 -- # mapfile -t mem 00:04:51.422 01:26:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:51.422 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.422 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.422 01:26:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43369360 kB' 'MemAvailable: 46880788 kB' 'Buffers: 2704 kB' 'Cached: 12750232 kB' 'SwapCached: 0 kB' 'Active: 9699260 kB' 'Inactive: 3508456 kB' 'Active(anon): 9304212 kB' 'Inactive(anon): 0 kB' 'Active(file): 395048 kB' 'Inactive(file): 3508456 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 458104 kB' 'Mapped: 202444 kB' 'Shmem: 8849432 kB' 'KReclaimable: 203092 kB' 'Slab: 587504 kB' 'SReclaimable: 203092 kB' 'SUnreclaim: 384412 kB' 'KernelStack: 12704 kB' 'PageTables: 8100 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10459484 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196984 kB' 'VmallocChunk: 0 kB' 'Percpu: 37440 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 
'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2383452 kB' 'DirectMap2M: 20604928 kB' 'DirectMap1G: 46137344 kB' 00:04:51.422 01:26:04 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.422 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.422 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.422 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.422 01:26:04 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.422 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.422 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.422 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.422 01:26:04 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.422 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.422 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.422 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.422 01:26:04 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.422 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.422 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.422 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.422 01:26:04 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.422 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.422 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.422 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.422 01:26:04 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.422 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.422 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.422 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.422 01:26:04 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.422 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.422 01:26:04 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:51.422 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.422 01:26:04 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.422 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.422 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.422 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.422 01:26:04 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.422 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.422 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.422 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.422 01:26:04 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.422 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.422 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.422 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.422 01:26:04 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.423 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.423 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.423 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.423 01:26:04 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.423 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.423 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.423 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.423 01:26:04 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.423 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.423 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.423 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.423 01:26:04 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.423 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.423 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 
00:04:51.423 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.423 01:26:04 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.423 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.423 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.423 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.423 01:26:04 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.423 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.423 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.423 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.423 01:26:04 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.423 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.423 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.423 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.423 01:26:04 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.423 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.423 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.423 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.423 01:26:04 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.423 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.423 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.423 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.423 01:26:04 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.423 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.423 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.423 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.423 01:26:04 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.423 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.423 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.423 01:26:04 -- setup/common.sh@31 -- # read -r var 
val _ 00:04:51.423 01:26:04 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.423 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.423 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.423 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.423 01:26:04 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.423 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.423 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.423 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.423 01:26:04 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.423 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.423 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.423 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.423 01:26:04 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.423 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.423 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.423 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.423 01:26:04 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.423 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.423 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.423 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.423 01:26:04 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.423 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.423 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.423 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.423 01:26:04 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.423 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.423 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.423 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.423 01:26:04 -- setup/common.sh@32 -- # 
[[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.423 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.423 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.423 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.423 01:26:04 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.423 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.423 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.423 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.423 01:26:04 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.423 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.423 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.423 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.423 01:26:04 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.423 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.423 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.423 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.423 01:26:04 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.423 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.423 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.423 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.423 01:26:04 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.423 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.423 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.423 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.423 01:26:04 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.423 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.423 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.423 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.423 01:26:04 -- setup/common.sh@32 -- # [[ VmallocTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.423 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.423 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.423 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.423 01:26:04 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.423 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.423 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.423 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.423 01:26:04 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.423 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.423 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.423 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.423 01:26:04 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.423 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.423 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.423 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.423 01:26:04 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.423 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.423 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.423 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.423 01:26:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:51.423 01:26:04 -- setup/common.sh@33 -- # echo 0 00:04:51.423 01:26:04 -- setup/common.sh@33 -- # return 0 00:04:51.423 01:26:04 -- setup/hugepages.sh@97 -- # anon=0 00:04:51.423 01:26:04 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:51.423 01:26:04 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:51.423 01:26:04 -- setup/common.sh@18 -- # local node= 00:04:51.423 01:26:04 -- setup/common.sh@19 -- # local var val 00:04:51.423 01:26:04 -- setup/common.sh@20 -- # local mem_f mem 00:04:51.423 01:26:04 -- setup/common.sh@22 -- # 
mem_f=/proc/meminfo 00:04:51.423 01:26:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:51.423 01:26:04 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:51.423 01:26:04 -- setup/common.sh@28 -- # mapfile -t mem 00:04:51.423 01:26:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:51.423 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.423 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.423 01:26:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43375028 kB' 'MemAvailable: 46886456 kB' 'Buffers: 2704 kB' 'Cached: 12750232 kB' 'SwapCached: 0 kB' 'Active: 9699288 kB' 'Inactive: 3508456 kB' 'Active(anon): 9304240 kB' 'Inactive(anon): 0 kB' 'Active(file): 395048 kB' 'Inactive(file): 3508456 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 458228 kB' 'Mapped: 202444 kB' 'Shmem: 8849432 kB' 'KReclaimable: 203092 kB' 'Slab: 587504 kB' 'SReclaimable: 203092 kB' 'SUnreclaim: 384412 kB' 'KernelStack: 12672 kB' 'PageTables: 8004 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10459496 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196968 kB' 'VmallocChunk: 0 kB' 'Percpu: 37440 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2383452 kB' 'DirectMap2M: 20604928 kB' 'DirectMap1G: 46137344 kB' 00:04:51.423 01:26:04 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.424 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.424 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 
00:04:51.424 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.424 01:26:04 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.424 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.424 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.424 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.424 01:26:04 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.424 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.424 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.424 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.424 01:26:04 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.424 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.424 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.424 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.424 01:26:04 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.424 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.424 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.424 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.424 01:26:04 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.424 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.424 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.424 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.424 01:26:04 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.424 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.424 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.424 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.424 01:26:04 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.424 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.424 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.424 01:26:04 -- setup/common.sh@31 -- 
# read -r var val _ 00:04:51.424 01:26:04 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.424 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.424 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.424 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.424 01:26:04 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.424 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.424 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.424 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.424 01:26:04 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.424 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.424 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.424 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.424 01:26:04 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.424 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.424 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.424 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.424 01:26:04 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.424 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.424 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.424 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.424 01:26:04 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.424 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.424 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.424 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.424 01:26:04 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.424 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.424 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.424 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 
00:04:51.424 01:26:04 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.424 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.424 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.424 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.424 01:26:04 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.424 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.424 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.424 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.424 01:26:04 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.424 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.424 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.424 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.424 01:26:04 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.424 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.424 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.424 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.424 01:26:04 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.424 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.424 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.424 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.424 01:26:04 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.424 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.424 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.424 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.424 01:26:04 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.424 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.424 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.424 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.424 01:26:04 -- setup/common.sh@32 -- # [[ 
Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.424 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.424 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.424 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.424 01:26:04 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.424 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.424 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.424 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.424 01:26:04 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.424 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.424 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.424 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.424 01:26:04 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.424 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.424 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.424 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.424 01:26:04 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.424 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.424 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.424 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.424 01:26:04 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.424 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.424 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.424 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.424 01:26:04 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.424 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.424 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.424 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.424 01:26:04 -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.424 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.424 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.424 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.424 01:26:04 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.424 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.424 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.424 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.424 01:26:04 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.424 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.424 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.424 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.424 01:26:04 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.424 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.424 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.424 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.424 01:26:04 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.424 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.424 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.424 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.424 01:26:04 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.424 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.424 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.424 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.424 01:26:04 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.424 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.424 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.424 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.424 01:26:04 -- setup/common.sh@32 -- # [[ VmallocUsed == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.424 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.424 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.424 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.424 01:26:04 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.424 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.424 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.424 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.424 01:26:04 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.424 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.424 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.424 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.425 01:26:04 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.425 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.425 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.425 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.425 01:26:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.425 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.425 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.425 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.425 01:26:04 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.425 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.425 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.425 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.425 01:26:04 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.425 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.425 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.425 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.425 01:26:04 -- setup/common.sh@32 -- # [[ FileHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.425 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.425 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.425 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.425 01:26:04 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.425 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.425 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.425 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.425 01:26:04 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.425 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.425 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.425 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.425 01:26:04 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.425 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.425 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.425 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.425 01:26:04 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.425 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.425 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.425 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.425 01:26:04 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.425 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.425 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.425 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.425 01:26:04 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.425 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.425 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.425 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.425 01:26:04 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.425 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.425 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.425 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.425 01:26:04 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.425 01:26:04 -- setup/common.sh@33 -- # echo 0 00:04:51.425 01:26:04 -- setup/common.sh@33 -- # return 0 00:04:51.425 01:26:04 -- setup/hugepages.sh@99 -- # surp=0 00:04:51.425 01:26:04 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:51.425 01:26:04 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:51.425 01:26:04 -- setup/common.sh@18 -- # local node= 00:04:51.425 01:26:04 -- setup/common.sh@19 -- # local var val 00:04:51.425 01:26:04 -- setup/common.sh@20 -- # local mem_f mem 00:04:51.425 01:26:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:51.425 01:26:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:51.425 01:26:04 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:51.425 01:26:04 -- setup/common.sh@28 -- # mapfile -t mem 00:04:51.425 01:26:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:51.425 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.425 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.425 01:26:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43373296 kB' 'MemAvailable: 46884724 kB' 'Buffers: 2704 kB' 'Cached: 12750252 kB' 'SwapCached: 0 kB' 'Active: 9698524 kB' 'Inactive: 3508456 kB' 'Active(anon): 9303476 kB' 'Inactive(anon): 0 kB' 'Active(file): 395048 kB' 'Inactive(file): 3508456 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 457416 kB' 'Mapped: 202352 kB' 'Shmem: 8849452 kB' 'KReclaimable: 203092 kB' 'Slab: 587492 kB' 'SReclaimable: 203092 kB' 'SUnreclaim: 384400 kB' 'KernelStack: 
12656 kB' 'PageTables: 8092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10459508 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196952 kB' 'VmallocChunk: 0 kB' 'Percpu: 37440 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2383452 kB' 'DirectMap2M: 20604928 kB' 'DirectMap1G: 46137344 kB' 00:04:51.425 01:26:04 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.425 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.425 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.425 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.425 01:26:04 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.425 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.425 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.425 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.425 01:26:04 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.425 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.425 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.425 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.425 01:26:04 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.425 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.425 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.425 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.425 01:26:04 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.425 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.425 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 
00:04:51.425 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.425 01:26:04 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.425 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.425 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.425 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.425 01:26:04 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.425 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.425 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.425 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.425 01:26:04 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.425 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.425 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.425 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.425 01:26:04 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.425 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.425 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.425 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.425 01:26:04 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.425 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.425 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.425 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.425 01:26:04 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.425 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.425 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.425 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.425 01:26:04 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.425 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.425 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.425 01:26:04 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:51.425 01:26:04 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.425 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.425 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.425 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.425 01:26:04 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.425 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.425 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.425 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.425 01:26:04 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.425 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.425 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.425 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.425 01:26:04 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.425 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.425 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.425 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.425 01:26:04 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.425 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.425 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.425 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.425 01:26:04 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.425 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.425 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.425 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.425 01:26:04 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.425 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.426 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.426 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 
00:04:51.426 01:26:04 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.426 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.426 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.426 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.426 01:26:04 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.426 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.426 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.426 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.426 01:26:04 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.426 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.426 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.426 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.426 01:26:04 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.426 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.426 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.426 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.426 01:26:04 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.426 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.426 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.426 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.426 01:26:04 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.426 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.426 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.426 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.426 01:26:04 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.426 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.426 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.426 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.426 01:26:04 -- setup/common.sh@32 
-- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.426 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.426 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.426 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.426 01:26:04 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.426 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.426 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.426 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.426 01:26:04 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.426 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.426 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.426 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.426 01:26:04 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.426 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.426 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.426 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.426 01:26:04 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.426 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.426 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.426 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.426 01:26:04 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.426 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.426 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.426 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.426 01:26:04 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.426 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.426 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.426 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.426 01:26:04 -- setup/common.sh@32 -- # [[ CommitLimit == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.426 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.426 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.426 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.426 01:26:04 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.426 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.426 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.426 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.426 01:26:04 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.426 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.426 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.426 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.426 01:26:04 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.426 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.426 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.426 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.426 01:26:04 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.426 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.426 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.426 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.426 01:26:04 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.426 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.426 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.426 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.426 01:26:04 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.426 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.426 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.426 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.426 01:26:04 -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.426 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.426 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.426 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.426 01:26:04 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.426 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.426 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.426 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.426 01:26:04 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.426 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.426 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.426 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.426 01:26:04 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.426 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.426 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.426 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.426 01:26:04 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.426 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.426 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.426 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.426 01:26:04 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.426 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.426 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.426 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.426 01:26:04 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.426 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.426 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.426 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.426 01:26:04 -- setup/common.sh@32 -- # [[ Unaccepted == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.426 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.426 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.426 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.426 01:26:04 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.426 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.426 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.426 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.426 01:26:04 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.426 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.426 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.426 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.426 01:26:04 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:51.426 01:26:04 -- setup/common.sh@33 -- # echo 0 00:04:51.426 01:26:04 -- setup/common.sh@33 -- # return 0 00:04:51.426 01:26:04 -- setup/hugepages.sh@100 -- # resv=0 00:04:51.426 01:26:04 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:51.426 nr_hugepages=1024 00:04:51.426 01:26:04 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:51.426 resv_hugepages=0 00:04:51.426 01:26:04 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:51.426 surplus_hugepages=0 00:04:51.426 01:26:04 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:51.426 anon_hugepages=0 00:04:51.426 01:26:04 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:51.426 01:26:04 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:51.426 01:26:04 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:51.426 01:26:04 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:51.426 01:26:04 -- setup/common.sh@18 -- # local node= 00:04:51.426 01:26:04 -- setup/common.sh@19 -- # local var val 00:04:51.426 01:26:04 -- setup/common.sh@20 -- # local 
mem_f mem 00:04:51.426 01:26:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:51.426 01:26:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:51.426 01:26:04 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:51.426 01:26:04 -- setup/common.sh@28 -- # mapfile -t mem 00:04:51.426 01:26:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:51.426 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.426 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.427 01:26:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43373044 kB' 'MemAvailable: 46884472 kB' 'Buffers: 2704 kB' 'Cached: 12750268 kB' 'SwapCached: 0 kB' 'Active: 9698392 kB' 'Inactive: 3508456 kB' 'Active(anon): 9303344 kB' 'Inactive(anon): 0 kB' 'Active(file): 395048 kB' 'Inactive(file): 3508456 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 457200 kB' 'Mapped: 202352 kB' 'Shmem: 8849468 kB' 'KReclaimable: 203092 kB' 'Slab: 587580 kB' 'SReclaimable: 203092 kB' 'SUnreclaim: 384488 kB' 'KernelStack: 12688 kB' 'PageTables: 8152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10459524 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196968 kB' 'VmallocChunk: 0 kB' 'Percpu: 37440 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2383452 kB' 'DirectMap2M: 20604928 kB' 'DirectMap1G: 46137344 kB' 00:04:51.427 01:26:04 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.427 01:26:04 -- setup/common.sh@32 -- # continue 
00:04:51.427 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.427 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.427 01:26:04 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.427 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.427 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.427 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.427 01:26:04 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.427 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.427 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.427 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.427 01:26:04 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.427 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.427 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.427 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.427 01:26:04 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.427 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.427 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.427 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.427 01:26:04 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.427 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.427 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.427 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.427 01:26:04 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.427 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.427 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.427 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.427 01:26:04 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.427 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.427 01:26:04 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:51.427 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.427 01:26:04 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.427 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.427 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.427 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.427 01:26:04 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.427 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.427 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.427 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.427 01:26:04 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.427 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.427 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.427 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.427 01:26:04 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.427 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.427 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.427 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.427 01:26:04 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.427 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.427 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.427 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.427 01:26:04 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.427 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.427 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.427 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.427 01:26:04 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.427 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.427 01:26:04 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:51.427 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.427 01:26:04 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.427 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.427 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.427 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.427 01:26:04 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.427 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.427 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.427 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.427 01:26:04 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.427 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.427 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.427 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.427 01:26:04 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.427 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.427 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.427 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.427 01:26:04 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.427 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.427 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.427 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.427 01:26:04 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.427 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.427 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.427 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.427 01:26:04 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.427 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.427 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 
00:04:51.427 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.427 01:26:04 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.427 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.427 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.427 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.427 01:26:04 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.427 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.427 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.427 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.427 01:26:04 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.427 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.427 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.427 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.427 01:26:04 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.427 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.427 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.427 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.427 01:26:04 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.427 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.427 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.427 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.427 01:26:04 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.427 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.428 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.428 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.428 01:26:04 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.428 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.428 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.428 01:26:04 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:51.428 01:26:04 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.428 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.428 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.428 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.428 01:26:04 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.428 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.428 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.428 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.428 01:26:04 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.428 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.428 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.428 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.428 01:26:04 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.428 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.428 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.428 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.428 01:26:04 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.428 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.428 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.428 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.428 01:26:04 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.428 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.428 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.428 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.428 01:26:04 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.428 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.428 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.428 01:26:04 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:51.428 01:26:04 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.428 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.428 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.428 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.428 01:26:04 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.428 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.428 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.428 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.428 01:26:04 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.428 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.428 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.428 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.428 01:26:04 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.428 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.428 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.428 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.428 01:26:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.428 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.428 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.428 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.428 01:26:04 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.428 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.428 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.428 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.428 01:26:04 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.428 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.428 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.428 01:26:04 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:51.428 01:26:04 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.428 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.428 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.428 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.428 01:26:04 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.428 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.428 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.428 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.428 01:26:04 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.428 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.428 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.428 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.428 01:26:04 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.428 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.428 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.428 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.428 01:26:04 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.428 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.428 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.428 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.428 01:26:04 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:51.428 01:26:04 -- setup/common.sh@33 -- # echo 1024 00:04:51.428 01:26:04 -- setup/common.sh@33 -- # return 0 00:04:51.428 01:26:04 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:51.428 01:26:04 -- setup/hugepages.sh@112 -- # get_nodes 00:04:51.428 01:26:04 -- setup/hugepages.sh@27 -- # local node 00:04:51.428 01:26:04 -- setup/hugepages.sh@29 -- # for node in 
/sys/devices/system/node/node+([0-9]) 00:04:51.428 01:26:04 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:51.428 01:26:04 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:51.428 01:26:04 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:51.428 01:26:04 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:51.428 01:26:04 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:51.428 01:26:04 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:51.428 01:26:04 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:51.428 01:26:04 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:51.428 01:26:04 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:51.428 01:26:04 -- setup/common.sh@18 -- # local node=0 00:04:51.428 01:26:04 -- setup/common.sh@19 -- # local var val 00:04:51.428 01:26:04 -- setup/common.sh@20 -- # local mem_f mem 00:04:51.428 01:26:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:51.428 01:26:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:51.428 01:26:04 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:51.428 01:26:04 -- setup/common.sh@28 -- # mapfile -t mem 00:04:51.428 01:26:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:51.428 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.428 01:26:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 25843732 kB' 'MemUsed: 6986152 kB' 'SwapCached: 0 kB' 'Active: 3578968 kB' 'Inactive: 156516 kB' 'Active(anon): 3417676 kB' 'Inactive(anon): 0 kB' 'Active(file): 161292 kB' 'Inactive(file): 156516 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3534828 kB' 'Mapped: 72572 kB' 'AnonPages: 203896 kB' 'Shmem: 3217020 kB' 'KernelStack: 6984 kB' 'PageTables: 3600 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 
'KReclaimable: 96944 kB' 'Slab: 324444 kB' 'SReclaimable: 96944 kB' 'SUnreclaim: 227500 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:51.428 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.428 01:26:04 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.428 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.428 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.428 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.428 01:26:04 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.428 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.428 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.428 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.428 01:26:04 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.428 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.428 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.428 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.428 01:26:04 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.428 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.428 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.428 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.428 01:26:04 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.428 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.428 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.428 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.428 01:26:04 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.428 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.428 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.428 01:26:04 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:51.428 01:26:04 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.428 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.428 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.428 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.428 01:26:04 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.428 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.428 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.428 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.428 01:26:04 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.428 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.428 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.429 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.429 01:26:04 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.429 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.429 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.429 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.429 01:26:04 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.429 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.429 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.429 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.429 01:26:04 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.429 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.429 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.429 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.429 01:26:04 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.429 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.429 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.429 01:26:04 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:51.429 01:26:04 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.429 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.429 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.429 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.429 01:26:04 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.429 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.429 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.429 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.429 01:26:04 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.429 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.429 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.429 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.429 01:26:04 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.429 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.429 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.429 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.429 01:26:04 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.429 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.429 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.429 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.429 01:26:04 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.429 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.429 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.429 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.429 01:26:04 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.429 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.429 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.429 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.429 01:26:04 -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.429 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.429 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.429 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.429 01:26:04 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.429 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.429 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.429 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.429 01:26:04 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.429 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.429 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.429 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.429 01:26:04 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.429 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.429 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.429 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.429 01:26:04 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.429 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.429 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.429 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.429 01:26:04 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.429 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.429 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.429 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.429 01:26:04 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.429 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.429 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.429 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.429 01:26:04 -- setup/common.sh@32 -- # [[ 
SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.429 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.429 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.429 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.429 01:26:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.429 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.429 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.429 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.429 01:26:04 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.429 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.429 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.429 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.429 01:26:04 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.429 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.429 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.429 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.429 01:26:04 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.429 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.429 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.429 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.429 01:26:04 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.429 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.429 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.429 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.429 01:26:04 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.429 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.429 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.429 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.429 01:26:04 -- setup/common.sh@32 -- # [[ HugePages_Total 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.429 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.429 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.429 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.429 01:26:04 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.429 01:26:04 -- setup/common.sh@32 -- # continue 00:04:51.429 01:26:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:51.429 01:26:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:51.429 01:26:04 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:51.429 01:26:04 -- setup/common.sh@33 -- # echo 0 00:04:51.429 01:26:04 -- setup/common.sh@33 -- # return 0 00:04:51.429 01:26:04 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:51.429 01:26:04 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:51.429 01:26:04 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:51.429 01:26:04 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:51.429 01:26:04 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:51.429 node0=1024 expecting 1024 00:04:51.429 01:26:04 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:51.429 00:04:51.429 real 0m2.503s 00:04:51.429 user 0m0.656s 00:04:51.429 sys 0m0.976s 00:04:51.429 01:26:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:51.688 01:26:04 -- common/autotest_common.sh@10 -- # set +x 00:04:51.688 ************************************ 00:04:51.688 END TEST default_setup 00:04:51.688 ************************************ 00:04:51.688 01:26:04 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:51.688 01:26:04 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:51.688 01:26:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:51.688 01:26:04 -- common/autotest_common.sh@10 -- # set +x 00:04:51.688 ************************************ 00:04:51.688 
START TEST per_node_1G_alloc 00:04:51.688 ************************************ 00:04:51.688 01:26:04 -- common/autotest_common.sh@1104 -- # per_node_1G_alloc 00:04:51.688 01:26:04 -- setup/hugepages.sh@143 -- # local IFS=, 00:04:51.688 01:26:04 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:04:51.688 01:26:04 -- setup/hugepages.sh@49 -- # local size=1048576 00:04:51.688 01:26:04 -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:04:51.688 01:26:04 -- setup/hugepages.sh@51 -- # shift 00:04:51.688 01:26:04 -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:04:51.688 01:26:04 -- setup/hugepages.sh@52 -- # local node_ids 00:04:51.688 01:26:04 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:51.688 01:26:04 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:51.688 01:26:04 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:04:51.688 01:26:04 -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:04:51.688 01:26:04 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:51.688 01:26:04 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:51.688 01:26:04 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:51.688 01:26:04 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:51.689 01:26:04 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:51.689 01:26:04 -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:04:51.689 01:26:04 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:51.689 01:26:04 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:51.689 01:26:04 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:51.689 01:26:04 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:51.689 01:26:04 -- setup/hugepages.sh@73 -- # return 0 00:04:51.689 01:26:04 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:51.689 01:26:04 -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:04:51.689 01:26:04 -- setup/hugepages.sh@146 -- # setup output 00:04:51.689 01:26:04 -- 
setup/common.sh@9 -- # [[ output == output ]] 00:04:51.689 01:26:04 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:52.624 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:52.624 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:52.624 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:52.624 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:52.624 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:52.624 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:52.624 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:52.624 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:52.624 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:52.624 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:52.624 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:52.624 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:52.624 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:52.624 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:52.624 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:52.624 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:52.624 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:52.887 01:26:05 -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:04:52.887 01:26:05 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:52.887 01:26:05 -- setup/hugepages.sh@89 -- # local node 00:04:52.887 01:26:05 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:52.887 01:26:05 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:52.887 01:26:05 -- setup/hugepages.sh@92 -- # local surp 00:04:52.887 01:26:05 -- setup/hugepages.sh@93 -- # local resv 00:04:52.887 01:26:05 -- setup/hugepages.sh@94 -- # local anon 00:04:52.887 01:26:05 -- setup/hugepages.sh@96 -- # [[ always 
[madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:52.887 01:26:05 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:52.887 01:26:05 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:52.887 01:26:05 -- setup/common.sh@18 -- # local node= 00:04:52.887 01:26:05 -- setup/common.sh@19 -- # local var val 00:04:52.887 01:26:05 -- setup/common.sh@20 -- # local mem_f mem 00:04:52.887 01:26:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:52.887 01:26:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:52.887 01:26:05 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:52.887 01:26:05 -- setup/common.sh@28 -- # mapfile -t mem 00:04:52.887 01:26:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:52.887 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.887 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.887 01:26:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43362328 kB' 'MemAvailable: 46873756 kB' 'Buffers: 2704 kB' 'Cached: 12750316 kB' 'SwapCached: 0 kB' 'Active: 9699192 kB' 'Inactive: 3508456 kB' 'Active(anon): 9304144 kB' 'Inactive(anon): 0 kB' 'Active(file): 395048 kB' 'Inactive(file): 3508456 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 457832 kB' 'Mapped: 202396 kB' 'Shmem: 8849516 kB' 'KReclaimable: 203092 kB' 'Slab: 587528 kB' 'SReclaimable: 203092 kB' 'SUnreclaim: 384436 kB' 'KernelStack: 12704 kB' 'PageTables: 8048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10459704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197048 kB' 'VmallocChunk: 0 kB' 'Percpu: 37440 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2383452 kB' 'DirectMap2M: 20604928 kB' 'DirectMap1G: 46137344 kB' 00:04:52.887 01:26:05 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.887 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.887 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.887 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.887 01:26:05 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.887 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.887 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.887 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.887 01:26:05 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.887 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.887 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.887 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.887 01:26:05 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.887 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.887 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.887 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.887 01:26:05 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.887 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.887 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.887 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.887 01:26:05 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.887 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.887 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.887 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.887 01:26:05 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.887 01:26:05 -- 
setup/common.sh@32 -- # continue 00:04:52.887 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.887 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.887 01:26:05 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.887 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.887 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.887 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.887 01:26:05 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.887 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.887 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.887 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.887 01:26:05 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.887 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.887 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.887 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.887 01:26:05 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.887 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.887 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.887 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.887 01:26:05 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.887 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.887 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.887 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.887 01:26:05 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.887 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.887 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.887 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.888 01:26:05 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.888 01:26:05 -- setup/common.sh@32 -- # continue 
00:04:52.888 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.888 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.888 01:26:05 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.888 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.888 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.888 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.888 01:26:05 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.888 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.888 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.888 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.888 01:26:05 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.888 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.888 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.888 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.888 01:26:05 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.888 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.888 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.888 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.888 01:26:05 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.888 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.888 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.888 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.888 01:26:05 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.888 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.888 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.888 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.888 01:26:05 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.888 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.888 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 
00:04:52.888 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.888 01:26:05 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.888 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.888 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.888 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.888 01:26:05 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.888 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.888 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.888 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.888 01:26:05 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.888 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.888 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.888 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.888 01:26:05 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.888 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.888 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.888 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.888 01:26:05 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.888 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.888 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.888 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.888 01:26:05 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.888 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.888 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.888 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.888 01:26:05 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.888 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.888 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.888 01:26:05 -- setup/common.sh@31 -- # read -r 
var val _ 00:04:52.888 01:26:05 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.888 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.888 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.888 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.888 01:26:05 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.888 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.888 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.888 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.888 01:26:05 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.888 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.888 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.888 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.888 01:26:05 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.888 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.888 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.888 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.888 01:26:05 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.888 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.888 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.888 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.888 01:26:05 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.888 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.888 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.888 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.888 01:26:05 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.888 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.888 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.888 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.888 01:26:05 -- 
setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.888 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.888 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.888 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.888 01:26:05 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.888 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.888 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.888 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.888 01:26:05 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.888 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.888 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.888 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.888 01:26:05 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.888 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.888 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.888 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.888 01:26:05 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.888 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.888 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.888 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.888 01:26:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.888 01:26:05 -- setup/common.sh@33 -- # echo 0 00:04:52.888 01:26:05 -- setup/common.sh@33 -- # return 0 00:04:52.888 01:26:05 -- setup/hugepages.sh@97 -- # anon=0 00:04:52.888 01:26:05 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:52.888 01:26:05 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:52.888 01:26:05 -- setup/common.sh@18 -- # local node= 00:04:52.888 01:26:05 -- setup/common.sh@19 -- # local var val 00:04:52.888 01:26:05 -- setup/common.sh@20 -- # local mem_f mem 
00:04:52.888 01:26:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:52.888 01:26:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:52.888 01:26:05 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:52.888 01:26:05 -- setup/common.sh@28 -- # mapfile -t mem 00:04:52.888 01:26:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:52.888 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.888 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.888 01:26:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43364628 kB' 'MemAvailable: 46876056 kB' 'Buffers: 2704 kB' 'Cached: 12750320 kB' 'SwapCached: 0 kB' 'Active: 9699040 kB' 'Inactive: 3508456 kB' 'Active(anon): 9303992 kB' 'Inactive(anon): 0 kB' 'Active(file): 395048 kB' 'Inactive(file): 3508456 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 457680 kB' 'Mapped: 202396 kB' 'Shmem: 8849520 kB' 'KReclaimable: 203092 kB' 'Slab: 587496 kB' 'SReclaimable: 203092 kB' 'SUnreclaim: 384404 kB' 'KernelStack: 12704 kB' 'PageTables: 8040 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10459716 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197016 kB' 'VmallocChunk: 0 kB' 'Percpu: 37440 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2383452 kB' 'DirectMap2M: 20604928 kB' 'DirectMap1G: 46137344 kB' 00:04:52.888 01:26:05 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.888 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.888 
01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.888 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.888 01:26:05 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.888 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.888 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.888 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.888 01:26:05 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.888 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.888 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.889 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.889 01:26:05 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.889 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.889 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.889 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.889 01:26:05 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.889 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.889 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.889 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.889 01:26:05 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.889 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.889 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.889 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.889 01:26:05 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.889 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.889 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.889 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.889 01:26:05 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.889 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.889 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 
00:04:52.889 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.889 01:26:05 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.889 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.889 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.889 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.889 01:26:05 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.889 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.889 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.889 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.889 01:26:05 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.889 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.889 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.889 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.889 01:26:05 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.889 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.889 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.889 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.889 01:26:05 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.889 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.889 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.889 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.889 01:26:05 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.889 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.889 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.889 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.889 01:26:05 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.889 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.889 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.889 01:26:05 
-- setup/common.sh@31 -- # read -r var val _ 00:04:52.889 01:26:05 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.889 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.889 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.889 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.889 01:26:05 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.889 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.889 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.889 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.889 01:26:05 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.889 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.889 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.889 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.889 01:26:05 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.889 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.889 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.889 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.889 01:26:05 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.889 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.889 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.889 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.889 01:26:05 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.889 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.889 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.889 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.889 01:26:05 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.889 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.889 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.889 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 
00:04:52.889 01:26:05 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.889 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.889 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.889 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.889 01:26:05 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.889 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.889 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.889 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.889 01:26:05 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.889 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.889 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.889 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.889 01:26:05 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.889 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.889 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.889 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.889 01:26:05 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.889 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.889 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.889 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.889 01:26:05 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.889 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.889 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.889 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.889 01:26:05 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.889 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.889 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.889 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.889 01:26:05 -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.889 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.889 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.889 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.889 01:26:05 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.889 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.889 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.889 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.889 01:26:05 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.889 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.889 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.889 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.889 01:26:05 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.889 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.889 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.889 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.889 01:26:05 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.889 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.889 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.889 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.889 01:26:05 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.889 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.889 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.889 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.889 01:26:05 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.889 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.889 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.889 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.889 01:26:05 -- setup/common.sh@32 -- # 
[[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.889 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.889 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.889 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.889 01:26:05 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.889 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.889 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.889 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.889 01:26:05 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.889 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.889 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.889 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.889 01:26:05 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.889 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.889 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.889 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.889 01:26:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.889 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.889 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.889 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.889 01:26:05 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.889 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.889 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.890 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.890 01:26:05 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.890 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.890 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.890 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.890 01:26:05 -- setup/common.sh@32 -- # [[ 
FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.890 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.890 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.890 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.890 01:26:05 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.890 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.890 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.890 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.890 01:26:05 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.890 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.890 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.890 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.890 01:26:05 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.890 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.890 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.890 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.890 01:26:05 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.890 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.890 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.890 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.890 01:26:05 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.890 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.890 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.890 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.890 01:26:05 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.890 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.890 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.890 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.890 01:26:05 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.890 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.890 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.890 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.890 01:26:05 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.890 01:26:05 -- setup/common.sh@33 -- # echo 0 00:04:52.890 01:26:05 -- setup/common.sh@33 -- # return 0 00:04:52.890 01:26:05 -- setup/hugepages.sh@99 -- # surp=0 00:04:52.890 01:26:05 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:52.890 01:26:05 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:52.890 01:26:05 -- setup/common.sh@18 -- # local node= 00:04:52.890 01:26:05 -- setup/common.sh@19 -- # local var val 00:04:52.890 01:26:05 -- setup/common.sh@20 -- # local mem_f mem 00:04:52.890 01:26:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:52.890 01:26:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:52.890 01:26:05 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:52.890 01:26:05 -- setup/common.sh@28 -- # mapfile -t mem 00:04:52.890 01:26:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:52.890 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.890 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.890 01:26:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43363876 kB' 'MemAvailable: 46875304 kB' 'Buffers: 2704 kB' 'Cached: 12750332 kB' 'SwapCached: 0 kB' 'Active: 9698636 kB' 'Inactive: 3508456 kB' 'Active(anon): 9303588 kB' 'Inactive(anon): 0 kB' 'Active(file): 395048 kB' 'Inactive(file): 3508456 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 457264 kB' 'Mapped: 202356 kB' 'Shmem: 8849532 kB' 'KReclaimable: 203092 kB' 'Slab: 587568 kB' 'SReclaimable: 203092 kB' 'SUnreclaim: 384476 kB' 'KernelStack: 
12752 kB' 'PageTables: 8188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10459732 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197016 kB' 'VmallocChunk: 0 kB' 'Percpu: 37440 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2383452 kB' 'DirectMap2M: 20604928 kB' 'DirectMap1G: 46137344 kB' 00:04:52.890 01:26:05 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.890 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.890 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.890 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.890 01:26:05 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.890 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.890 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.890 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.890 01:26:05 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.890 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.890 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.890 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.890 01:26:05 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.890 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.890 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.890 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.890 01:26:05 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.890 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.890 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 
00:04:52.890 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.890 01:26:05 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.890 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.890 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.890 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.890 01:26:05 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.890 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.890 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.890 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.890 01:26:05 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.890 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.890 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.890 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.890 01:26:05 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.890 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.890 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.890 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.890 01:26:05 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.890 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.890 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.890 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.890 01:26:05 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.890 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.890 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.890 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.890 01:26:05 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.890 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.890 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.890 01:26:05 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:52.890 01:26:05 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.890 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.890 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.890 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.890 01:26:05 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.890 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.890 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.890 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.890 01:26:05 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.890 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.890 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.890 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.890 01:26:05 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.890 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.890 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.890 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.890 01:26:05 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.890 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.890 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.890 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.890 01:26:05 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.890 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.890 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.890 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.890 01:26:05 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.890 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.890 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.890 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 
00:04:52.890 01:26:05 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.890 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.890 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.890 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.890 01:26:05 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.890 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.890 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.890 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.890 01:26:05 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.890 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.891 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.891 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.891 01:26:05 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.891 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.891 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.891 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.891 01:26:05 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.891 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.891 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.891 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.891 01:26:05 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.891 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.891 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.891 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.891 01:26:05 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.891 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.891 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.891 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.891 01:26:05 -- setup/common.sh@32 
-- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.891 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.891 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.891 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.891 01:26:05 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.891 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.891 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.891 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.891 01:26:05 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.891 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.891 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.891 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.891 01:26:05 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.891 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.891 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.891 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.891 01:26:05 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.891 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.891 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.891 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.891 01:26:05 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.891 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.891 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.891 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.891 01:26:05 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.891 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.891 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.891 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.891 01:26:05 -- setup/common.sh@32 -- # [[ CommitLimit == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.891 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.891 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.891 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.891 01:26:05 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.891 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.891 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.891 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.891 01:26:05 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.891 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.891 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.891 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.891 01:26:05 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.891 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.891 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.891 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.891 01:26:05 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.891 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.891 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.891 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.891 01:26:05 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.891 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.891 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.891 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.891 01:26:05 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.891 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.891 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.891 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.891 01:26:05 -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.891 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.891 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.891 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.891 01:26:05 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.891 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.891 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.891 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.891 01:26:05 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.891 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.891 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.891 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.891 01:26:05 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.891 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.891 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.891 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.891 01:26:05 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.891 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.891 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.891 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.891 01:26:05 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.891 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.891 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.891 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.891 01:26:05 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.891 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.891 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.891 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.891 01:26:05 -- setup/common.sh@32 -- # [[ Unaccepted == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.891 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.891 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.891 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.891 01:26:05 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.891 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.891 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.891 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.891 01:26:05 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.891 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.891 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.891 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.891 01:26:05 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.891 01:26:05 -- setup/common.sh@33 -- # echo 0 00:04:52.891 01:26:05 -- setup/common.sh@33 -- # return 0 00:04:52.891 01:26:05 -- setup/hugepages.sh@100 -- # resv=0 00:04:52.891 01:26:05 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:52.891 nr_hugepages=1024 00:04:52.891 01:26:05 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:52.891 resv_hugepages=0 00:04:52.891 01:26:05 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:52.891 surplus_hugepages=0 00:04:52.891 01:26:05 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:52.891 anon_hugepages=0 00:04:52.891 01:26:05 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:52.891 01:26:05 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:52.891 01:26:05 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:52.891 01:26:05 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:52.891 01:26:05 -- setup/common.sh@18 -- # local node= 00:04:52.891 01:26:05 -- setup/common.sh@19 -- # local var val 00:04:52.891 01:26:05 -- setup/common.sh@20 -- # local 
mem_f mem 00:04:52.891 01:26:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:52.891 01:26:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:52.891 01:26:05 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:52.891 01:26:05 -- setup/common.sh@28 -- # mapfile -t mem 00:04:52.891 01:26:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:52.891 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.891 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.891 01:26:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43361356 kB' 'MemAvailable: 46872784 kB' 'Buffers: 2704 kB' 'Cached: 12750344 kB' 'SwapCached: 0 kB' 'Active: 9701008 kB' 'Inactive: 3508456 kB' 'Active(anon): 9305960 kB' 'Inactive(anon): 0 kB' 'Active(file): 395048 kB' 'Inactive(file): 3508456 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 459592 kB' 'Mapped: 202792 kB' 'Shmem: 8849544 kB' 'KReclaimable: 203092 kB' 'Slab: 587568 kB' 'SReclaimable: 203092 kB' 'SUnreclaim: 384476 kB' 'KernelStack: 12688 kB' 'PageTables: 7996 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10463080 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197000 kB' 'VmallocChunk: 0 kB' 'Percpu: 37440 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2383452 kB' 'DirectMap2M: 20604928 kB' 'DirectMap1G: 46137344 kB' 00:04:52.892 01:26:05 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.892 01:26:05 -- setup/common.sh@32 -- # continue 
00:04:52.892 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.892 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.892 01:26:05 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.892 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.892 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.892 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.892 01:26:05 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.892 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.892 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.892 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.892 01:26:05 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.892 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.892 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.892 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.892 01:26:05 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.892 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.892 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.892 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.892 01:26:05 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.892 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.892 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.892 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.892 01:26:05 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.892 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.892 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.892 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.892 01:26:05 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.892 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.892 01:26:05 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:52.892 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.892 01:26:05 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.892 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.892 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.892 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.892 01:26:05 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.892 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.892 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.892 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.892 01:26:05 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.892 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.892 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.892 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.892 01:26:05 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.892 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.892 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.892 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.892 01:26:05 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.892 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.892 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.892 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.892 01:26:05 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.892 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.892 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.892 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.892 01:26:05 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.892 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.892 01:26:05 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:52.892 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.892 01:26:05 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.892 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.892 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.892 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.892 01:26:05 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.892 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.892 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.892 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.892 01:26:05 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.892 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.892 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.892 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.892 01:26:05 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.892 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.892 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.892 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.892 01:26:05 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.892 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.892 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.892 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.892 01:26:05 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.892 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.892 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.892 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.892 01:26:05 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.892 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.892 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 
00:04:52.892 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.892 01:26:05 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.892 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.892 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.892 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.892 01:26:05 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.892 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.892 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.892 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.892 01:26:05 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.892 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.892 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.892 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.892 01:26:05 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.892 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.892 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.892 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.892 01:26:05 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.892 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.892 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.892 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.892 01:26:05 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.892 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.892 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.892 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.892 01:26:05 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.892 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.892 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.892 01:26:05 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:52.892 01:26:05 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.892 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.892 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.892 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.892 01:26:05 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.892 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.893 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.893 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.893 01:26:05 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.893 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.893 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.893 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.893 01:26:05 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.893 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.893 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.893 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.893 01:26:05 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.893 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.893 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.893 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.893 01:26:05 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.893 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.893 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.893 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.893 01:26:05 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.893 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.893 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.893 01:26:05 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:52.893 01:26:05 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.893 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.893 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.893 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.893 01:26:05 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.893 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.893 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.893 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.893 01:26:05 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.893 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.893 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.893 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.893 01:26:05 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.893 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.893 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.893 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.893 01:26:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.893 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.893 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.893 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.893 01:26:05 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.893 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.893 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.893 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.893 01:26:05 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.893 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.893 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.893 01:26:05 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:52.893 01:26:05 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.893 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.893 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.893 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.893 01:26:05 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.893 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.893 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.893 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.893 01:26:05 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.893 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.893 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.893 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.893 01:26:05 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.893 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.893 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.893 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.893 01:26:05 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.893 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.893 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.893 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.893 01:26:05 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.893 01:26:05 -- setup/common.sh@33 -- # echo 1024 00:04:52.893 01:26:05 -- setup/common.sh@33 -- # return 0 00:04:52.893 01:26:05 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:52.893 01:26:05 -- setup/hugepages.sh@112 -- # get_nodes 00:04:52.893 01:26:05 -- setup/hugepages.sh@27 -- # local node 00:04:52.893 01:26:05 -- setup/hugepages.sh@29 -- # for node in 
/sys/devices/system/node/node+([0-9]) 00:04:52.893 01:26:05 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:52.893 01:26:05 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:52.893 01:26:05 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:52.893 01:26:05 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:52.893 01:26:05 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:52.893 01:26:05 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:52.893 01:26:05 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:52.893 01:26:05 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:52.893 01:26:05 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:52.893 01:26:05 -- setup/common.sh@18 -- # local node=0 00:04:52.893 01:26:05 -- setup/common.sh@19 -- # local var val 00:04:52.893 01:26:05 -- setup/common.sh@20 -- # local mem_f mem 00:04:52.893 01:26:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:52.893 01:26:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:52.893 01:26:05 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:52.893 01:26:05 -- setup/common.sh@28 -- # mapfile -t mem 00:04:52.893 01:26:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:52.893 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.893 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.893 01:26:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 26877812 kB' 'MemUsed: 5952072 kB' 'SwapCached: 0 kB' 'Active: 3584372 kB' 'Inactive: 156516 kB' 'Active(anon): 3423080 kB' 'Inactive(anon): 0 kB' 'Active(file): 161292 kB' 'Inactive(file): 156516 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3534900 kB' 'Mapped: 72728 kB' 'AnonPages: 209152 kB' 'Shmem: 3217092 kB' 'KernelStack: 6984 kB' 'PageTables: 3576 kB' 'SecPageTables: 
0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 96944 kB' 'Slab: 324348 kB' 'SReclaimable: 96944 kB' 'SUnreclaim: 227404 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:52.893 01:26:05 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.893 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.893 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.893 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.893 01:26:05 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.893 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.893 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.893 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.893 01:26:05 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.893 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.893 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.893 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.893 01:26:05 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.893 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.893 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.893 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.893 01:26:05 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.893 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.893 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.893 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.893 01:26:05 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.893 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.893 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.893 01:26:05 -- setup/common.sh@31 
-- # read -r var val _ 00:04:52.893 01:26:05 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.893 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.893 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.893 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.893 01:26:05 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.893 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.893 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.893 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.893 01:26:05 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.893 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.893 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.893 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.893 01:26:05 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.893 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.893 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.893 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.893 01:26:05 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.893 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.893 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.893 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.893 01:26:05 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.893 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.893 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.894 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.894 01:26:05 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.894 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.894 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.894 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 
00:04:52.894 01:26:05 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.894 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.894 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.894 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.894 01:26:05 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.894 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.894 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.894 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.894 01:26:05 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.894 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.894 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.894 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.894 01:26:05 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.894 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.894 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.894 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.894 01:26:05 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.894 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.894 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.894 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.894 01:26:05 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.894 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.894 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.894 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.894 01:26:05 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.894 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.894 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.894 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.894 01:26:05 -- setup/common.sh@32 
-- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.894 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.894 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.894 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.894 01:26:05 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.894 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.894 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.894 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.894 01:26:05 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.894 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.894 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.894 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.894 01:26:05 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.894 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.894 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.894 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.894 01:26:05 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.894 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.894 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.894 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.894 01:26:05 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.894 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.894 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.894 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.894 01:26:05 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.894 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.894 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.894 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.894 01:26:05 -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.894 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.894 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.894 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.894 01:26:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.894 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.894 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.894 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.894 01:26:05 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.894 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.894 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.894 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.894 01:26:05 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.894 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.894 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.894 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.894 01:26:05 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.894 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.894 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.894 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.894 01:26:05 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.894 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.894 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.894 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.894 01:26:05 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.894 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.894 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.894 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.894 01:26:05 -- setup/common.sh@32 -- # [[ HugePages_Total == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.894 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.894 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.894 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.894 01:26:05 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.894 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.894 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.894 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.894 01:26:05 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.894 01:26:05 -- setup/common.sh@33 -- # echo 0 00:04:52.894 01:26:05 -- setup/common.sh@33 -- # return 0 00:04:52.894 01:26:05 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:52.894 01:26:05 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:52.894 01:26:05 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:52.894 01:26:05 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:52.894 01:26:05 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:52.894 01:26:05 -- setup/common.sh@18 -- # local node=1 00:04:52.894 01:26:05 -- setup/common.sh@19 -- # local var val 00:04:52.894 01:26:05 -- setup/common.sh@20 -- # local mem_f mem 00:04:52.894 01:26:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:52.894 01:26:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:52.894 01:26:05 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:52.894 01:26:05 -- setup/common.sh@28 -- # mapfile -t mem 00:04:52.894 01:26:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:52.894 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.894 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.894 01:26:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 16477496 kB' 'MemUsed: 11234328 kB' 'SwapCached: 0 kB' 
'Active: 6119976 kB' 'Inactive: 3351940 kB' 'Active(anon): 5886220 kB' 'Inactive(anon): 0 kB' 'Active(file): 233756 kB' 'Inactive(file): 3351940 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9218164 kB' 'Mapped: 130412 kB' 'AnonPages: 253840 kB' 'Shmem: 5632468 kB' 'KernelStack: 5752 kB' 'PageTables: 4588 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 106148 kB' 'Slab: 263220 kB' 'SReclaimable: 106148 kB' 'SUnreclaim: 157072 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:52.894 01:26:05 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.894 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.894 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.894 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.894 01:26:05 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.894 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.894 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.894 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.894 01:26:05 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.894 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.894 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.894 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.894 01:26:05 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.894 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.894 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.894 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.894 01:26:05 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.894 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.894 
01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.894 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.894 01:26:05 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.894 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.894 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.895 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.895 01:26:05 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.895 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.895 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.895 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.895 01:26:05 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.895 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.895 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.895 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.895 01:26:05 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.895 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.895 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.895 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.895 01:26:05 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.895 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.895 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.895 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.895 01:26:05 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.895 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.895 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.895 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.895 01:26:05 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.895 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.895 01:26:05 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:52.895 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.895 01:26:05 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.895 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.895 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.895 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.895 01:26:05 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.895 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.895 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.895 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.895 01:26:05 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.895 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.895 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.895 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.895 01:26:05 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.895 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.895 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.895 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.895 01:26:05 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.895 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.895 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.895 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.895 01:26:05 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.895 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.895 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.895 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.895 01:26:05 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.895 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.895 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.895 
01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.895 01:26:05 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.895 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.895 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.895 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.895 01:26:05 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.895 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.895 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.895 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.895 01:26:05 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.895 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.895 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.895 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.895 01:26:05 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.895 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.895 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.895 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.895 01:26:05 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.895 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.895 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.895 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.895 01:26:05 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.895 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.895 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.895 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.895 01:26:05 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.895 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.895 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.895 01:26:05 -- setup/common.sh@31 -- 
# read -r var val _ 00:04:52.895 01:26:05 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.895 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.895 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.895 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.895 01:26:05 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.895 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.895 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.895 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.895 01:26:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.895 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.895 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.895 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.895 01:26:05 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.895 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.895 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.895 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.895 01:26:05 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.895 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.895 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.895 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.895 01:26:05 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.895 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.895 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.895 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.895 01:26:05 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.895 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.895 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.895 01:26:05 -- setup/common.sh@31 -- # read -r var 
val _ 00:04:52.895 01:26:05 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.895 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.895 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.895 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.895 01:26:05 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.895 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.895 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.895 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.895 01:26:05 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.895 01:26:05 -- setup/common.sh@32 -- # continue 00:04:52.895 01:26:05 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.895 01:26:05 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.895 01:26:05 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.895 01:26:05 -- setup/common.sh@33 -- # echo 0 00:04:52.895 01:26:05 -- setup/common.sh@33 -- # return 0 00:04:52.895 01:26:05 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:52.895 01:26:05 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:52.895 01:26:05 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:52.895 01:26:05 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:52.895 01:26:05 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:52.895 node0=512 expecting 512 00:04:52.895 01:26:05 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:52.895 01:26:05 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:52.895 01:26:05 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:52.895 01:26:05 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:52.895 node1=512 expecting 512 00:04:52.895 01:26:05 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:52.895 00:04:52.895 real 
0m1.388s 00:04:52.895 user 0m0.545s 00:04:52.895 sys 0m0.807s 00:04:52.895 01:26:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:52.895 01:26:05 -- common/autotest_common.sh@10 -- # set +x 00:04:52.895 ************************************ 00:04:52.895 END TEST per_node_1G_alloc 00:04:52.895 ************************************ 00:04:52.895 01:26:05 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:52.895 01:26:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:52.895 01:26:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:52.895 01:26:05 -- common/autotest_common.sh@10 -- # set +x 00:04:52.895 ************************************ 00:04:52.895 START TEST even_2G_alloc 00:04:52.895 ************************************ 00:04:52.895 01:26:05 -- common/autotest_common.sh@1104 -- # even_2G_alloc 00:04:52.895 01:26:05 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:52.895 01:26:05 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:52.895 01:26:05 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:52.895 01:26:05 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:52.895 01:26:05 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:52.895 01:26:05 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:52.896 01:26:05 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:52.896 01:26:05 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:52.896 01:26:05 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:52.896 01:26:05 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:52.896 01:26:05 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:52.896 01:26:05 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:52.896 01:26:05 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:52.896 01:26:05 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:52.896 01:26:05 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:52.896 01:26:05 -- setup/hugepages.sh@82 -- # 
nodes_test[_no_nodes - 1]=512 00:04:52.896 01:26:05 -- setup/hugepages.sh@83 -- # : 512 00:04:52.896 01:26:05 -- setup/hugepages.sh@84 -- # : 1 00:04:52.896 01:26:05 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:52.896 01:26:05 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:52.896 01:26:05 -- setup/hugepages.sh@83 -- # : 0 00:04:52.896 01:26:05 -- setup/hugepages.sh@84 -- # : 0 00:04:52.896 01:26:05 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:52.896 01:26:05 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:52.896 01:26:05 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:52.896 01:26:05 -- setup/hugepages.sh@153 -- # setup output 00:04:52.896 01:26:05 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:52.896 01:26:05 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:54.276 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:54.276 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:54.276 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:54.276 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:54.276 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:54.276 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:54.276 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:54.276 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:54.276 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:54.276 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:54.276 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:54.276 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:54.276 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:54.276 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:54.276 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:54.276 0000:80:04.1 (8086 
0e21): Already using the vfio-pci driver 00:04:54.276 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:54.276 01:26:07 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:54.276 01:26:07 -- setup/hugepages.sh@89 -- # local node 00:04:54.276 01:26:07 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:54.276 01:26:07 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:54.276 01:26:07 -- setup/hugepages.sh@92 -- # local surp 00:04:54.276 01:26:07 -- setup/hugepages.sh@93 -- # local resv 00:04:54.276 01:26:07 -- setup/hugepages.sh@94 -- # local anon 00:04:54.276 01:26:07 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:54.276 01:26:07 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:54.276 01:26:07 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:54.276 01:26:07 -- setup/common.sh@18 -- # local node= 00:04:54.276 01:26:07 -- setup/common.sh@19 -- # local var val 00:04:54.276 01:26:07 -- setup/common.sh@20 -- # local mem_f mem 00:04:54.276 01:26:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:54.276 01:26:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:54.276 01:26:07 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:54.276 01:26:07 -- setup/common.sh@28 -- # mapfile -t mem 00:04:54.276 01:26:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:54.276 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.276 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.276 01:26:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43364820 kB' 'MemAvailable: 46876248 kB' 'Buffers: 2704 kB' 'Cached: 12750412 kB' 'SwapCached: 0 kB' 'Active: 9699008 kB' 'Inactive: 3508456 kB' 'Active(anon): 9303960 kB' 'Inactive(anon): 0 kB' 'Active(file): 395048 kB' 'Inactive(file): 3508456 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 
'Writeback: 0 kB' 'AnonPages: 457604 kB' 'Mapped: 202852 kB' 'Shmem: 8849612 kB' 'KReclaimable: 203092 kB' 'Slab: 587548 kB' 'SReclaimable: 203092 kB' 'SUnreclaim: 384456 kB' 'KernelStack: 12720 kB' 'PageTables: 8124 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10459932 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197032 kB' 'VmallocChunk: 0 kB' 'Percpu: 37440 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2383452 kB' 'DirectMap2M: 20604928 kB' 'DirectMap1G: 46137344 kB' 00:04:54.276 01:26:07 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.276 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.276 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.276 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.276 01:26:07 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.276 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.276 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.276 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.276 01:26:07 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.276 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.276 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.276 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.276 01:26:07 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.276 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.276 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.276 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.276 01:26:07 -- 
setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.276 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.276 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.276 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.276 01:26:07 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.276 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.276 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.276 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.276 01:26:07 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.276 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.276 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.276 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.276 01:26:07 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.276 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.276 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.276 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.276 01:26:07 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.276 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.276 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.276 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.276 01:26:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.276 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.276 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.276 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.276 01:26:07 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.276 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.276 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.276 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.276 01:26:07 -- setup/common.sh@32 -- # [[ Inactive(file) == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.276 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.276 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.276 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.276 01:26:07 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.276 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.276 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.276 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.277 01:26:07 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.277 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.277 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.277 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.277 01:26:07 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.277 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.277 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.277 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.277 01:26:07 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.277 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.277 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.277 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.277 01:26:07 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.277 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.277 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.277 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.277 01:26:07 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.277 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.277 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.277 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.277 01:26:07 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.277 01:26:07 -- 
setup/common.sh@32 -- # continue 00:04:54.277 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.277 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.277 01:26:07 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.277 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.277 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.277 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.277 01:26:07 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.277 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.277 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.277 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.277 01:26:07 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.277 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.277 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.277 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.277 01:26:07 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.277 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.277 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.277 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.277 01:26:07 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.277 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.277 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.277 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.277 01:26:07 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.277 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.277 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.277 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.277 01:26:07 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.277 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.277 01:26:07 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:54.277 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.277 01:26:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.277 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.277 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.277 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.277 01:26:07 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.277 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.277 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.277 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.277 01:26:07 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.277 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.277 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.277 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.277 01:26:07 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.277 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.277 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.277 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.277 01:26:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.277 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.277 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.277 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.277 01:26:07 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.277 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.277 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.277 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.277 01:26:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.277 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.277 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 
00:04:54.277 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.277 01:26:07 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.277 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.277 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.277 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.277 01:26:07 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.277 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.277 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.277 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.277 01:26:07 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.277 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.277 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.277 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.277 01:26:07 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.277 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.277 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.277 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.277 01:26:07 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.277 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.277 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.277 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.277 01:26:07 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.277 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.277 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.277 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.277 01:26:07 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.277 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.277 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.277 01:26:07 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:54.277 01:26:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:54.277 01:26:07 -- setup/common.sh@33 -- # echo 0 00:04:54.277 01:26:07 -- setup/common.sh@33 -- # return 0 00:04:54.277 01:26:07 -- setup/hugepages.sh@97 -- # anon=0 00:04:54.277 01:26:07 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:54.277 01:26:07 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:54.277 01:26:07 -- setup/common.sh@18 -- # local node= 00:04:54.277 01:26:07 -- setup/common.sh@19 -- # local var val 00:04:54.277 01:26:07 -- setup/common.sh@20 -- # local mem_f mem 00:04:54.277 01:26:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:54.277 01:26:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:54.277 01:26:07 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:54.277 01:26:07 -- setup/common.sh@28 -- # mapfile -t mem 00:04:54.277 01:26:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:54.277 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.277 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.277 01:26:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43373144 kB' 'MemAvailable: 46884572 kB' 'Buffers: 2704 kB' 'Cached: 12750416 kB' 'SwapCached: 0 kB' 'Active: 9699408 kB' 'Inactive: 3508456 kB' 'Active(anon): 9304360 kB' 'Inactive(anon): 0 kB' 'Active(file): 395048 kB' 'Inactive(file): 3508456 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 458020 kB' 'Mapped: 202456 kB' 'Shmem: 8849616 kB' 'KReclaimable: 203092 kB' 'Slab: 587520 kB' 'SReclaimable: 203092 kB' 'SUnreclaim: 384428 kB' 'KernelStack: 12688 kB' 'PageTables: 8024 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10459944 kB' 
'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197000 kB' 'VmallocChunk: 0 kB' 'Percpu: 37440 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2383452 kB' 'DirectMap2M: 20604928 kB' 'DirectMap1G: 46137344 kB' 00:04:54.277 01:26:07 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.277 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.277 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.277 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.277 01:26:07 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.277 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.277 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.277 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.277 01:26:07 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.277 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.277 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.277 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.277 01:26:07 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.278 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.278 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.278 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.278 01:26:07 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.278 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.278 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.278 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.278 01:26:07 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:54.278 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.278 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.278 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.278 01:26:07 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.278 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.278 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.278 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.278 01:26:07 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.278 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.278 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.278 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.278 01:26:07 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.278 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.278 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.278 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.278 01:26:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.278 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.278 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.278 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.278 01:26:07 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.278 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.278 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.278 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.278 01:26:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.278 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.278 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.278 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.278 01:26:07 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.278 01:26:07 -- 
setup/common.sh@32 -- # continue 00:04:54.278 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.278 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.278 01:26:07 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.278 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.278 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.278 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.278 01:26:07 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.278 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.278 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.278 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.278 01:26:07 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.278 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.278 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.278 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.278 01:26:07 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.278 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.278 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.278 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.278 01:26:07 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.278 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.278 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.278 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.278 01:26:07 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.278 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.278 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.278 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.278 01:26:07 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.278 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.278 
01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.278 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.278 01:26:07 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.278 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.278 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.278 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.278 01:26:07 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.278 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.278 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.278 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.278 01:26:07 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.278 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.278 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.278 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.278 01:26:07 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.278 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.278 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.278 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.278 01:26:07 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.278 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.278 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.278 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.278 01:26:07 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.278 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.278 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.278 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.278 01:26:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.278 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.278 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 
00:04:54.278 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.278 01:26:07 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.278 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.278 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.278 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.278 01:26:07 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.278 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.278 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.278 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.278 01:26:07 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.278 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.278 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.278 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.278 01:26:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.278 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.278 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.278 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.278 01:26:07 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.278 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.278 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.278 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.278 01:26:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.278 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.278 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.278 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.278 01:26:07 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.278 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.278 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.278 01:26:07 -- 
setup/common.sh@31 -- # read -r var val _
00:04:54.278-279 01:26:07 -- setup/common.sh@31-32 -- # HugePages_Surp scan (continued): Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free and HugePages_Rsvd each fail [[ $var == HugePages_Surp ]] and continue
00:04:54.279 01:26:07 -- setup/common.sh@32 -- # [[ HugePages_Surp == HugePages_Surp ]]
00:04:54.279 01:26:07 -- setup/common.sh@33 -- # echo 0
00:04:54.279 01:26:07 -- setup/common.sh@33 -- # return 0
00:04:54.279 01:26:07 -- setup/hugepages.sh@99 -- # surp=0
00:04:54.279 01:26:07 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:54.279 01:26:07 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:54.279 01:26:07 -- setup/common.sh@18 -- # local node=
00:04:54.279 01:26:07 -- setup/common.sh@19 -- # local var val
00:04:54.279 01:26:07 -- setup/common.sh@20 -- # local mem_f mem
00:04:54.279 01:26:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:54.279 01:26:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:54.279 01:26:07 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:54.279 01:26:07 -- setup/common.sh@28 -- # mapfile -t mem
00:04:54.279 01:26:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:54.279 01:26:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43373556 kB' 'MemAvailable: 46884984 kB' 'Buffers: 2704 kB' 'Cached: 12750420 kB' 'SwapCached: 0 kB' 'Active: 9699116 kB' 'Inactive: 3508456 kB' 'Active(anon): 9304068 kB' 'Inactive(anon): 0 kB' 'Active(file): 395048 kB' 'Inactive(file): 3508456 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 457704 kB' 'Mapped: 202444 kB' 'Shmem: 8849620 kB' 'KReclaimable: 203092 kB' 'Slab: 587520 kB' 'SReclaimable: 203092 kB' 'SUnreclaim: 384428 kB' 'KernelStack: 12688 kB' 'PageTables: 7976 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10459960 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197000 kB' 'VmallocChunk: 0 kB' 'Percpu: 37440 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2383452 kB' 'DirectMap2M: 20604928 kB' 'DirectMap1G: 46137344 kB'
00:04:54.279-280 01:26:07 -- setup/common.sh@31-32 -- # HugePages_Rsvd scan: every field from MemTotal through HugePages_Free fails [[ $var == HugePages_Rsvd ]] and continues
00:04:54.280 01:26:07 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == HugePages_Rsvd ]]
00:04:54.280 01:26:07 -- setup/common.sh@33 -- # echo 0
00:04:54.280 01:26:07 -- setup/common.sh@33 -- # return 0
00:04:54.280 01:26:07 -- setup/hugepages.sh@100 -- # resv=0
00:04:54.280 01:26:07 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
nr_hugepages=1024
00:04:54.280 01:26:07 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:04:54.280 01:26:07 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
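The repeated read/continue trace above is SPDK's get_meminfo helper walking /proc/meminfo line by line until the requested field name matches, then echoing its value. A minimal standalone sketch of that scan, assuming the standard `Field:   value [kB]` layout (the function name get_meminfo_field and the canned snapshot are illustrative, not SPDK's actual helper):

```shell
#!/usr/bin/env bash
# Sketch of the field scan seen in the trace: split each meminfo line
# on ':' and whitespace, skip non-matching fields (the long run of
# "continue" entries in the log), and print the value of the match.
get_meminfo_field() {
    local get=$1 mem_f=${2:-/proc/meminfo}
    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # not the field we want; keep scanning
        echo "$val"
        return 0
    done < "$mem_f"
    return 1
}

# Exercise it against a canned snapshot instead of the live kernel file,
# so the sketch runs anywhere.
sample=$(mktemp)
cat > "$sample" <<'EOF'
MemTotal:       60541708 kB
HugePages_Total:    1024
HugePages_Free:     1024
HugePages_Rsvd:        0
HugePages_Surp:        0
EOF
get_meminfo_field HugePages_Surp "$sample"    # prints 0
get_meminfo_field HugePages_Total "$sample"   # prints 1024
rm -f "$sample"
```

The `HugePages_Surp` and `HugePages_Rsvd` queries in the log resolve to 0 exactly this way, which is what `surp=0` and `resv=0` record.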
00:04:54.280 01:26:07 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:04:54.280 01:26:07 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:54.280 01:26:07 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:54.280 01:26:07 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:54.280 01:26:07 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:54.280 01:26:07 -- setup/common.sh@18 -- # local node=
00:04:54.280 01:26:07 -- setup/common.sh@19 -- # local var val
00:04:54.280 01:26:07 -- setup/common.sh@20 -- # local mem_f mem
00:04:54.280 01:26:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:54.280 01:26:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:54.280 01:26:07 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:54.280 01:26:07 -- setup/common.sh@28 -- # mapfile -t mem
00:04:54.280 01:26:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:54.280 01:26:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43373624 kB' 'MemAvailable: 46885052 kB' 'Buffers: 2704 kB' 'Cached: 12750440 kB' 'SwapCached: 0 kB' 'Active: 9698880 kB' 'Inactive: 3508456 kB' 'Active(anon): 9303832 kB' 'Inactive(anon): 0 kB' 'Active(file): 395048 kB' 'Inactive(file): 3508456 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 457452 kB' 'Mapped: 202364 kB' 'Shmem: 8849640 kB' 'KReclaimable: 203092 kB' 'Slab: 587528 kB' 'SReclaimable: 203092 kB' 'SUnreclaim: 384436 kB' 'KernelStack: 12752 kB' 'PageTables: 8144 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10459976 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197016 kB' 'VmallocChunk: 0 kB' 'Percpu: 37440 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2383452 kB' 'DirectMap2M: 20604928 kB' 'DirectMap1G: 46137344 kB'
00:04:54.281-282 01:26:07 -- setup/common.sh@31-32 -- # HugePages_Total scan: every field from MemTotal through Unaccepted fails [[ $var == HugePages_Total ]] and continues
00:04:54.282 01:26:07 -- setup/common.sh@32 -- # [[ HugePages_Total == HugePages_Total ]]
00:04:54.282 01:26:07 -- setup/common.sh@33 -- # echo 1024
00:04:54.282 01:26:07 -- setup/common.sh@33 -- # return 0
00:04:54.282 01:26:07 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:54.282 01:26:07 -- setup/hugepages.sh@112 -- # get_nodes
00:04:54.282 01:26:07 -- setup/hugepages.sh@27 -- # local node
00:04:54.282 01:26:07 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:54.282 01:26:07 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:54.282 01:26:07 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:54.282 01:26:07 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:54.282 01:26:07 -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:54.282 01:26:07 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:54.282 01:26:07 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:54.282 01:26:07 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:54.282 01:26:07 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:54.282 01:26:07 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:54.282 01:26:07 -- setup/common.sh@18 -- # local node=0
00:04:54.282 01:26:07 -- setup/common.sh@19 -- # local var val
00:04:54.282 01:26:07 -- setup/common.sh@20 -- # local mem_f mem
00:04:54.282 01:26:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:54.282 01:26:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:54.282 01:26:07 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:54.282 01:26:07 -- setup/common.sh@28 -- # mapfile -t mem
00:04:54.282 01:26:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
setup/common.sh@31 -- # IFS=': ' 00:04:54.282 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.282 01:26:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 26879876 kB' 'MemUsed: 5950008 kB' 'SwapCached: 0 kB' 'Active: 3578340 kB' 'Inactive: 156516 kB' 'Active(anon): 3417048 kB' 'Inactive(anon): 0 kB' 'Active(file): 161292 kB' 'Inactive(file): 156516 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3534932 kB' 'Mapped: 72584 kB' 'AnonPages: 203080 kB' 'Shmem: 3217124 kB' 'KernelStack: 7016 kB' 'PageTables: 3500 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 96944 kB' 'Slab: 324440 kB' 'SReclaimable: 96944 kB' 'SUnreclaim: 227496 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:54.282 01:26:07 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.282 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.282 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.282 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.282 01:26:07 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.282 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.282 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.282 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.282 01:26:07 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.282 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.282 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.282 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.282 01:26:07 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.282 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.282 01:26:07 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:54.282 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.282 01:26:07 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.282 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.282 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.282 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.282 01:26:07 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.282 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.282 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.282 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.282 01:26:07 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.282 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.282 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.282 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.282 01:26:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.282 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.282 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.282 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.282 01:26:07 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.282 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.282 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.282 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.282 01:26:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.282 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.282 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.282 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.282 01:26:07 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.282 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.282 01:26:07 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:54.282 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.282 01:26:07 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.282 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.282 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.282 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.282 01:26:07 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.282 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.283 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.283 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.283 01:26:07 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.283 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.283 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.283 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.283 01:26:07 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.283 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.283 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.283 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.283 01:26:07 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.283 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.283 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.283 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.283 01:26:07 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.283 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.283 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.283 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.283 01:26:07 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.283 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.283 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.283 01:26:07 -- setup/common.sh@31 
-- # read -r var val _ 00:04:54.283 01:26:07 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.283 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.283 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.283 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.283 01:26:07 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.283 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.283 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.283 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.283 01:26:07 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.283 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.283 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.283 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.283 01:26:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.283 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.283 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.283 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.283 01:26:07 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.283 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.283 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.283 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.283 01:26:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.283 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.283 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.283 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.283 01:26:07 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.283 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.283 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.283 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 
00:04:54.283 01:26:07 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.283 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.283 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.283 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.283 01:26:07 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.283 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.283 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.283 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.283 01:26:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.283 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.283 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.283 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.283 01:26:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.283 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.283 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.283 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.283 01:26:07 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.283 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.283 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.283 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.283 01:26:07 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.283 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.283 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.283 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.283 01:26:07 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.283 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.283 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.283 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.283 01:26:07 
-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.283 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.283 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.283 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.283 01:26:07 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.283 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.283 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.283 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.283 01:26:07 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.283 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.283 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.283 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.283 01:26:07 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.283 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.283 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.283 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.283 01:26:07 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.283 01:26:07 -- setup/common.sh@33 -- # echo 0 00:04:54.283 01:26:07 -- setup/common.sh@33 -- # return 0 00:04:54.283 01:26:07 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:54.283 01:26:07 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:54.283 01:26:07 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:54.283 01:26:07 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:54.283 01:26:07 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:54.283 01:26:07 -- setup/common.sh@18 -- # local node=1 00:04:54.283 01:26:07 -- setup/common.sh@19 -- # local var val 00:04:54.283 01:26:07 -- setup/common.sh@20 -- # local mem_f mem 00:04:54.283 01:26:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 
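The trace above is `get_meminfo` scanning a per-node meminfo file (`/sys/devices/system/node/node0/meminfo`) field by field until it hits the requested key. A minimal sketch of that lookup, using made-up sample data rather than a live system (the real helper strips the `Node N ` prefix with an extglob before the `IFS=': '` read loop; here the prefix is simply absorbed by extra read variables):

```shell
# Sample per-node meminfo lines; values are illustrative only.
sample='Node 0 MemTotal: 32829884 kB
Node 0 HugePages_Total: 512
Node 0 HugePages_Surp: 0'

get_field() {
  get=$1
  # IFS=': ' splits on both the colon and runs of spaces, so
  # "Node 0 HugePages_Surp:  0" reads as: n0=Node n1=0 var=... val=...
  printf '%s\n' "$sample" | while IFS=': ' read -r n0 n1 var val _; do
    [ "$var" = "$get" ] && { echo "$val"; break; }
  done
}

get_field HugePages_Surp   # prints 0
```

This mirrors why the log shows one `continue` per non-matching field name before the final `echo 0` / `return 0` pair.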
00:04:54.283 01:26:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:54.283 01:26:07 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:54.283 01:26:07 -- setup/common.sh@28 -- # mapfile -t mem 00:04:54.283 01:26:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:54.283 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.283 01:26:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 16493748 kB' 'MemUsed: 11218076 kB' 'SwapCached: 0 kB' 'Active: 6120832 kB' 'Inactive: 3351940 kB' 'Active(anon): 5887076 kB' 'Inactive(anon): 0 kB' 'Active(file): 233756 kB' 'Inactive(file): 3351940 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9218240 kB' 'Mapped: 129780 kB' 'AnonPages: 254668 kB' 'Shmem: 5632544 kB' 'KernelStack: 5752 kB' 'PageTables: 4692 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 106148 kB' 'Slab: 263088 kB' 'SReclaimable: 106148 kB' 'SUnreclaim: 156940 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:54.283 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.283 01:26:07 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.283 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.283 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.283 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.283 01:26:07 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.283 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.283 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.283 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.283 01:26:07 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.283 
01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.283 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.283 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.283 01:26:07 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.283 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.283 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.283 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.283 01:26:07 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.283 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.283 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.283 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.283 01:26:07 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.283 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.283 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.283 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.283 01:26:07 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.283 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.283 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.283 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.283 01:26:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.283 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.283 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.283 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.284 01:26:07 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.284 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.284 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.284 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.284 01:26:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.284 01:26:07 -- 
setup/common.sh@32 -- # continue 00:04:54.284 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.284 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.284 01:26:07 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.284 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.284 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.284 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.284 01:26:07 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.284 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.284 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.284 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.284 01:26:07 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.284 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.284 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.284 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.284 01:26:07 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.284 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.284 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.284 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.284 01:26:07 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.284 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.284 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.284 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.284 01:26:07 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.284 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.284 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.284 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.284 01:26:07 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.284 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.284 
01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.284 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.284 01:26:07 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.284 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.284 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.284 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.284 01:26:07 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.284 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.284 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.284 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.284 01:26:07 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.284 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.284 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.284 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.284 01:26:07 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.284 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.284 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.284 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.284 01:26:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.284 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.284 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.284 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.284 01:26:07 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.284 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.284 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.284 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.284 01:26:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.284 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.284 01:26:07 -- setup/common.sh@31 -- 
# IFS=': ' 00:04:54.284 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.284 01:26:07 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.284 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.284 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.284 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.284 01:26:07 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.284 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.284 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.284 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.284 01:26:07 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.284 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.284 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.284 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.284 01:26:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.284 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.284 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.284 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.284 01:26:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.284 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.284 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.284 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.284 01:26:07 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.284 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.284 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.284 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.284 01:26:07 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.284 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.284 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.284 
01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.284 01:26:07 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.284 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.284 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.284 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.284 01:26:07 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.284 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.284 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.284 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.284 01:26:07 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.284 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.284 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.284 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.284 01:26:07 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.284 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.284 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.284 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.284 01:26:07 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.284 01:26:07 -- setup/common.sh@32 -- # continue 00:04:54.284 01:26:07 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.284 01:26:07 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.284 01:26:07 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.284 01:26:07 -- setup/common.sh@33 -- # echo 0 00:04:54.284 01:26:07 -- setup/common.sh@33 -- # return 0 00:04:54.284 01:26:07 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:54.284 01:26:07 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:54.284 01:26:07 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:54.284 01:26:07 -- setup/hugepages.sh@127 -- # 
sorted_s[nodes_sys[node]]=1 00:04:54.284 01:26:07 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:54.284 node0=512 expecting 512 00:04:54.284 01:26:07 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:54.284 01:26:07 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:54.284 01:26:07 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:54.284 01:26:07 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:54.284 node1=512 expecting 512 00:04:54.284 01:26:07 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:54.284 00:04:54.284 real 0m1.356s 00:04:54.284 user 0m0.551s 00:04:54.284 sys 0m0.770s 00:04:54.284 01:26:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:54.284 01:26:07 -- common/autotest_common.sh@10 -- # set +x 00:04:54.284 ************************************ 00:04:54.284 END TEST even_2G_alloc 00:04:54.284 ************************************ 00:04:54.284 01:26:07 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:54.284 01:26:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:54.284 01:26:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:54.284 01:26:07 -- common/autotest_common.sh@10 -- # set +x 00:04:54.284 ************************************ 00:04:54.284 START TEST odd_alloc 00:04:54.284 ************************************ 00:04:54.284 01:26:07 -- common/autotest_common.sh@1104 -- # odd_alloc 00:04:54.284 01:26:07 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:54.284 01:26:07 -- setup/hugepages.sh@49 -- # local size=2098176 00:04:54.284 01:26:07 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:54.284 01:26:07 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:54.284 01:26:07 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:54.284 01:26:07 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:54.284 01:26:07 -- setup/hugepages.sh@62 -- # user_nodes=() 
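The odd_alloc test above starts from `get_test_nr_hugepages 2098176` and arrives at `nr_hugepages=1025`. A sketch of the arithmetic that is consistent with the trace, assuming the default 2048 kB hugepage size (the helper's exact rounding may differ; 1025 × 2048 kB matches the `Hugetlb: 2099200 kB` the log reports later):

```shell
# Round the requested size in kB up to whole 2 MB hugepages.
size_kb=2098176
default_hugepages_kb=2048
nr_hugepages=$(( (size_kb + default_hugepages_kb - 1) / default_hugepages_kb ))
echo "$nr_hugepages"   # prints 1025
```

The request is deliberately half a page over 2 GiB (2097152 kB + 1024 kB), so rounding up yields the odd count the test needs.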
00:04:54.284 01:26:07 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:54.284 01:26:07 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:54.284 01:26:07 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:54.284 01:26:07 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:54.284 01:26:07 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:54.284 01:26:07 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:54.284 01:26:07 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:54.284 01:26:07 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:54.284 01:26:07 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:54.284 01:26:07 -- setup/hugepages.sh@83 -- # : 513 00:04:54.285 01:26:07 -- setup/hugepages.sh@84 -- # : 1 00:04:54.285 01:26:07 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:54.285 01:26:07 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:04:54.285 01:26:07 -- setup/hugepages.sh@83 -- # : 0 00:04:54.285 01:26:07 -- setup/hugepages.sh@84 -- # : 0 00:04:54.285 01:26:07 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:54.285 01:26:07 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:54.285 01:26:07 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:54.285 01:26:07 -- setup/hugepages.sh@160 -- # setup output 00:04:54.285 01:26:07 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:54.285 01:26:07 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:55.665 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:55.665 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:55.665 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:55.665 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:55.665 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:55.665 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:55.665 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 
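The `get_test_nr_hugepages_per_node` trace above distributes the odd count across `_no_nodes=2`, ending with 513 pages on one node and 512 on the other. A sketch of that split under the assumption that the leftover page simply goes to node 0 (variable names here are illustrative, not the script's own):

```shell
# Split an odd hugepage count across two NUMA nodes:
# each node gets the even half, node 0 absorbs the remainder.
nr_hugepages=1025
no_nodes=2
per_node=$(( nr_hugepages / no_nodes ))    # 512
remainder=$(( nr_hugepages % no_nodes ))   # 1
node0=$(( per_node + remainder ))          # 513
node1=$per_node                            # 512
echo "node0=$node0 node1=$node1"   # prints node0=513 node1=512
```

This is why the later `verify_nr_hugepages` pass can expect per-node values that sum back to 1025 rather than an even 512/512 split.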
00:04:55.665 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:55.665 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:55.665 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:55.665 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:55.665 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:55.665 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:55.665 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:55.665 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:55.665 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:55.665 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:55.665 01:26:08 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:55.665 01:26:08 -- setup/hugepages.sh@89 -- # local node 00:04:55.665 01:26:08 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:55.665 01:26:08 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:55.665 01:26:08 -- setup/hugepages.sh@92 -- # local surp 00:04:55.665 01:26:08 -- setup/hugepages.sh@93 -- # local resv 00:04:55.665 01:26:08 -- setup/hugepages.sh@94 -- # local anon 00:04:55.665 01:26:08 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:55.665 01:26:08 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:55.665 01:26:08 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:55.665 01:26:08 -- setup/common.sh@18 -- # local node= 00:04:55.665 01:26:08 -- setup/common.sh@19 -- # local var val 00:04:55.665 01:26:08 -- setup/common.sh@20 -- # local mem_f mem 00:04:55.665 01:26:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.665 01:26:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:55.665 01:26:08 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:55.665 01:26:08 -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.665 01:26:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) 
}") 00:04:55.665 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.665 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.665 01:26:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43382604 kB' 'MemAvailable: 46894032 kB' 'Buffers: 2704 kB' 'Cached: 12750500 kB' 'SwapCached: 0 kB' 'Active: 9695328 kB' 'Inactive: 3508456 kB' 'Active(anon): 9300280 kB' 'Inactive(anon): 0 kB' 'Active(file): 395048 kB' 'Inactive(file): 3508456 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 453872 kB' 'Mapped: 201364 kB' 'Shmem: 8849700 kB' 'KReclaimable: 203092 kB' 'Slab: 587488 kB' 'SReclaimable: 203092 kB' 'SUnreclaim: 384396 kB' 'KernelStack: 12672 kB' 'PageTables: 7836 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 10446088 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196888 kB' 'VmallocChunk: 0 kB' 'Percpu: 37440 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2383452 kB' 'DirectMap2M: 20604928 kB' 'DirectMap1G: 46137344 kB' 00:04:55.665 01:26:08 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.665 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.665 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.665 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.665 01:26:08 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.665 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.665 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.665 01:26:08 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:55.665 01:26:08 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.665 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.665 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.665 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.665 01:26:08 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.665 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.665 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.665 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.665 01:26:08 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.665 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.665 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.665 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.665 01:26:08 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.665 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.665 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.665 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.665 01:26:08 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.665 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.665 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.665 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.665 01:26:08 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.665 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.665 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.665 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.665 01:26:08 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.665 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.665 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.665 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.665 01:26:08 -- 
setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.665 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.665 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.665 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.665 01:26:08 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.665 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.665 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.665 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.665 01:26:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.665 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.665 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.665 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.665 01:26:08 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.665 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.665 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.665 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.665 01:26:08 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.665 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.665 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.665 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.665 01:26:08 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.665 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.665 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.665 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.665 01:26:08 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.665 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.666 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.666 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.666 01:26:08 -- setup/common.sh@32 -- # [[ Zswap == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.666 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.666 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.666 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.666 01:26:08 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.666 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.666 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.666 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.666 01:26:08 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.666 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.666 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.666 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.666 01:26:08 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.666 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.666 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.666 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.666 01:26:08 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.666 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.666 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.666 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.666 01:26:08 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.666 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.666 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.666 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.666 01:26:08 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.666 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.666 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.666 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.666 01:26:08 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.666 01:26:08 -- 
setup/common.sh@32 -- # continue 00:04:55.666 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.666 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.666 01:26:08 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.666 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.666 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.666 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.666 01:26:08 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.666 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.666 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.666 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.666 01:26:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.666 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.666 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.666 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.666 01:26:08 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.666 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.666 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.666 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.666 01:26:08 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.666 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.666 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.666 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.666 01:26:08 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.666 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.666 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.666 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.666 01:26:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.666 01:26:08 -- setup/common.sh@32 -- # continue 
00:04:55.666 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.666 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.666 01:26:08 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.666 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.666 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.666 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.666 01:26:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.666 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.666 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.666 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.666 01:26:08 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.666 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.666 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.666 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.666 01:26:08 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.666 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.666 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.666 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.666 01:26:08 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.666 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.666 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.666 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.666 01:26:08 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.666 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.666 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.666 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.666 01:26:08 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.666 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.666 01:26:08 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:55.666 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.666 01:26:08 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.666 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.666 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.666 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.666 01:26:08 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.666 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.666 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.666 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.666 01:26:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.666 01:26:08 -- setup/common.sh@33 -- # echo 0 00:04:55.666 01:26:08 -- setup/common.sh@33 -- # return 0 00:04:55.666 01:26:08 -- setup/hugepages.sh@97 -- # anon=0 00:04:55.666 01:26:08 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:55.666 01:26:08 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:55.666 01:26:08 -- setup/common.sh@18 -- # local node= 00:04:55.666 01:26:08 -- setup/common.sh@19 -- # local var val 00:04:55.666 01:26:08 -- setup/common.sh@20 -- # local mem_f mem 00:04:55.666 01:26:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.666 01:26:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:55.666 01:26:08 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:55.666 01:26:08 -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.666 01:26:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.666 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.666 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.666 01:26:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43382464 kB' 'MemAvailable: 46893892 kB' 'Buffers: 2704 kB' 'Cached: 12750504 kB' 'SwapCached: 0 kB' 'Active: 9695780 kB' 
'Inactive: 3508456 kB' 'Active(anon): 9300732 kB' 'Inactive(anon): 0 kB' 'Active(file): 395048 kB' 'Inactive(file): 3508456 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 454284 kB' 'Mapped: 201444 kB' 'Shmem: 8849704 kB' 'KReclaimable: 203092 kB' 'Slab: 587548 kB' 'SReclaimable: 203092 kB' 'SUnreclaim: 384456 kB' 'KernelStack: 12672 kB' 'PageTables: 7824 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 10446096 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196840 kB' 'VmallocChunk: 0 kB' 'Percpu: 37440 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2383452 kB' 'DirectMap2M: 20604928 kB' 'DirectMap1G: 46137344 kB' 00:04:55.666 01:26:08 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.666 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.666 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.666 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.666 01:26:08 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.666 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.666 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.666 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.666 01:26:08 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.666 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.666 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.666 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.666 01:26:08 -- setup/common.sh@32 -- 
# [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.666 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.666 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.666 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.666 01:26:08 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.666 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.666 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.666 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.666 01:26:08 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.666 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.666 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.666 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.667 01:26:08 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.667 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.667 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.667 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.667 01:26:08 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.667 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.667 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.667 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.667 01:26:08 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.667 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.667 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.667 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.667 01:26:08 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.667 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.667 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.667 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.667 01:26:08 -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.667 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.667 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.667 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.667 01:26:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.667 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.667 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.667 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.667 01:26:08 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.667 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.667 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.667 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.667 01:26:08 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.667 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.667 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.667 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.667 01:26:08 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.667 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.667 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.667 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.667 01:26:08 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.667 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.667 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.667 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.667 01:26:08 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.667 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.667 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.667 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.667 01:26:08 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:55.667 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.667 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.667 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.667 01:26:08 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.667 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.667 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.667 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.667 01:26:08 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.667 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.667 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.667 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.667 01:26:08 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.667 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.667 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.667 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.667 01:26:08 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.667 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.667 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.667 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.667 01:26:08 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.667 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.667 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.667 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.667 01:26:08 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.667 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.667 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.667 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.667 01:26:08 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.667 01:26:08 -- setup/common.sh@32 -- # 
continue 00:04:55.667 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.667 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.667 01:26:08 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.667 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.667 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.667 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.667 01:26:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.667 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.667 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.667 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.667 01:26:08 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.667 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.667 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.667 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.667 01:26:08 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.667 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.667 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.667 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.667 01:26:08 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.667 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.667 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.667 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.667 01:26:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.667 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.667 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.667 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.667 01:26:08 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.667 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.667 
01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.667 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.667 01:26:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.667 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.667 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.667 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.667 01:26:08 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.667 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.667 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.667 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.667 01:26:08 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.667 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.667 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.667 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.667 01:26:08 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.667 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.667 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.667 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.667 01:26:08 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.667 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.667 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.667 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.667 01:26:08 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.667 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.667 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.667 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.667 01:26:08 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.667 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.667 01:26:08 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:55.667 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.667 01:26:08 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.667 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.667 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.667 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.667 01:26:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.667 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.667 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.667 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.667 01:26:08 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.667 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.667 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.667 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.667 01:26:08 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.667 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.667 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.667 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.667 01:26:08 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.667 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.667 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.667 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.667 01:26:08 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.667 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.667 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.667 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.668 01:26:08 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.668 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.668 01:26:08 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:55.668 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.668 01:26:08 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.668 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.668 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.668 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.668 01:26:08 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.668 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.668 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.668 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.668 01:26:08 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.668 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.668 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.668 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.668 01:26:08 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.668 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.668 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.668 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.668 01:26:08 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.668 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.668 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.668 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.668 01:26:08 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.668 01:26:08 -- setup/common.sh@33 -- # echo 0 00:04:55.668 01:26:08 -- setup/common.sh@33 -- # return 0 00:04:55.668 01:26:08 -- setup/hugepages.sh@99 -- # surp=0 00:04:55.668 01:26:08 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:55.668 01:26:08 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:55.668 01:26:08 -- setup/common.sh@18 -- # local 
node= 00:04:55.668 01:26:08 -- setup/common.sh@19 -- # local var val 00:04:55.668 01:26:08 -- setup/common.sh@20 -- # local mem_f mem 00:04:55.668 01:26:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.668 01:26:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:55.668 01:26:08 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:55.668 01:26:08 -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.668 01:26:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.668 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.668 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.668 01:26:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43383144 kB' 'MemAvailable: 46894572 kB' 'Buffers: 2704 kB' 'Cached: 12750516 kB' 'SwapCached: 0 kB' 'Active: 9695480 kB' 'Inactive: 3508456 kB' 'Active(anon): 9300432 kB' 'Inactive(anon): 0 kB' 'Active(file): 395048 kB' 'Inactive(file): 3508456 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 453952 kB' 'Mapped: 201284 kB' 'Shmem: 8849716 kB' 'KReclaimable: 203092 kB' 'Slab: 587572 kB' 'SReclaimable: 203092 kB' 'SUnreclaim: 384480 kB' 'KernelStack: 12688 kB' 'PageTables: 7860 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 10446112 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196840 kB' 'VmallocChunk: 0 kB' 'Percpu: 37440 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2383452 kB' 'DirectMap2M: 20604928 kB' 'DirectMap1G: 46137344 kB' 00:04:55.668 01:26:08 -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.668 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.668 01:26:08 -- setup/common.sh@31 --
# IFS=': ' 00:04:55.669 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.669 01:26:08 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.669 01:26:08 -- setup/common.sh@33 -- # echo 0 00:04:55.669 01:26:08 -- setup/common.sh@33 -- # return 0 00:04:55.669 01:26:08 -- setup/hugepages.sh@100 -- # resv=0 00:04:55.669 01:26:08 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:55.669 nr_hugepages=1025 00:04:55.669 01:26:08 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:55.669 resv_hugepages=0 00:04:55.669 01:26:08 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:55.669 surplus_hugepages=0 00:04:55.669 01:26:08 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:55.669 anon_hugepages=0 00:04:55.669 01:26:08 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:55.669 01:26:08 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:55.669 01:26:08 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:55.669 01:26:08 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:55.669 01:26:08 -- setup/common.sh@18 -- # local node= 00:04:55.669 01:26:08 -- setup/common.sh@19 -- # local var val 00:04:55.669 01:26:08 -- setup/common.sh@20 -- # local mem_f mem 00:04:55.669 01:26:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.669 01:26:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:55.669 01:26:08 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:55.669 01:26:08 -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.669 01:26:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.669 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.669 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.669 01:26:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43383620 kB' 'MemAvailable: 46895048 kB' 'Buffers: 2704 kB' 'Cached: 12750520 kB' 'SwapCached: 0 kB' 
'Active: 9695200 kB' 'Inactive: 3508456 kB' 'Active(anon): 9300152 kB' 'Inactive(anon): 0 kB' 'Active(file): 395048 kB' 'Inactive(file): 3508456 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 453668 kB' 'Mapped: 201284 kB' 'Shmem: 8849720 kB' 'KReclaimable: 203092 kB' 'Slab: 587572 kB' 'SReclaimable: 203092 kB' 'SUnreclaim: 384480 kB' 'KernelStack: 12688 kB' 'PageTables: 7860 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 10446128 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196840 kB' 'VmallocChunk: 0 kB' 'Percpu: 37440 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2383452 kB' 'DirectMap2M: 20604928 kB' 'DirectMap1G: 46137344 kB' 00:04:55.669 01:26:08 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.669 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.669 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.669 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.669 01:26:08 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.669 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.669 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.669 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.669 01:26:08 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.669 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.669 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.670 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.670 
01:26:08 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.670 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.670 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.670 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.670 01:26:08 -- setup/common.sh@32 -- # [[ CmaTotal
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.671 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.671 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.671 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.671 01:26:08 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.671 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.671 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.671 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.671 01:26:08 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.671 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.671 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.671 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.671 01:26:08 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.671 01:26:08 -- setup/common.sh@33 -- # echo 1025 00:04:55.671 01:26:08 -- setup/common.sh@33 -- # return 0 00:04:55.671 01:26:08 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:55.671 01:26:08 -- setup/hugepages.sh@112 -- # get_nodes 00:04:55.671 01:26:08 -- setup/hugepages.sh@27 -- # local node 00:04:55.671 01:26:08 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:55.671 01:26:08 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:55.671 01:26:08 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:55.671 01:26:08 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:04:55.671 01:26:08 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:55.671 01:26:08 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:55.671 01:26:08 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:55.671 01:26:08 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:55.671 01:26:08 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:55.671 01:26:08 
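The trace above shows setup/common.sh's get_meminfo walking a meminfo listing field by field: it sets IFS=': ', reads var/val pairs, and hits `continue` on every field until the requested key matches, then echoes the value and returns. A minimal re-creation of that lookup pattern (simplified — it omits the `Node <n> ` prefix stripping the real helper does via mapfile, and the temp file below stands in for /sys/devices/system/node/node0/meminfo):

```shell
#!/usr/bin/env bash
# Illustrative re-creation of the get_meminfo pattern seen in the trace:
# scan a meminfo-style "Key: value [kB]" file and print the value for one key.
get_meminfo() {
    local get=$1 mem_f=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # skip every non-matching field
        echo "$val"
        return 0
    done < "$mem_f"
    return 1   # key not present
}

# Sample data standing in for a per-node meminfo file (values from the log).
sample=$(mktemp)
printf '%s\n' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' > "$sample"

get_meminfo HugePages_Surp "$sample"   # prints 0
```

The repeated `[[ Field == \H\u\g\e... ]] / continue` records in the log are exactly this loop's xtrace, one pair per meminfo field, until the match is found.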
-- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:55.671 01:26:08 -- setup/common.sh@18 -- # local node=0 00:04:55.671 01:26:08 -- setup/common.sh@19 -- # local var val 00:04:55.671 01:26:08 -- setup/common.sh@20 -- # local mem_f mem 00:04:55.671 01:26:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.671 01:26:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:55.671 01:26:08 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:55.671 01:26:08 -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.671 01:26:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.671 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.671 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.671 01:26:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 26893076 kB' 'MemUsed: 5936808 kB' 'SwapCached: 0 kB' 'Active: 3576628 kB' 'Inactive: 156516 kB' 'Active(anon): 3415336 kB' 'Inactive(anon): 0 kB' 'Active(file): 161292 kB' 'Inactive(file): 156516 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3534936 kB' 'Mapped: 71520 kB' 'AnonPages: 201340 kB' 'Shmem: 3217128 kB' 'KernelStack: 6952 kB' 'PageTables: 3292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 96944 kB' 'Slab: 324416 kB' 'SReclaimable: 96944 kB' 'SUnreclaim: 227472 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:55.671 01:26:08 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.671 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.671 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.671 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.671 01:26:08 -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.671 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.671 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.671 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.672 01:26:08 -- setup/common.sh@32 -- # continue 
00:04:55.672 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.672 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.672 01:26:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.672 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.672 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.672 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.672 01:26:08 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.672 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.672 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.672 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.672 01:26:08 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.672 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.672 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.672 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.672 01:26:08 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.672 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.672 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.672 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.672 01:26:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.672 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.672 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.672 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.672 01:26:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.672 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.672 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.672 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.672 01:26:08 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.672 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.672 01:26:08 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:55.672 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.672 01:26:08 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.672 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.672 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.672 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.672 01:26:08 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.672 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.672 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.672 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.672 01:26:08 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.672 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.672 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.672 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.672 01:26:08 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.672 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.672 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.672 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.672 01:26:08 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.672 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.672 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.672 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.672 01:26:08 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.672 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.672 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.672 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.672 01:26:08 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.672 01:26:08 -- setup/common.sh@33 -- # echo 0 00:04:55.672 01:26:08 -- 
setup/common.sh@33 -- # return 0 00:04:55.672 01:26:08 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:55.672 01:26:08 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:55.672 01:26:08 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:55.672 01:26:08 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:55.672 01:26:08 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:55.672 01:26:08 -- setup/common.sh@18 -- # local node=1 00:04:55.672 01:26:08 -- setup/common.sh@19 -- # local var val 00:04:55.672 01:26:08 -- setup/common.sh@20 -- # local mem_f mem 00:04:55.672 01:26:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.672 01:26:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:55.672 01:26:08 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:55.672 01:26:08 -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.672 01:26:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.672 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.672 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.672 01:26:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 16489788 kB' 'MemUsed: 11222036 kB' 'SwapCached: 0 kB' 'Active: 6119004 kB' 'Inactive: 3351940 kB' 'Active(anon): 5885248 kB' 'Inactive(anon): 0 kB' 'Active(file): 233756 kB' 'Inactive(file): 3351940 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9218324 kB' 'Mapped: 129764 kB' 'AnonPages: 252768 kB' 'Shmem: 5632628 kB' 'KernelStack: 5768 kB' 'PageTables: 4664 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 106148 kB' 'Slab: 263156 kB' 'SReclaimable: 106148 kB' 'SUnreclaim: 157008 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 
'HugePages_Free: 513' 'HugePages_Surp: 0' 00:04:55.672 01:26:08 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.672 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.672 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.672 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.672 01:26:08 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.672 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.672 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.672 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.672 01:26:08 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.672 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.672 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.672 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.672 01:26:08 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.672 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.672 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.672 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.672 01:26:08 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.672 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.672 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.672 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.672 01:26:08 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.672 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.672 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.672 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.672 01:26:08 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.672 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.672 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.672 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 
00:04:55.673 01:26:08 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.673 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.673 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.673 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.673 01:26:08 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.673 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.673 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.673 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.673 01:26:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.673 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.673 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.673 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.673 01:26:08 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.673 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.673 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.673 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.673 01:26:08 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.673 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.673 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.673 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.673 01:26:08 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.673 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.673 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.673 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.673 01:26:08 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.673 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.673 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.673 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.673 01:26:08 -- 
setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.673 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.673 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.673 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.673 01:26:08 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.673 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.673 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.673 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.673 01:26:08 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.673 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.673 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.673 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.673 01:26:08 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.673 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.673 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.673 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.673 01:26:08 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.673 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.673 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.673 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.673 01:26:08 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.673 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.673 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.673 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.673 01:26:08 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.673 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.673 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.673 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.673 01:26:08 -- setup/common.sh@32 -- # [[ NFS_Unstable 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.673 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.673 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.673 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.673 01:26:08 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.673 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.673 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.673 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.673 01:26:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.673 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.673 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.673 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.673 01:26:08 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.673 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.673 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.673 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.673 01:26:08 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.673 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.673 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.673 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.673 01:26:08 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.673 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.673 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.673 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.673 01:26:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.673 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.673 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.673 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.673 01:26:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:04:55.673 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.673 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.673 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.673 01:26:08 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.673 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.673 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.673 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.673 01:26:08 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.673 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.673 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.673 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.673 01:26:08 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.673 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.673 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.673 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.673 01:26:08 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.673 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.673 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.673 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.673 01:26:08 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.673 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.673 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.673 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.673 01:26:08 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.673 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.673 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.673 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.673 01:26:08 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:55.673 01:26:08 -- setup/common.sh@32 -- # continue 00:04:55.673 01:26:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.673 01:26:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.673 01:26:08 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.673 01:26:08 -- setup/common.sh@33 -- # echo 0 00:04:55.673 01:26:08 -- setup/common.sh@33 -- # return 0 00:04:55.673 01:26:08 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:55.673 01:26:08 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:55.673 01:26:08 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:55.673 01:26:08 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:55.673 01:26:08 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:04:55.673 node0=512 expecting 513 00:04:55.673 01:26:08 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:55.673 01:26:08 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:55.673 01:26:08 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:55.673 01:26:08 -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:04:55.673 node1=513 expecting 512 00:04:55.673 01:26:08 -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:04:55.673 00:04:55.673 real 0m1.418s 00:04:55.673 user 0m0.563s 00:04:55.673 sys 0m0.818s 00:04:55.673 01:26:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:55.673 01:26:08 -- common/autotest_common.sh@10 -- # set +x 00:04:55.673 ************************************ 00:04:55.673 END TEST odd_alloc 00:04:55.673 ************************************ 00:04:55.673 01:26:08 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:55.933 01:26:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:55.933 01:26:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:55.933 01:26:08 -- common/autotest_common.sh@10 -- # set +x 00:04:55.933 
************************************ 00:04:55.933 START TEST custom_alloc 00:04:55.933 ************************************ 00:04:55.933 01:26:08 -- common/autotest_common.sh@1104 -- # custom_alloc 00:04:55.933 01:26:08 -- setup/hugepages.sh@167 -- # local IFS=, 00:04:55.933 01:26:08 -- setup/hugepages.sh@169 -- # local node 00:04:55.933 01:26:08 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:55.933 01:26:08 -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:55.933 01:26:08 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:55.933 01:26:08 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:55.933 01:26:08 -- setup/hugepages.sh@49 -- # local size=1048576 00:04:55.933 01:26:08 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:55.933 01:26:08 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:55.933 01:26:08 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:55.933 01:26:08 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:55.933 01:26:08 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:55.933 01:26:08 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:55.933 01:26:08 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:55.933 01:26:08 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:55.933 01:26:08 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:55.933 01:26:08 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:55.933 01:26:08 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:55.933 01:26:08 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:55.933 01:26:08 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:55.933 01:26:08 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:55.933 01:26:08 -- setup/hugepages.sh@83 -- # : 256 00:04:55.933 01:26:08 -- setup/hugepages.sh@84 -- # : 1 00:04:55.933 01:26:08 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:55.933 01:26:08 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:55.933 01:26:08 -- 
setup/hugepages.sh@83 -- # : 0 00:04:55.933 01:26:08 -- setup/hugepages.sh@84 -- # : 0 00:04:55.933 01:26:08 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:55.933 01:26:08 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:55.933 01:26:08 -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:04:55.933 01:26:08 -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:04:55.933 01:26:08 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:55.933 01:26:08 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:55.933 01:26:08 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:55.933 01:26:08 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:55.933 01:26:08 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:55.933 01:26:08 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:55.934 01:26:08 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:55.934 01:26:08 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:55.934 01:26:08 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:55.934 01:26:08 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:55.934 01:26:08 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:55.934 01:26:08 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:55.934 01:26:08 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:55.934 01:26:08 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:55.934 01:26:08 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:55.934 01:26:08 -- setup/hugepages.sh@78 -- # return 0 00:04:55.934 01:26:08 -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:04:55.934 01:26:08 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:55.934 01:26:08 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:55.934 01:26:08 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:55.934 01:26:08 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:55.934 01:26:08 -- setup/hugepages.sh@182 
-- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:55.934 01:26:08 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:55.934 01:26:08 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:55.934 01:26:08 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:55.934 01:26:08 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:55.934 01:26:08 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:55.934 01:26:08 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:55.934 01:26:08 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:55.934 01:26:08 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:55.934 01:26:08 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:55.934 01:26:08 -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:04:55.934 01:26:08 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:55.934 01:26:08 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:55.934 01:26:08 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:55.934 01:26:08 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:04:55.934 01:26:08 -- setup/hugepages.sh@78 -- # return 0 00:04:55.934 01:26:08 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:04:55.934 01:26:08 -- setup/hugepages.sh@187 -- # setup output 00:04:55.934 01:26:08 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:55.934 01:26:08 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:56.870 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:56.870 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:56.870 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:56.870 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:56.870 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:56.870 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:56.870 0000:00:04.2 (8086 0e22): 
Already using the vfio-pci driver 00:04:56.870 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:56.870 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:56.870 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:56.870 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:56.870 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:56.870 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:56.870 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:56.870 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:56.870 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:56.870 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:57.130 01:26:10 -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:04:57.130 01:26:10 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:57.130 01:26:10 -- setup/hugepages.sh@89 -- # local node 00:04:57.130 01:26:10 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:57.130 01:26:10 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:57.130 01:26:10 -- setup/hugepages.sh@92 -- # local surp 00:04:57.130 01:26:10 -- setup/hugepages.sh@93 -- # local resv 00:04:57.130 01:26:10 -- setup/hugepages.sh@94 -- # local anon 00:04:57.130 01:26:10 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:57.130 01:26:10 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:57.130 01:26:10 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:57.130 01:26:10 -- setup/common.sh@18 -- # local node= 00:04:57.130 01:26:10 -- setup/common.sh@19 -- # local var val 00:04:57.130 01:26:10 -- setup/common.sh@20 -- # local mem_f mem 00:04:57.130 01:26:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:57.130 01:26:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:57.130 01:26:10 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:57.130 01:26:10 -- 
setup/common.sh@28 -- # mapfile -t mem 00:04:57.130 01:26:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:57.130 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.130 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.130 01:26:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 42331660 kB' 'MemAvailable: 45843124 kB' 'Buffers: 2704 kB' 'Cached: 12750600 kB' 'SwapCached: 0 kB' 'Active: 9695192 kB' 'Inactive: 3508456 kB' 'Active(anon): 9300144 kB' 'Inactive(anon): 0 kB' 'Active(file): 395048 kB' 'Inactive(file): 3508456 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 453652 kB' 'Mapped: 201416 kB' 'Shmem: 8849800 kB' 'KReclaimable: 203164 kB' 'Slab: 587740 kB' 'SReclaimable: 203164 kB' 'SUnreclaim: 384576 kB' 'KernelStack: 12640 kB' 'PageTables: 7736 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 10446444 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196840 kB' 'VmallocChunk: 0 kB' 'Percpu: 37440 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2383452 kB' 'DirectMap2M: 20604928 kB' 'DirectMap1G: 46137344 kB' 00:04:57.130 01:26:10 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.130 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.130 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.130 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.130 01:26:10 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.130 01:26:10 -- setup/common.sh@32 -- # 
continue 00:04:57.130 01:26:10 [trace elided: per-key scan of /proc/meminfo for AnonHugePages -- repeated "setup/common.sh@31 # IFS=': '; read -r var val _" / "setup/common.sh@32 # continue" iterations over every non-matching key (MemAvailable, Buffers, Cached, ... HardwareCorrupted)] 00:04:57.131 01:26:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.131 01:26:10 -- setup/common.sh@33 -- # echo 0 00:04:57.131 01:26:10 -- setup/common.sh@33 -- # return 0 00:04:57.131 01:26:10 -- setup/hugepages.sh@97 -- # anon=0 00:04:57.131 01:26:10 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:57.131 01:26:10 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:57.131 01:26:10 -- setup/common.sh@18 -- # local node= 00:04:57.131 01:26:10 -- setup/common.sh@19 -- # local var val 00:04:57.131 01:26:10 -- setup/common.sh@20 -- # local mem_f mem 00:04:57.131 01:26:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:57.131 01:26:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:57.131 01:26:10 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:57.131 01:26:10 -- setup/common.sh@28 -- # mapfile -t mem 00:04:57.131 01:26:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:57.131 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.131 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.132 01:26:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 42333424 kB' 'MemAvailable: 45844876 kB'
'Buffers: 2704 kB' 'Cached: 12750604 kB' 'SwapCached: 0 kB' 'Active: 9695808 kB' 'Inactive: 3508456 kB' 'Active(anon): 9300760 kB' 'Inactive(anon): 0 kB' 'Active(file): 395048 kB' 'Inactive(file): 3508456 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 454308 kB' 'Mapped: 201436 kB' 'Shmem: 8849804 kB' 'KReclaimable: 203140 kB' 'Slab: 587772 kB' 'SReclaimable: 203140 kB' 'SUnreclaim: 384632 kB' 'KernelStack: 12656 kB' 'PageTables: 7752 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 10446456 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196824 kB' 'VmallocChunk: 0 kB' 'Percpu: 37440 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2383452 kB' 'DirectMap2M: 20604928 kB' 'DirectMap1G: 46137344 kB' 00:04:57.132 01:26:10 [trace elided: per-key scan of the cached meminfo for HugePages_Surp -- repeated "continue" iterations over every non-matching key] 00:04:57.133 01:26:10 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.133 01:26:10 -- setup/common.sh@33 -- # echo 0 00:04:57.133 01:26:10 -- setup/common.sh@33 -- # return 0 00:04:57.133 01:26:10 -- setup/hugepages.sh@99 -- # surp=0 00:04:57.133 01:26:10 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:57.133 01:26:10 -- setup/common.sh@17 -- # local
get=HugePages_Rsvd 00:04:57.133 01:26:10 -- setup/common.sh@18 -- # local node= 00:04:57.133 01:26:10 -- setup/common.sh@19 -- # local var val 00:04:57.133 01:26:10 -- setup/common.sh@20 -- # local mem_f mem 00:04:57.133 01:26:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:57.133 01:26:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:57.133 01:26:10 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:57.133 01:26:10 -- setup/common.sh@28 -- # mapfile -t mem 00:04:57.133 01:26:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:57.133 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.133 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.133 01:26:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 42333980 kB' 'MemAvailable: 45845432 kB' 'Buffers: 2704 kB' 'Cached: 12750616 kB' 'SwapCached: 0 kB' 'Active: 9695612 kB' 'Inactive: 3508456 kB' 'Active(anon): 9300564 kB' 'Inactive(anon): 0 kB' 'Active(file): 395048 kB' 'Inactive(file): 3508456 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 454040 kB' 'Mapped: 201292 kB' 'Shmem: 8849816 kB' 'KReclaimable: 203140 kB' 'Slab: 587764 kB' 'SReclaimable: 203140 kB' 'SUnreclaim: 384624 kB' 'KernelStack: 12624 kB' 'PageTables: 7636 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 10446472 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196808 kB' 'VmallocChunk: 0 kB' 'Percpu: 37440 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2383452 kB' 'DirectMap2M: 
20604928 kB' 'DirectMap1G: 46137344 kB' 00:04:57.133 01:26:10 [trace elided: per-key scan of the cached meminfo for HugePages_Rsvd -- repeated "continue" iterations over non-matching keys; scan continues]
setup/common.sh@32 -- # continue 00:04:57.134 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.134 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.134 01:26:10 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.134 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.134 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.134 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.134 01:26:10 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.134 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.134 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.134 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.134 01:26:10 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.134 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.134 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.134 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.134 01:26:10 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.134 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.134 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.134 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.134 01:26:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.134 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.134 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.134 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.134 01:26:10 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.134 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.134 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.134 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.134 01:26:10 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.134 01:26:10 -- 
setup/common.sh@32 -- # continue 00:04:57.134 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.134 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.134 01:26:10 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.134 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.134 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.134 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.134 01:26:10 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.134 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.134 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.134 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.134 01:26:10 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.134 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.134 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.134 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.134 01:26:10 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.134 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.134 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.134 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.134 01:26:10 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.134 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.134 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.134 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.134 01:26:10 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.134 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.134 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.134 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.134 01:26:10 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.134 01:26:10 -- setup/common.sh@32 -- 
# continue 00:04:57.134 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.134 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.134 01:26:10 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.134 01:26:10 -- setup/common.sh@33 -- # echo 0 00:04:57.134 01:26:10 -- setup/common.sh@33 -- # return 0 00:04:57.134 01:26:10 -- setup/hugepages.sh@100 -- # resv=0 00:04:57.134 01:26:10 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:04:57.134 nr_hugepages=1536 00:04:57.134 01:26:10 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:57.134 resv_hugepages=0 00:04:57.134 01:26:10 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:57.134 surplus_hugepages=0 00:04:57.134 01:26:10 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:57.134 anon_hugepages=0 00:04:57.134 01:26:10 -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:57.134 01:26:10 -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:04:57.134 01:26:10 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:57.134 01:26:10 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:57.134 01:26:10 -- setup/common.sh@18 -- # local node= 00:04:57.134 01:26:10 -- setup/common.sh@19 -- # local var val 00:04:57.134 01:26:10 -- setup/common.sh@20 -- # local mem_f mem 00:04:57.135 01:26:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:57.135 01:26:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:57.135 01:26:10 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:57.135 01:26:10 -- setup/common.sh@28 -- # mapfile -t mem 00:04:57.135 01:26:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:57.135 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.135 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.135 01:26:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 42333480 kB' 'MemAvailable: 45844932 kB' 
'Buffers: 2704 kB' 'Cached: 12750628 kB' 'SwapCached: 0 kB' 'Active: 9695528 kB' 'Inactive: 3508456 kB' 'Active(anon): 9300480 kB' 'Inactive(anon): 0 kB' 'Active(file): 395048 kB' 'Inactive(file): 3508456 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 454000 kB' 'Mapped: 201292 kB' 'Shmem: 8849828 kB' 'KReclaimable: 203140 kB' 'Slab: 587764 kB' 'SReclaimable: 203140 kB' 'SUnreclaim: 384624 kB' 'KernelStack: 12608 kB' 'PageTables: 7588 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 10446488 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196808 kB' 'VmallocChunk: 0 kB' 'Percpu: 37440 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2383452 kB' 'DirectMap2M: 20604928 kB' 'DirectMap1G: 46137344 kB' 00:04:57.135 01:26:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.135 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.135 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.135 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.135 01:26:10 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.135 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.135 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.135 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.135 01:26:10 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.135 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.135 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.135 01:26:10 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:57.135 01:26:10 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.135 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.135 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.135 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.135 01:26:10 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.135 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.135 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.135 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.135 01:26:10 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.135 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.135 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.135 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.135 01:26:10 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.135 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.135 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.135 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.135 01:26:10 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.135 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.135 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.135 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.135 01:26:10 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.135 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.135 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.135 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.135 01:26:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.135 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.135 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.135 01:26:10 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:57.135 01:26:10 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.135 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.135 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.135 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.135 01:26:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.135 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.135 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.135 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.135 01:26:10 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.135 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.135 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.135 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.135 01:26:10 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.135 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.135 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.135 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.135 01:26:10 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.135 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.135 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.135 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.135 01:26:10 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.135 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.135 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.135 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.135 01:26:10 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.135 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.135 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.135 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 
00:04:57.135 01:26:10 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.135 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.135 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.135 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.135 01:26:10 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.135 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.135 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.135 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.135 01:26:10 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.135 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.135 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.135 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.135 01:26:10 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.135 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.135 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.135 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.135 01:26:10 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.135 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.135 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.135 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.135 01:26:10 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.135 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.135 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.135 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.135 01:26:10 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.135 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.135 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.135 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.135 01:26:10 -- 
setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.135 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.135 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.135 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.135 01:26:10 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.135 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.135 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.136 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.136 01:26:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.136 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.136 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.136 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.136 01:26:10 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.136 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.136 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.136 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.136 01:26:10 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.136 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.136 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.136 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.136 01:26:10 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.136 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.136 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.136 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.136 01:26:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.136 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.136 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.136 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.136 01:26:10 -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.136 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.136 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.136 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.136 01:26:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.136 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.136 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.136 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.136 01:26:10 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.136 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.136 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.136 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.136 01:26:10 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.136 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.136 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.136 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.136 01:26:10 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.136 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.136 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.136 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.136 01:26:10 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.136 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.136 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.136 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.136 01:26:10 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.136 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.136 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.136 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.136 01:26:10 -- 
setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.136 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.136 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.136 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.136 01:26:10 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.136 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.136 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.136 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.136 01:26:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.136 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.136 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.136 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.136 01:26:10 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.136 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.136 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.136 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.136 01:26:10 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.136 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.136 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.136 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.136 01:26:10 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.136 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.136 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.136 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.136 01:26:10 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.136 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.136 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.136 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.136 01:26:10 
-- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.136 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.136 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.136 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.136 01:26:10 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.136 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.136 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.136 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.136 01:26:10 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.136 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.136 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.136 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.136 01:26:10 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:57.136 01:26:10 -- setup/common.sh@33 -- # echo 1536 00:04:57.136 01:26:10 -- setup/common.sh@33 -- # return 0 00:04:57.136 01:26:10 -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:57.136 01:26:10 -- setup/hugepages.sh@112 -- # get_nodes 00:04:57.136 01:26:10 -- setup/hugepages.sh@27 -- # local node 00:04:57.136 01:26:10 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:57.136 01:26:10 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:57.136 01:26:10 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:57.136 01:26:10 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:57.136 01:26:10 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:57.136 01:26:10 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:57.136 01:26:10 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:57.136 01:26:10 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:57.136 01:26:10 -- setup/hugepages.sh@117 -- # 
get_meminfo HugePages_Surp 0 00:04:57.136 01:26:10 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:57.136 01:26:10 -- setup/common.sh@18 -- # local node=0 00:04:57.136 01:26:10 -- setup/common.sh@19 -- # local var val 00:04:57.136 01:26:10 -- setup/common.sh@20 -- # local mem_f mem 00:04:57.136 01:26:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:57.136 01:26:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:57.136 01:26:10 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:57.136 01:26:10 -- setup/common.sh@28 -- # mapfile -t mem 00:04:57.136 01:26:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:57.136 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.136 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.136 01:26:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 26897236 kB' 'MemUsed: 5932648 kB' 'SwapCached: 0 kB' 'Active: 3577276 kB' 'Inactive: 156516 kB' 'Active(anon): 3415984 kB' 'Inactive(anon): 0 kB' 'Active(file): 161292 kB' 'Inactive(file): 156516 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3534996 kB' 'Mapped: 71528 kB' 'AnonPages: 201952 kB' 'Shmem: 3217188 kB' 'KernelStack: 6968 kB' 'PageTables: 3204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 96992 kB' 'Slab: 324580 kB' 'SReclaimable: 96992 kB' 'SUnreclaim: 227588 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:57.136 01:26:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.136 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.136 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.136 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.136 01:26:10 -- 
setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.136 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.136 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.136 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.136 01:26:10 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.136 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.136 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.136 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.136 01:26:10 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.136 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.136 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.136 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.136 01:26:10 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.136 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.136 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.136 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.136 01:26:10 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.136 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.136 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.136 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.136 01:26:10 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.136 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.136 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.136 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.137 01:26:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.137 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.137 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.137 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.137 01:26:10 -- setup/common.sh@32 -- # [[ 
Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.137 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.137 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.137 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.137 01:26:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.137 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.137 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.137 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.137 01:26:10 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.137 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.137 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.137 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.137 01:26:10 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.137 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.137 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.137 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.137 01:26:10 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.137 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.137 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.137 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.137 01:26:10 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.137 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.137 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.137 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.137 01:26:10 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.137 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.137 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.137 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.137 01:26:10 -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.137 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.137 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.137 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.137 01:26:10 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.137 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.137 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.137 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.137 01:26:10 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.137 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.137 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.137 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.137 01:26:10 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.137 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.137 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.137 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.137 01:26:10 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.137 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.137 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.137 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.137 01:26:10 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.137 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.137 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.137 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.137 01:26:10 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.137 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.137 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.137 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.137 01:26:10 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:57.137 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.137 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.137 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.396 01:26:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.396 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.396 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.396 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.396 01:26:10 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.396 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.396 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.396 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.396 01:26:10 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.396 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.396 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.396 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.396 01:26:10 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.396 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.396 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.396 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.396 01:26:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.396 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.396 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.396 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.396 01:26:10 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.396 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.396 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.396 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.396 01:26:10 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.396 01:26:10 -- 
setup/common.sh@32 -- # continue 00:04:57.396 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.396 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.396 01:26:10 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.396 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.396 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.396 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.396 01:26:10 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.396 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.396 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.396 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.396 01:26:10 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.396 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.396 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.396 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.396 01:26:10 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.396 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.396 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.396 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.396 01:26:10 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.396 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.396 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.396 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.396 01:26:10 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.396 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.396 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.396 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.396 01:26:10 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.396 01:26:10 -- 
setup/common.sh@33 -- # echo 0 00:04:57.396 01:26:10 -- setup/common.sh@33 -- # return 0 00:04:57.396 01:26:10 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:57.396 01:26:10 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:57.396 01:26:10 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:57.396 01:26:10 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:57.396 01:26:10 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:57.396 01:26:10 -- setup/common.sh@18 -- # local node=1 00:04:57.396 01:26:10 -- setup/common.sh@19 -- # local var val 00:04:57.396 01:26:10 -- setup/common.sh@20 -- # local mem_f mem 00:04:57.396 01:26:10 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:57.396 01:26:10 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:57.396 01:26:10 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:57.396 01:26:10 -- setup/common.sh@28 -- # mapfile -t mem 00:04:57.396 01:26:10 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:57.396 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.396 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.396 01:26:10 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 15436244 kB' 'MemUsed: 12275580 kB' 'SwapCached: 0 kB' 'Active: 6118412 kB' 'Inactive: 3351940 kB' 'Active(anon): 5884656 kB' 'Inactive(anon): 0 kB' 'Active(file): 233756 kB' 'Inactive(file): 3351940 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9218352 kB' 'Mapped: 129764 kB' 'AnonPages: 252164 kB' 'Shmem: 5632656 kB' 'KernelStack: 5688 kB' 'PageTables: 4528 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 106148 kB' 'Slab: 263188 kB' 'SReclaimable: 106148 kB' 'SUnreclaim: 157040 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 
kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:57.396 01:26:10 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.396 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.396 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.396 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.396 01:26:10 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.396 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.396 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.396 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.396 01:26:10 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.396 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.396 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.397 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.397 01:26:10 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.397 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.397 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.397 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.397 01:26:10 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.397 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.397 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.397 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.397 01:26:10 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.397 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.397 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.397 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.397 01:26:10 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.397 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.397 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.397 01:26:10 
-- setup/common.sh@31 -- # read -r var val _ 00:04:57.397 01:26:10 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.397 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.397 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.397 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.397 01:26:10 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.397 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.397 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.397 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.397 01:26:10 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.397 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.397 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.397 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.397 01:26:10 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.397 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.397 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.397 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.397 01:26:10 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.397 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.397 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.397 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.397 01:26:10 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.397 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.397 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.397 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.397 01:26:10 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.397 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.397 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.397 01:26:10 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:57.397 01:26:10 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.397 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.397 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.397 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.397 01:26:10 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.397 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.397 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.397 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.397 01:26:10 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.397 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.397 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.397 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.397 01:26:10 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.397 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.397 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.397 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.397 01:26:10 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.397 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.397 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.397 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.397 01:26:10 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.397 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.397 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.397 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.397 01:26:10 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.397 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.397 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.397 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.397 01:26:10 
-- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.397 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.397 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.397 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.397 01:26:10 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.397 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.397 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.397 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.397 01:26:10 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.397 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.397 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.397 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.397 01:26:10 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.397 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.397 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.397 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.397 01:26:10 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.397 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.397 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.397 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.397 01:26:10 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.397 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.397 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.397 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.397 01:26:10 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.397 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.397 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.397 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.397 01:26:10 -- setup/common.sh@32 -- # [[ 
AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.397 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.397 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.397 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.397 01:26:10 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.397 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.397 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.397 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.397 01:26:10 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.397 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.397 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.397 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.397 01:26:10 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.397 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.397 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.397 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.397 01:26:10 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.397 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.397 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.397 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.397 01:26:10 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.397 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.397 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.397 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.397 01:26:10 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.397 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.397 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.397 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.397 01:26:10 -- setup/common.sh@32 -- # [[ 
HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.397 01:26:10 -- setup/common.sh@32 -- # continue 00:04:57.397 01:26:10 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.397 01:26:10 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.397 01:26:10 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.397 01:26:10 -- setup/common.sh@33 -- # echo 0 00:04:57.397 01:26:10 -- setup/common.sh@33 -- # return 0 00:04:57.397 01:26:10 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:57.397 01:26:10 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:57.397 01:26:10 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:57.397 01:26:10 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:57.397 01:26:10 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:57.397 node0=512 expecting 512 00:04:57.397 01:26:10 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:57.397 01:26:10 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:57.397 01:26:10 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:57.397 01:26:10 -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:04:57.397 node1=1024 expecting 1024 00:04:57.397 01:26:10 -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:04:57.397 00:04:57.397 real 0m1.493s 00:04:57.397 user 0m0.616s 00:04:57.397 sys 0m0.843s 00:04:57.397 01:26:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:57.397 01:26:10 -- common/autotest_common.sh@10 -- # set +x 00:04:57.397 ************************************ 00:04:57.397 END TEST custom_alloc 00:04:57.397 ************************************ 00:04:57.398 01:26:10 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:57.398 01:26:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:57.398 01:26:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:57.398 01:26:10 
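The xtrace output above is one pass of the `get_meminfo` helper from `setup/common.sh`: it reads `/proc/meminfo` (or a per-node `/sys/devices/system/node/nodeN/meminfo`, whose lines carry a `Node N ` prefix that gets stripped), then scans field by field, `continue`-ing past every key until it reaches the requested one (here `HugePages_Surp`). A minimal standalone sketch of that pattern — `get_meminfo_field` and the synthetic input file are hypothetical names for illustration, not part of the SPDK scripts — looks like this:

```shell
#!/usr/bin/env bash
shopt -s extglob   # needed for the +([0-9]) prefix-strip pattern below

# Sketch of the field-extraction loop seen in the xtrace above: read a
# meminfo-style file, strip any per-node "Node N " prefix, and echo the
# value of one requested field (0 if the field is absent).
get_meminfo_field() {
    local get=$1 mem_f=$2
    local -a mem
    local line var val _
    mapfile -t mem < "$mem_f"
    # Per-node meminfo files prefix every line with "Node <n> "; strip it.
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        # Split "Key: value kB" on colon/space, as the real helper does.
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    echo 0
}

# Example against a synthetic per-node meminfo fragment:
tmp=$(mktemp)
printf '%s\n' 'Node 1 MemTotal: 27711824 kB' 'Node 1 HugePages_Surp: 0' > "$tmp"
get_meminfo_field MemTotal "$tmp"    # prints 27711824
rm -f "$tmp"
```

The linear scan is why the log shows one `[[ … ]] … continue` pair per meminfo key: under `set -x`, every skipped field produces its own trace lines before the match finally hits `echo 0` / `return 0`.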
-- common/autotest_common.sh@10 -- # set +x 00:04:57.398 ************************************ 00:04:57.398 START TEST no_shrink_alloc 00:04:57.398 ************************************ 00:04:57.398 01:26:10 -- common/autotest_common.sh@1104 -- # no_shrink_alloc 00:04:57.398 01:26:10 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:57.398 01:26:10 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:57.398 01:26:10 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:57.398 01:26:10 -- setup/hugepages.sh@51 -- # shift 00:04:57.398 01:26:10 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:57.398 01:26:10 -- setup/hugepages.sh@52 -- # local node_ids 00:04:57.398 01:26:10 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:57.398 01:26:10 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:57.398 01:26:10 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:57.398 01:26:10 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:57.398 01:26:10 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:57.398 01:26:10 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:57.398 01:26:10 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:57.398 01:26:10 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:57.398 01:26:10 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:57.398 01:26:10 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:57.398 01:26:10 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:57.398 01:26:10 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:57.398 01:26:10 -- setup/hugepages.sh@73 -- # return 0 00:04:57.398 01:26:10 -- setup/hugepages.sh@198 -- # setup output 00:04:57.398 01:26:10 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:57.398 01:26:10 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:58.333 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:58.333 0000:88:00.0 (8086 0a54): Already 
using the vfio-pci driver 00:04:58.333 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:58.333 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:58.333 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:58.333 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:58.333 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:58.333 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:58.333 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:58.333 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:58.333 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:58.333 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:58.333 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:58.333 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:58.333 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:58.333 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:58.333 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:58.595 01:26:11 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:58.595 01:26:11 -- setup/hugepages.sh@89 -- # local node 00:04:58.595 01:26:11 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:58.595 01:26:11 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:58.595 01:26:11 -- setup/hugepages.sh@92 -- # local surp 00:04:58.595 01:26:11 -- setup/hugepages.sh@93 -- # local resv 00:04:58.595 01:26:11 -- setup/hugepages.sh@94 -- # local anon 00:04:58.595 01:26:11 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:58.595 01:26:11 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:58.595 01:26:11 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:58.595 01:26:11 -- setup/common.sh@18 -- # local node= 00:04:58.595 01:26:11 -- setup/common.sh@19 -- # local var val 00:04:58.595 01:26:11 -- setup/common.sh@20 
-- # local mem_f mem 00:04:58.595 01:26:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:58.595 01:26:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:58.595 01:26:11 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:58.595 01:26:11 -- setup/common.sh@28 -- # mapfile -t mem 00:04:58.595 01:26:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:58.595 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.595 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.595 01:26:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43344300 kB' 'MemAvailable: 46855752 kB' 'Buffers: 2704 kB' 'Cached: 12750688 kB' 'SwapCached: 0 kB' 'Active: 9696148 kB' 'Inactive: 3508456 kB' 'Active(anon): 9301100 kB' 'Inactive(anon): 0 kB' 'Active(file): 395048 kB' 'Inactive(file): 3508456 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 454456 kB' 'Mapped: 201492 kB' 'Shmem: 8849888 kB' 'KReclaimable: 203140 kB' 'Slab: 587848 kB' 'SReclaimable: 203140 kB' 'SUnreclaim: 384708 kB' 'KernelStack: 12736 kB' 'PageTables: 7872 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10446536 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196920 kB' 'VmallocChunk: 0 kB' 'Percpu: 37440 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2383452 kB' 'DirectMap2M: 20604928 kB' 'DirectMap1G: 46137344 kB' 00:04:58.595 01:26:11 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.595 01:26:11 -- setup/common.sh@32 -- # 
continue 00:04:58.595 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.595 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.595 01:26:11 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.595 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.595 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.595 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.595 01:26:11 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.595 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.595 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.595 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.595 01:26:11 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.595 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.595 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.595 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.595 01:26:11 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.595 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.595 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.595 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.595 01:26:11 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.595 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.595 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.595 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.595 01:26:11 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.595 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.595 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.595 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.595 01:26:11 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.595 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.595 01:26:11 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:58.595 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.595 01:26:11 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.595 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.595 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.595 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.595 01:26:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.595 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.595 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.595 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.595 01:26:11 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.595 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.595 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.595 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.595 01:26:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.595 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.595 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.595 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.595 01:26:11 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.595 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.595 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.595 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.595 01:26:11 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.595 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.595 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.595 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.595 01:26:11 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.595 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.595 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.595 01:26:11 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:58.595 01:26:11 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.595 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.595 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.595 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.595 01:26:11 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.595 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.595 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.595 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.595 01:26:11 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.595 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.595 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.595 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.595 01:26:11 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.595 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.595 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.595 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.595 01:26:11 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.595 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.595 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.595 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.595 01:26:11 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.595 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.595 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.595 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.595 01:26:11 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.595 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.595 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.595 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.595 01:26:11 -- 
setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.595 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.595 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.595 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.595 01:26:11 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.595 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.595 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.595 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.595 01:26:11 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.595 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.595 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.595 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.595 01:26:11 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.595 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.595 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.595 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.595 01:26:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.595 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.595 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.596 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.596 01:26:11 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.596 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.596 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.596 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.596 01:26:11 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.596 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.596 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.596 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.596 01:26:11 -- setup/common.sh@32 -- # [[ SecPageTables == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.596 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.596 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.596 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.596 01:26:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.596 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.596 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.596 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.596 01:26:11 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.596 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.596 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.596 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.596 01:26:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.596 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.596 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.596 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.596 01:26:11 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.596 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.596 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.596 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.596 01:26:11 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.596 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.596 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.596 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.596 01:26:11 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.596 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.596 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.596 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.596 01:26:11 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:58.596 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.596 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.596 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.596 01:26:11 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.596 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.596 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.596 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.596 01:26:11 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.596 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.596 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.596 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.596 01:26:11 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.596 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.596 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.596 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.596 01:26:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.596 01:26:11 -- setup/common.sh@33 -- # echo 0 00:04:58.596 01:26:11 -- setup/common.sh@33 -- # return 0 00:04:58.596 01:26:11 -- setup/hugepages.sh@97 -- # anon=0 00:04:58.596 01:26:11 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:58.596 01:26:11 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:58.596 01:26:11 -- setup/common.sh@18 -- # local node= 00:04:58.596 01:26:11 -- setup/common.sh@19 -- # local var val 00:04:58.596 01:26:11 -- setup/common.sh@20 -- # local mem_f mem 00:04:58.596 01:26:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:58.596 01:26:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:58.596 01:26:11 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:58.596 01:26:11 -- setup/common.sh@28 -- # mapfile -t mem 00:04:58.596 01:26:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node 
+([0-9]) }") 00:04:58.596 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.596 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.596 01:26:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43344944 kB' 'MemAvailable: 46856396 kB' 'Buffers: 2704 kB' 'Cached: 12750692 kB' 'SwapCached: 0 kB' 'Active: 9698192 kB' 'Inactive: 3508456 kB' 'Active(anon): 9303144 kB' 'Inactive(anon): 0 kB' 'Active(file): 395048 kB' 'Inactive(file): 3508456 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 456540 kB' 'Mapped: 201884 kB' 'Shmem: 8849892 kB' 'KReclaimable: 203140 kB' 'Slab: 587880 kB' 'SReclaimable: 203140 kB' 'SUnreclaim: 384740 kB' 'KernelStack: 12784 kB' 'PageTables: 8044 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10448164 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196872 kB' 'VmallocChunk: 0 kB' 'Percpu: 37440 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2383452 kB' 'DirectMap2M: 20604928 kB' 'DirectMap1G: 46137344 kB' 00:04:58.596 01:26:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.596 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.596 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.596 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.596 01:26:11 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.596 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.596 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.596 01:26:11 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:58.596 01:26:11 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.596 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.596 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.596 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.596 01:26:11 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.596 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.596 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.596 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.596 01:26:11 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.596 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.596 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.596 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.596 01:26:11 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.596 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.596 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.596 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.596 01:26:11 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.596 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.596 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.596 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.596 01:26:11 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.596 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.596 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.596 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.596 01:26:11 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.596 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.596 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.596 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 
00:04:58.596 01:26:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.596 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.596 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.596 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.596 01:26:11 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.596 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.596 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.596 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.596 01:26:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.596 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.596 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.596 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.596 01:26:11 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.596 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.596 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.596 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.596 01:26:11 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.596 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.596 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.596 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.596 01:26:11 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.596 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.596 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.596 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.596 01:26:11 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.596 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.596 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.596 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.596 01:26:11 -- 
setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.596 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.596 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.596 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.596 01:26:11 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.597 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.597 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.597 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.597 01:26:11 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.597 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.597 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.597 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.597 01:26:11 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.597 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.597 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.597 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.597 01:26:11 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.597 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.597 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.597 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.597 01:26:11 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.597 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.597 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.597 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.597 01:26:11 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.597 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.597 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.597 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.597 01:26:11 -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.597 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.597 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.597 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.597 01:26:11 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.597 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.597 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.597 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.597 01:26:11 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.597 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.597 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.597 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.597 01:26:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.597 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.597 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.597 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.597 01:26:11 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.597 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.597 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.597 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.597 01:26:11 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.597 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.597 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.597 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.597 01:26:11 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.597 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.597 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.597 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.597 01:26:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:04:58.597 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.597 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.597 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.597 01:26:11 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.597 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.597 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.597 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.597 01:26:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.597 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.597 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.597 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.597 01:26:11 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.597 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.597 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.597 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.597 01:26:11 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.597 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.597 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.597 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.597 01:26:11 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.597 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.597 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.597 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.597 01:26:11 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.597 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.597 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.597 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.597 01:26:11 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.597 01:26:11 
-- setup/common.sh@32 -- # continue 00:04:58.597 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.597 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.597 01:26:11 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.597 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.597 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.597 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.597 01:26:11 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.597 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.597 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.597 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.597 01:26:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.597 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.597 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.597 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.597 01:26:11 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.597 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.597 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.597 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.597 01:26:11 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.597 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.597 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.597 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.597 01:26:11 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.597 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.597 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.597 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.597 01:26:11 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.597 01:26:11 -- 
setup/common.sh@32 -- # continue 00:04:58.597 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.597 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.597 01:26:11 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.597 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.597 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.597 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.597 01:26:11 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.597 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.597 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.597 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.597 01:26:11 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.597 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.597 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.597 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.597 01:26:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.597 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.597 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.597 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.597 01:26:11 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.597 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.597 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.597 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.597 01:26:11 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.597 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.597 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.597 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.597 01:26:11 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.597 01:26:11 -- setup/common.sh@33 
-- # echo 0 00:04:58.597 01:26:11 -- setup/common.sh@33 -- # return 0 00:04:58.597 01:26:11 -- setup/hugepages.sh@99 -- # surp=0 00:04:58.597 01:26:11 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:58.597 01:26:11 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:58.597 01:26:11 -- setup/common.sh@18 -- # local node= 00:04:58.597 01:26:11 -- setup/common.sh@19 -- # local var val 00:04:58.597 01:26:11 -- setup/common.sh@20 -- # local mem_f mem 00:04:58.597 01:26:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:58.597 01:26:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:58.597 01:26:11 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:58.597 01:26:11 -- setup/common.sh@28 -- # mapfile -t mem 00:04:58.597 01:26:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:58.597 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.597 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.598 01:26:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43344276 kB' 'MemAvailable: 46855728 kB' 'Buffers: 2704 kB' 'Cached: 12750704 kB' 'SwapCached: 0 kB' 'Active: 9700544 kB' 'Inactive: 3508456 kB' 'Active(anon): 9305496 kB' 'Inactive(anon): 0 kB' 'Active(file): 395048 kB' 'Inactive(file): 3508456 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 458896 kB' 'Mapped: 201812 kB' 'Shmem: 8849904 kB' 'KReclaimable: 203140 kB' 'Slab: 587880 kB' 'SReclaimable: 203140 kB' 'SUnreclaim: 384740 kB' 'KernelStack: 12784 kB' 'PageTables: 8008 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10451348 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196840 kB' 'VmallocChunk: 0 kB' 'Percpu: 37440 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 
'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2383452 kB' 'DirectMap2M: 20604928 kB' 'DirectMap1G: 46137344 kB' 00:04:58.598 01:26:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.598 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.598 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.598 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.598 01:26:11 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.598 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.598 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.598 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.598 01:26:11 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.598 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.598 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.598 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.598 01:26:11 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.598 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.598 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.598 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.598 01:26:11 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.598 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.598 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.598 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.598 01:26:11 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.598 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.598 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.598 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.598 
01:26:11 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.598 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.598 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.598 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.598 01:26:11 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.598 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.598 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.598 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.598 01:26:11 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.598 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.598 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.598 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.598 01:26:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.598 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.598 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.598 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.598 01:26:11 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.598 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.598 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.598 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.598 01:26:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.598 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.598 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.598 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.598 01:26:11 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.598 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.598 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.598 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.598 01:26:11 -- 
setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.598 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.598 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.598 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.598 01:26:11 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.598 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.598 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.598 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.598 01:26:11 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.598 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.598 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.598 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.598 01:26:11 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.598 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.598 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.598 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.598 01:26:11 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.598 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.598 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.598 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.598 01:26:11 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.598 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.598 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.598 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.598 01:26:11 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.598 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.598 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.598 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.598 01:26:11 -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.598 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.598 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.598 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.598 01:26:11 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.598 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.598 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.598 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.598 01:26:11 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.598 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.598 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.598 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.598 01:26:11 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.598 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.598 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.598 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.598 01:26:11 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.598 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.598 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.598 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.598 01:26:11 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.598 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.598 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.598 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.598 01:26:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.598 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.598 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.598 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.598 01:26:11 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:58.598 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.598 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.598 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.598 01:26:11 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.598 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.598 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.598 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.598 01:26:11 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.598 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.598 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.598 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.598 01:26:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.598 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.598 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.598 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.598 01:26:11 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.598 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.598 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.598 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.598 01:26:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.598 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.598 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.598 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.598 01:26:11 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.598 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.598 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.598 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.598 01:26:11 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.598 01:26:11 -- 
setup/common.sh@32 -- # continue 00:04:58.598 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.598 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.598 01:26:11 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.598 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.598 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.598 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.598 01:26:11 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.598 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.598 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.599 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.599 01:26:11 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.599 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.599 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.599 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.599 01:26:11 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.599 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.599 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.599 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.599 01:26:11 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.599 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.599 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.599 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.599 01:26:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.599 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.599 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.599 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.599 01:26:11 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.599 01:26:11 -- 
setup/common.sh@32 -- # continue 00:04:58.599 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.599 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.599 01:26:11 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.599 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.599 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.599 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.599 01:26:11 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.599 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.599 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.599 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.599 01:26:11 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.599 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.599 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.599 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.599 01:26:11 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.599 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.599 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.599 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.599 01:26:11 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.599 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.599 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.599 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.599 01:26:11 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.599 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.599 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.599 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.599 01:26:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.599 01:26:11 -- setup/common.sh@32 -- 
# continue 00:04:58.599 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.599 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.599 01:26:11 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.599 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.599 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.599 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.599 01:26:11 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:58.599 01:26:11 -- setup/common.sh@33 -- # echo 0 00:04:58.599 01:26:11 -- setup/common.sh@33 -- # return 0 00:04:58.599 01:26:11 -- setup/hugepages.sh@100 -- # resv=0 00:04:58.599 01:26:11 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:58.599 nr_hugepages=1024 00:04:58.599 01:26:11 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:58.599 resv_hugepages=0 00:04:58.599 01:26:11 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:58.599 surplus_hugepages=0 00:04:58.599 01:26:11 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:58.599 anon_hugepages=0 00:04:58.599 01:26:11 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:58.599 01:26:11 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:58.599 01:26:11 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:58.599 01:26:11 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:58.599 01:26:11 -- setup/common.sh@18 -- # local node= 00:04:58.599 01:26:11 -- setup/common.sh@19 -- # local var val 00:04:58.599 01:26:11 -- setup/common.sh@20 -- # local mem_f mem 00:04:58.599 01:26:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:58.599 01:26:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:58.599 01:26:11 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:58.599 01:26:11 -- setup/common.sh@28 -- # mapfile -t mem 00:04:58.599 01:26:11 -- setup/common.sh@29 -- # 
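The trace above is `get_meminfo` scanning `/proc/meminfo` one `Key: value` pair at a time, `continue`-ing past every key that is not the requested one (here `HugePages_Rsvd`) and echoing the value when it matches. A minimal standalone sketch of that pattern follows; the helper name `get_meminfo_field` is illustrative only, and the demo reads a sample snapshot instead of the live `/proc/meminfo` so the result is deterministic:

```shell
# Sketch of the get_meminfo pattern in the trace: split each line on
# ': ' into key and value, skip non-matching keys, echo the match.
get_meminfo_field() {
    local get=$1 mem_f=${2:-/proc/meminfo}
    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < "$mem_f"
    return 1
}

# Deterministic demo against a sample snapshot, not the live system:
printf '%s\n' 'HugePages_Total: 1024' 'HugePages_Rsvd: 0' > /tmp/meminfo.sample
get_meminfo_field HugePages_Rsvd /tmp/meminfo.sample   # prints 0
```

Because the loop matches on the key field only, the same helper serves every counter the log queries (`HugePages_Total`, `HugePages_Surp`, `AnonHugePages`, and so on).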
mem=("${mem[@]#Node +([0-9]) }") 00:04:58.599 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.599 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.599 01:26:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43344692 kB' 'MemAvailable: 46856144 kB' 'Buffers: 2704 kB' 'Cached: 12750720 kB' 'SwapCached: 0 kB' 'Active: 9701676 kB' 'Inactive: 3508456 kB' 'Active(anon): 9306628 kB' 'Inactive(anon): 0 kB' 'Active(file): 395048 kB' 'Inactive(file): 3508456 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 459964 kB' 'Mapped: 202224 kB' 'Shmem: 8849920 kB' 'KReclaimable: 203140 kB' 'Slab: 587848 kB' 'SReclaimable: 203140 kB' 'SUnreclaim: 384708 kB' 'KernelStack: 12736 kB' 'PageTables: 7884 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10452696 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196844 kB' 'VmallocChunk: 0 kB' 'Percpu: 37440 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2383452 kB' 'DirectMap2M: 20604928 kB' 'DirectMap1G: 46137344 kB' 00:04:58.599 01:26:11 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.599 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.599 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.599 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.599 01:26:11 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.599 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.599 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.599 
01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.599 01:26:11 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.599 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.599 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.599 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.599 01:26:11 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.599 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.599 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.599 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.599 01:26:11 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.599 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.599 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.599 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.599 01:26:11 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.599 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.599 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.599 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.599 01:26:11 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.599 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.599 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.599 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.599 01:26:11 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.599 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.599 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.599 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.599 01:26:11 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.599 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.599 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.599 01:26:11 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:58.599 01:26:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.599 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.600 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.600 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.600 01:26:11 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.600 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.600 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.600 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.600 01:26:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.600 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.600 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.600 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.600 01:26:11 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.600 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.600 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.600 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.600 01:26:11 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.600 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.600 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.600 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.600 01:26:11 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.600 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.600 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.600 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.600 01:26:11 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.600 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.600 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.600 01:26:11 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:58.600 01:26:11 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.600 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.600 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.600 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.600 01:26:11 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.600 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.600 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.600 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.600 01:26:11 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.600 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.600 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.600 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.600 01:26:11 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.600 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.600 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.600 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.600 01:26:11 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.600 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.600 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.600 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.600 01:26:11 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.600 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.600 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.600 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.600 01:26:11 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.600 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.600 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.600 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 
00:04:58.600 01:26:11 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.600 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.600 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.600 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.600 01:26:11 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.600 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.600 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.600 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.600 01:26:11 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.600 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.600 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.600 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.600 01:26:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.600 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.600 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.600 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.600 01:26:11 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.600 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.600 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.600 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.600 01:26:11 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.600 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.600 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.600 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.600 01:26:11 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.600 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.600 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.600 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.600 
01:26:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.600 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.600 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.600 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.600 01:26:11 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.600 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.600 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.600 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.600 01:26:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.600 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.600 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.600 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.600 01:26:11 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.600 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.600 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.600 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.600 01:26:11 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.600 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.600 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.600 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.600 01:26:11 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.600 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.600 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.600 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.600 01:26:11 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.600 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.600 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.600 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.600 01:26:11 -- 
setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.600 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.600 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.600 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.600 01:26:11 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.600 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.600 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.600 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.600 01:26:11 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.600 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.600 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.600 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.600 01:26:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.600 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.600 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.600 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.600 01:26:11 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.600 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.600 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.600 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.600 01:26:11 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.600 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.600 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.600 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.600 01:26:11 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.600 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.600 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.600 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.600 01:26:11 -- 
setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.600 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.600 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.600 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.600 01:26:11 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.600 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.600 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.600 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.601 01:26:11 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.601 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.601 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.601 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.601 01:26:11 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.601 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.601 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.601 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.601 01:26:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:58.601 01:26:11 -- setup/common.sh@33 -- # echo 1024 00:04:58.601 01:26:11 -- setup/common.sh@33 -- # return 0 00:04:58.601 01:26:11 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:58.601 01:26:11 -- setup/hugepages.sh@112 -- # get_nodes 00:04:58.601 01:26:11 -- setup/hugepages.sh@27 -- # local node 00:04:58.601 01:26:11 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:58.601 01:26:11 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:58.601 01:26:11 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:58.601 01:26:11 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:58.601 01:26:11 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:58.601 01:26:11 
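The `get_nodes` step traced above enumerates NUMA node directories with the extglob pattern `/sys/devices/system/node/node+([0-9])` and indexes `nodes_sys[]` by the trailing node number, ending with `no_nodes=2`. A self-contained sketch of that enumeration, using a temporary directory tree since `/sys` paths may be absent in a container:

```shell
# Sketch of the get_nodes enumeration in the trace: glob node
# directories with extglob and key an array by the node number.
shopt -s extglob nullglob
base=$(mktemp -d)
mkdir -p "$base"/node0 "$base"/node1   # stand-ins for /sys node dirs

nodes_sys=()
for node in "$base"/node+([0-9]); do
    nodes_sys[${node##*node}]=0        # strip up to "node", keep the index
done
no_nodes=${#nodes_sys[@]}
echo "no_nodes=$no_nodes"              # no_nodes=2
```

The `${node##*node}` expansion is what turns a full sysfs path into a bare node index, mirroring `nodes_sys[${node##*node}]` in `setup/hugepages.sh@30`.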
-- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:58.601 01:26:11 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:58.601 01:26:11 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:58.601 01:26:11 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:58.601 01:26:11 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:58.601 01:26:11 -- setup/common.sh@18 -- # local node=0 00:04:58.601 01:26:11 -- setup/common.sh@19 -- # local var val 00:04:58.601 01:26:11 -- setup/common.sh@20 -- # local mem_f mem 00:04:58.601 01:26:11 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:58.601 01:26:11 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:58.601 01:26:11 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:58.601 01:26:11 -- setup/common.sh@28 -- # mapfile -t mem 00:04:58.601 01:26:11 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:58.601 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.601 01:26:11 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 25846344 kB' 'MemUsed: 6983540 kB' 'SwapCached: 0 kB' 'Active: 3577724 kB' 'Inactive: 156516 kB' 'Active(anon): 3416432 kB' 'Inactive(anon): 0 kB' 'Active(file): 161292 kB' 'Inactive(file): 156516 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3535084 kB' 'Mapped: 71532 kB' 'AnonPages: 202448 kB' 'Shmem: 3217276 kB' 'KernelStack: 7032 kB' 'PageTables: 3384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 96992 kB' 'Slab: 324540 kB' 'SReclaimable: 96992 kB' 'SUnreclaim: 227548 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:58.601 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.601 01:26:11 -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.601 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.601 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.601 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.601 01:26:11 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.601 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.601 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.601 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.601 01:26:11 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.601 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.601 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.601 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.601 01:26:11 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.859 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.860 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.860 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.860 01:26:11 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.860 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.860 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.860 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.860 01:26:11 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.860 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.860 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.860 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.860 01:26:11 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.860 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.860 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.860 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.860 01:26:11 -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.860 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.860 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.860 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.860 01:26:11 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.860 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.860 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.860 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.860 01:26:11 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.860 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.860 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.860 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.860 01:26:11 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.860 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.860 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.860 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.860 01:26:11 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.860 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.860 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.860 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.860 01:26:11 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.860 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.860 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.860 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.860 01:26:11 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.860 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.860 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.860 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.860 01:26:11 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:58.860 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.860 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.860 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.860 01:26:11 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.860 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.860 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.860 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.860 01:26:11 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.860 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.860 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.860 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.860 01:26:11 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.860 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.860 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.860 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.860 01:26:11 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.860 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.860 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.860 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.860 01:26:11 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.860 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.860 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.860 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.860 01:26:11 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.860 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.860 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.860 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.860 01:26:11 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.860 01:26:11 -- 
setup/common.sh@32 -- # continue 00:04:58.860 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.860 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.860 01:26:11 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.860 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.860 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.860 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.860 01:26:11 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.860 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.860 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.860 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.860 01:26:11 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.860 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.860 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.860 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.860 01:26:11 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.860 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.860 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.860 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.860 01:26:11 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.860 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.860 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.860 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.860 01:26:11 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.860 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.860 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.860 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.860 01:26:11 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.860 01:26:11 -- setup/common.sh@32 -- # continue 
00:04:58.860 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.860 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.860 01:26:11 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.860 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.860 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.860 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.860 01:26:11 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.860 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.860 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.860 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.860 01:26:11 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.860 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.860 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.860 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.860 01:26:11 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.860 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.860 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.860 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.860 01:26:11 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.860 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.860 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.860 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.860 01:26:11 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.860 01:26:11 -- setup/common.sh@32 -- # continue 00:04:58.860 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.860 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.860 01:26:11 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.860 01:26:11 -- setup/common.sh@32 -- # continue 
00:04:58.860 01:26:11 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.860 01:26:11 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.860 01:26:11 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.860 01:26:11 -- setup/common.sh@33 -- # echo 0 00:04:58.860 01:26:11 -- setup/common.sh@33 -- # return 0 00:04:58.860 01:26:11 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:58.860 01:26:11 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:58.860 01:26:11 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:58.860 01:26:11 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:58.860 01:26:11 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:58.860 node0=1024 expecting 1024 00:04:58.860 01:26:11 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:58.860 01:26:11 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:58.860 01:26:11 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:58.860 01:26:11 -- setup/hugepages.sh@202 -- # setup output 00:04:58.860 01:26:11 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:58.860 01:26:11 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:59.797 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:59.797 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:59.797 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:59.797 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:59.797 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:59.797 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:59.797 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:59.797 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:59.797 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:59.797 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:59.797 
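The verification that just completed (`node0=1024 expecting 1024`) folds reserved and surplus pages into each node's count and checks that the per-node totals sum to the requested allocation. A minimal sketch of that arithmetic, with values mirroring this log (node0 holds all 1024 pages, node1 none, no reserved or surplus pages):

```shell
# Sketch of the per-node hugepage verification in the trace: add resv
# to each node's count, then compare the sum against nr_hugepages.
nr_hugepages=1024
declare -a nodes_test=([0]=1024 [1]=0)   # values taken from this log
resv=0 surp=0
total=0
for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))
    (( total += nodes_test[node] ))
    echo "node${node}=${nodes_test[node]} expecting ${nodes_test[node]}"
done
(( total == nr_hugepages + surp + resv )) && echo "verification passed"
```

This is the same invariant checked twice in the trace, at `setup/hugepages.sh@107` and `@110`: allocated pages must equal `nr_hugepages + surp + resv` or the test fails.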
0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:59.797 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:59.797 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:59.797 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:59.797 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:59.797 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:59.797 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:05:00.059 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:05:00.059 01:26:12 -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:05:00.059 01:26:12 -- setup/hugepages.sh@89 -- # local node
00:05:00.059 01:26:12 -- setup/hugepages.sh@90 -- # local sorted_t
00:05:00.059 01:26:12 -- setup/hugepages.sh@91 -- # local sorted_s
00:05:00.059 01:26:12 -- setup/hugepages.sh@92 -- # local surp
00:05:00.059 01:26:12 -- setup/hugepages.sh@93 -- # local resv
00:05:00.059 01:26:12 -- setup/hugepages.sh@94 -- # local anon
00:05:00.059 01:26:12 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:00.059 01:26:12 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:00.059 01:26:12 -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:00.059 01:26:12 -- setup/common.sh@18 -- # local node=
00:05:00.059 01:26:12 -- setup/common.sh@19 -- # local var val
00:05:00.059 01:26:12 -- setup/common.sh@20 -- # local mem_f mem
00:05:00.059 01:26:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:00.059 01:26:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:00.059 01:26:12 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:00.059 01:26:12 -- setup/common.sh@28 -- # mapfile -t mem
00:05:00.059 01:26:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:00.059 01:26:12 -- setup/common.sh@31 -- # IFS=': '
00:05:00.059 01:26:12 -- setup/common.sh@31 -- # read -r var val _
00:05:00.059 01:26:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43329464 kB' 'MemAvailable: 46840912 kB' 'Buffers: 2704 kB' 'Cached: 12750772 kB' 'SwapCached: 0 kB' 'Active: 9697152 kB' 'Inactive: 3508456 kB' 'Active(anon): 9302104 kB' 'Inactive(anon): 0 kB' 'Active(file): 395048 kB' 'Inactive(file): 3508456 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 455428 kB' 'Mapped: 201388 kB' 'Shmem: 8849972 kB' 'KReclaimable: 203132 kB' 'Slab: 587748 kB' 'SReclaimable: 203132 kB' 'SUnreclaim: 384616 kB' 'KernelStack: 12784 kB' 'PageTables: 7972 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10446752 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197000 kB' 'VmallocChunk: 0 kB' 'Percpu: 37440 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2383452 kB' 'DirectMap2M: 20604928 kB' 'DirectMap1G: 46137344 kB'
[repetitive xtrace elided: setup/common.sh@31-32 read each /proc/meminfo field in turn, compared it against AnonHugePages, and continued on every non-matching field]
00:05:00.060 01:26:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:00.060 01:26:13 -- setup/common.sh@33 -- # echo 0
00:05:00.060 01:26:13 -- setup/common.sh@33 -- # return 0
00:05:00.060 01:26:13 -- setup/hugepages.sh@97 -- # anon=0
00:05:00.060 01:26:13 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:00.060 01:26:13 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:00.060 01:26:13 -- setup/common.sh@18 -- # local node=
00:05:00.060 01:26:13 -- setup/common.sh@19 -- # local var val
00:05:00.060 01:26:13 -- setup/common.sh@20 -- # local mem_f mem
00:05:00.060 01:26:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:00.060 01:26:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:00.060 01:26:13 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:00.060 01:26:13 -- setup/common.sh@28 -- # mapfile -t mem
00:05:00.060 01:26:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:00.060 01:26:13 -- setup/common.sh@31 -- # IFS=': '
00:05:00.060 01:26:13 -- setup/common.sh@31 -- # read -r var val _
00:05:00.061 01:26:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43335056 kB' 'MemAvailable: 46846504 kB' 'Buffers: 2704 kB' 'Cached: 12750776 kB' 'SwapCached: 0 kB' 'Active: 9697332 kB' 'Inactive: 3508456 kB' 'Active(anon): 9302284 kB' 'Inactive(anon): 0 kB' 'Active(file): 395048 kB' 'Inactive(file): 3508456 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 455556 kB' 'Mapped: 201388 kB' 'Shmem: 8849976 kB' 'KReclaimable: 203132 kB' 'Slab: 587860 kB' 'SReclaimable: 203132 kB' 'SUnreclaim: 384728 kB' 'KernelStack: 12752 kB' 'PageTables: 7880 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10446764 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196952 kB' 'VmallocChunk: 0 kB' 'Percpu: 37440 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2383452 kB' 'DirectMap2M: 20604928 kB' 'DirectMap1G: 46137344 kB'
[repetitive xtrace elided: per-field scan of /proc/meminfo for HugePages_Surp, continuing on every non-matching field]
00:05:00.062 01:26:13 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:00.062 01:26:13 -- setup/common.sh@33 -- # echo 0
00:05:00.062 01:26:13 -- setup/common.sh@33 -- # return 0
00:05:00.062 01:26:13 -- setup/hugepages.sh@99 -- # surp=0
00:05:00.062 01:26:13 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:00.062 01:26:13 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:00.062 01:26:13 -- setup/common.sh@18 -- # local node=
00:05:00.062 01:26:13 -- setup/common.sh@19 -- # local var val
00:05:00.062 01:26:13 -- setup/common.sh@20 -- # local mem_f mem
00:05:00.062 01:26:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:00.062 01:26:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:00.062 01:26:13 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:00.062 01:26:13 -- setup/common.sh@28 -- # mapfile -t mem
00:05:00.062 01:26:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:00.062 01:26:13 -- setup/common.sh@31 -- # IFS=': '
00:05:00.062 01:26:13 -- setup/common.sh@31 -- # read -r var val _
00:05:00.062 01:26:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43335428 kB' 'MemAvailable: 46846876 kB' 'Buffers: 2704 kB' 'Cached: 12750776 kB' 'SwapCached: 0 kB' 'Active: 9696944 kB' 'Inactive: 3508456 kB' 'Active(anon): 9301896 kB' 'Inactive(anon): 0 kB' 'Active(file): 395048 kB' 'Inactive(file): 3508456 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 455292 kB' 'Mapped: 201308 kB' 'Shmem: 8849976 kB' 'KReclaimable: 203132 kB' 'Slab: 587860 kB' 'SReclaimable: 203132 kB' 'SUnreclaim: 384728 kB' 'KernelStack: 12864 kB' 'PageTables: 8196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10446776 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196952 kB' 'VmallocChunk: 0 kB' 'Percpu: 37440 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2383452 kB' 'DirectMap2M: 20604928 kB' 'DirectMap1G: 46137344 kB'
[repetitive xtrace elided: per-field scan of /proc/meminfo for HugePages_Rsvd]
00:05:00.063 01:26:13 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.063 01:26:13 -- setup/common.sh@32 -- # continue 00:05:00.063 01:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.063 01:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.063 01:26:13 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.063 01:26:13 -- setup/common.sh@32 -- # continue 00:05:00.063 01:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.063 01:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.063 01:26:13 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.063 01:26:13 -- setup/common.sh@32 -- # continue 00:05:00.063 01:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.063 01:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.063 01:26:13 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.063 01:26:13 -- setup/common.sh@32 -- # continue 00:05:00.063 01:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.063 01:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.063 01:26:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.063 01:26:13 -- setup/common.sh@32 -- # continue 00:05:00.063 01:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.063 01:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.063 01:26:13 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.063 01:26:13 -- setup/common.sh@32 -- # continue 00:05:00.063 01:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.063 01:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.063 01:26:13 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.063 01:26:13 -- setup/common.sh@32 -- # continue 00:05:00.063 01:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.063 01:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.063 01:26:13 -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.063 01:26:13 -- setup/common.sh@32 -- # continue 00:05:00.063 01:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.063 01:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.063 01:26:13 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.063 01:26:13 -- setup/common.sh@32 -- # continue 00:05:00.063 01:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.063 01:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.063 01:26:13 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.063 01:26:13 -- setup/common.sh@32 -- # continue 00:05:00.063 01:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.063 01:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.063 01:26:13 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.063 01:26:13 -- setup/common.sh@32 -- # continue 00:05:00.063 01:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.063 01:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.063 01:26:13 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.063 01:26:13 -- setup/common.sh@32 -- # continue 00:05:00.063 01:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.063 01:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.063 01:26:13 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.063 01:26:13 -- setup/common.sh@32 -- # continue 00:05:00.063 01:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.063 01:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.063 01:26:13 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.063 01:26:13 -- setup/common.sh@32 -- # continue 00:05:00.063 01:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.063 01:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.063 01:26:13 -- setup/common.sh@32 -- # 
[[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.063 01:26:13 -- setup/common.sh@32 -- # continue 00:05:00.063 01:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.063 01:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.063 01:26:13 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.063 01:26:13 -- setup/common.sh@32 -- # continue 00:05:00.063 01:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.063 01:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.063 01:26:13 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.063 01:26:13 -- setup/common.sh@32 -- # continue 00:05:00.063 01:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.063 01:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.063 01:26:13 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.063 01:26:13 -- setup/common.sh@32 -- # continue 00:05:00.063 01:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.063 01:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.063 01:26:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.063 01:26:13 -- setup/common.sh@32 -- # continue 00:05:00.063 01:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.063 01:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.063 01:26:13 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.063 01:26:13 -- setup/common.sh@32 -- # continue 00:05:00.063 01:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.063 01:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.063 01:26:13 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.063 01:26:13 -- setup/common.sh@32 -- # continue 00:05:00.063 01:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.063 01:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.063 01:26:13 -- setup/common.sh@32 -- # [[ 
FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.063 01:26:13 -- setup/common.sh@32 -- # continue 00:05:00.063 01:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.063 01:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.063 01:26:13 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.063 01:26:13 -- setup/common.sh@32 -- # continue 00:05:00.063 01:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.063 01:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.063 01:26:13 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.063 01:26:13 -- setup/common.sh@32 -- # continue 00:05:00.063 01:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.063 01:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.063 01:26:13 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.063 01:26:13 -- setup/common.sh@32 -- # continue 00:05:00.063 01:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.063 01:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.063 01:26:13 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.063 01:26:13 -- setup/common.sh@32 -- # continue 00:05:00.063 01:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.063 01:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.063 01:26:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.063 01:26:13 -- setup/common.sh@32 -- # continue 00:05:00.063 01:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.063 01:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.063 01:26:13 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.063 01:26:13 -- setup/common.sh@32 -- # continue 00:05:00.063 01:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.063 01:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.063 01:26:13 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.063 01:26:13 -- setup/common.sh@33 -- # echo 0 00:05:00.063 01:26:13 -- setup/common.sh@33 -- # return 0 00:05:00.063 01:26:13 -- setup/hugepages.sh@100 -- # resv=0 00:05:00.063 01:26:13 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:00.063 nr_hugepages=1024 00:05:00.063 01:26:13 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:00.063 resv_hugepages=0 00:05:00.063 01:26:13 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:00.063 surplus_hugepages=0 00:05:00.063 01:26:13 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:00.063 anon_hugepages=0 00:05:00.063 01:26:13 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:00.063 01:26:13 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:00.063 01:26:13 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:00.063 01:26:13 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:00.063 01:26:13 -- setup/common.sh@18 -- # local node= 00:05:00.063 01:26:13 -- setup/common.sh@19 -- # local var val 00:05:00.063 01:26:13 -- setup/common.sh@20 -- # local mem_f mem 00:05:00.063 01:26:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:00.063 01:26:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:00.063 01:26:13 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:00.063 01:26:13 -- setup/common.sh@28 -- # mapfile -t mem 00:05:00.063 01:26:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:00.064 01:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.064 01:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.064 01:26:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43336328 kB' 'MemAvailable: 46847776 kB' 'Buffers: 2704 kB' 'Cached: 12750800 kB' 'SwapCached: 0 kB' 'Active: 9696408 kB' 'Inactive: 3508456 kB' 'Active(anon): 9301360 kB' 'Inactive(anon): 0 kB' 'Active(file): 395048 kB' 'Inactive(file): 3508456 kB' 
'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 454636 kB' 'Mapped: 201308 kB' 'Shmem: 8850000 kB' 'KReclaimable: 203132 kB' 'Slab: 587844 kB' 'SReclaimable: 203132 kB' 'SUnreclaim: 384712 kB' 'KernelStack: 12736 kB' 'PageTables: 7812 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 10446792 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196936 kB' 'VmallocChunk: 0 kB' 'Percpu: 37440 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2383452 kB' 'DirectMap2M: 20604928 kB' 'DirectMap1G: 46137344 kB' 00:05:00.064 01:26:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.064 01:26:13 -- setup/common.sh@32 -- # continue 00:05:00.064 01:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.064 01:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.064 01:26:13 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.064 01:26:13 -- setup/common.sh@32 -- # continue 00:05:00.064 01:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.064 01:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.064 01:26:13 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.064 01:26:13 -- setup/common.sh@32 -- # continue 00:05:00.064 01:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.064 01:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.064 01:26:13 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.064 01:26:13 -- setup/common.sh@32 -- # continue 00:05:00.064 
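The repeated IFS/read/continue entries in this trace are bash xtrace output from the `get_meminfo` helper in `setup/common.sh`, which walks a meminfo dump one "Key: value" line at a time until it reaches the requested key. A minimal stand-alone sketch of that scan, assuming a simplified two-argument form (the real helper takes a node via a `node` local instead of a file argument; this reconstruction is illustrative, not the script's verbatim source):

```shell
# Sketch of the per-key meminfo scan traced above: skip every key
# except the requested one, print its value, and return. Reads from a
# file argument so it can be exercised against a fixture instead of
# the live /proc/meminfo.
get_meminfo() {
    local get=$1 mem_f=${2:-/proc/meminfo}
    local var val _
    while IFS=': ' read -r var val _; do
        # Every non-matching key shows up as one "continue" in the xtrace.
        [[ $var == "$get" ]] || continue
        echo "$val"     # a trailing "kB" unit, if any, lands in $_ and is dropped
        return 0
    done < "$mem_f"
    return 1
}
```

The `hugepages.sh@107` check in the trace then compares the scanned total against the expected sum, `(( 1024 == nr_hugepages + surp + resv ))`, so a reservation or surplus leak would fail the test here.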
[... identical IFS/read/continue iterations for the remaining non-matching fields (Cached through Unaccepted) elided ...]
01:26:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:00.065 01:26:13 -- setup/common.sh@33 -- # echo 1024 00:05:00.065 01:26:13 -- setup/common.sh@33 -- # return 0 00:05:00.065 01:26:13 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:00.065 01:26:13 -- setup/hugepages.sh@112 -- # get_nodes 00:05:00.065 01:26:13 -- setup/hugepages.sh@27 -- # local node 00:05:00.065 01:26:13 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:00.065 01:26:13 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:00.065 01:26:13 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:00.065 01:26:13 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:05:00.065 01:26:13 -- setup/hugepages.sh@32 -- # no_nodes=2 00:05:00.065 01:26:13 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:00.065 01:26:13 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:00.065 01:26:13 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:00.065 01:26:13 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:00.065 01:26:13 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:00.065 01:26:13 -- setup/common.sh@18 -- # local node=0 00:05:00.065 01:26:13 -- setup/common.sh@19 -- # local var 
val 00:05:00.065 01:26:13 -- setup/common.sh@20 -- # local mem_f mem 00:05:00.065 01:26:13 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:00.065 01:26:13 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:00.065 01:26:13 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:00.065 01:26:13 -- setup/common.sh@28 -- # mapfile -t mem 00:05:00.065 01:26:13 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:00.065 01:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.065 01:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.065 01:26:13 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 25841716 kB' 'MemUsed: 6988168 kB' 'SwapCached: 0 kB' 'Active: 3578000 kB' 'Inactive: 156516 kB' 'Active(anon): 3416708 kB' 'Inactive(anon): 0 kB' 'Active(file): 161292 kB' 'Inactive(file): 156516 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3535164 kB' 'Mapped: 71540 kB' 'AnonPages: 202572 kB' 'Shmem: 3217356 kB' 'KernelStack: 7064 kB' 'PageTables: 3392 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 96984 kB' 'Slab: 324588 kB' 'SReclaimable: 96984 kB' 'SUnreclaim: 227604 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:00.065 01:26:13 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.065 01:26:13 -- setup/common.sh@32 -- # continue
[... identical IFS/read/continue iterations for the remaining non-matching node0 fields (MemFree through Bounce) elided ...]
01:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.066 01:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.066 01:26:13 -- setup/common.sh@32 -- # [[ 
WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.066 01:26:13 -- setup/common.sh@32 -- # continue 00:05:00.066 01:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.066 01:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.066 01:26:13 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.066 01:26:13 -- setup/common.sh@32 -- # continue 00:05:00.066 01:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.066 01:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.066 01:26:13 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.066 01:26:13 -- setup/common.sh@32 -- # continue 00:05:00.066 01:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.066 01:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.066 01:26:13 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.066 01:26:13 -- setup/common.sh@32 -- # continue 00:05:00.066 01:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.066 01:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.066 01:26:13 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.066 01:26:13 -- setup/common.sh@32 -- # continue 00:05:00.066 01:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.066 01:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.066 01:26:13 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.066 01:26:13 -- setup/common.sh@32 -- # continue 00:05:00.066 01:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.066 01:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.066 01:26:13 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.066 01:26:13 -- setup/common.sh@32 -- # continue 00:05:00.066 01:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.066 01:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.066 01:26:13 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.066 01:26:13 -- setup/common.sh@32 -- # continue 00:05:00.066 01:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.066 01:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.066 01:26:13 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.066 01:26:13 -- setup/common.sh@32 -- # continue 00:05:00.066 01:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.066 01:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.066 01:26:13 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.066 01:26:13 -- setup/common.sh@32 -- # continue 00:05:00.066 01:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.066 01:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.066 01:26:13 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.066 01:26:13 -- setup/common.sh@32 -- # continue 00:05:00.066 01:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.066 01:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.066 01:26:13 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.066 01:26:13 -- setup/common.sh@32 -- # continue 00:05:00.066 01:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.066 01:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.066 01:26:13 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.066 01:26:13 -- setup/common.sh@32 -- # continue 00:05:00.066 01:26:13 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.066 01:26:13 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.066 01:26:13 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.066 01:26:13 -- setup/common.sh@33 -- # echo 0 00:05:00.066 01:26:13 -- setup/common.sh@33 -- # return 0 00:05:00.066 01:26:13 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:00.066 01:26:13 -- setup/hugepages.sh@126 -- # for node in 
"${!nodes_test[@]}" 00:05:00.066 01:26:13 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:00.066 01:26:13 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:00.066 01:26:13 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:00.066 node0=1024 expecting 1024 00:05:00.066 01:26:13 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:00.066 00:05:00.066 real 0m2.844s 00:05:00.066 user 0m1.193s 00:05:00.066 sys 0m1.582s 00:05:00.066 01:26:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:00.066 01:26:13 -- common/autotest_common.sh@10 -- # set +x 00:05:00.066 ************************************ 00:05:00.066 END TEST no_shrink_alloc 00:05:00.066 ************************************ 00:05:00.066 01:26:13 -- setup/hugepages.sh@217 -- # clear_hp 00:05:00.066 01:26:13 -- setup/hugepages.sh@37 -- # local node hp 00:05:00.066 01:26:13 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:00.066 01:26:13 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:00.066 01:26:13 -- setup/hugepages.sh@41 -- # echo 0 00:05:00.066 01:26:13 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:00.066 01:26:13 -- setup/hugepages.sh@41 -- # echo 0 00:05:00.066 01:26:13 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:00.066 01:26:13 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:00.066 01:26:13 -- setup/hugepages.sh@41 -- # echo 0 00:05:00.066 01:26:13 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:00.066 01:26:13 -- setup/hugepages.sh@41 -- # echo 0 00:05:00.066 01:26:13 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:00.066 01:26:13 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:00.325 00:05:00.325 real 0m11.262s 00:05:00.325 user 0m4.246s 00:05:00.325 sys 
0m5.963s 00:05:00.325 01:26:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:00.325 01:26:13 -- common/autotest_common.sh@10 -- # set +x 00:05:00.325 ************************************ 00:05:00.325 END TEST hugepages 00:05:00.325 ************************************ 00:05:00.325 01:26:13 -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:05:00.325 01:26:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:00.325 01:26:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:00.325 01:26:13 -- common/autotest_common.sh@10 -- # set +x 00:05:00.325 ************************************ 00:05:00.325 START TEST driver 00:05:00.325 ************************************ 00:05:00.325 01:26:13 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:05:00.325 * Looking for test storage... 00:05:00.325 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:05:00.325 01:26:13 -- setup/driver.sh@68 -- # setup reset 00:05:00.325 01:26:13 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:00.325 01:26:13 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:02.854 01:26:15 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:02.854 01:26:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:02.854 01:26:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:02.854 01:26:15 -- common/autotest_common.sh@10 -- # set +x 00:05:02.854 ************************************ 00:05:02.854 START TEST guess_driver 00:05:02.854 ************************************ 00:05:02.854 01:26:15 -- common/autotest_common.sh@1104 -- # guess_driver 00:05:02.854 01:26:15 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:02.854 01:26:15 -- setup/driver.sh@47 -- # local fail=0 00:05:02.854 01:26:15 -- 
setup/driver.sh@49 -- # pick_driver 00:05:02.854 01:26:15 -- setup/driver.sh@36 -- # vfio 00:05:02.854 01:26:15 -- setup/driver.sh@21 -- # local iommu_grups 00:05:02.854 01:26:15 -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:02.854 01:26:15 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:02.854 01:26:15 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:05:02.854 01:26:15 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:02.854 01:26:15 -- setup/driver.sh@29 -- # (( 141 > 0 )) 00:05:02.854 01:26:15 -- setup/driver.sh@30 -- # is_driver vfio_pci 00:05:02.854 01:26:15 -- setup/driver.sh@14 -- # mod vfio_pci 00:05:02.854 01:26:15 -- setup/driver.sh@12 -- # dep vfio_pci 00:05:02.854 01:26:15 -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:05:02.854 01:26:15 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:05:02.854 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:05:02.854 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:05:02.854 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:05:02.854 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:05:02.854 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:05:02.854 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:05:02.854 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:05:02.854 01:26:15 -- setup/driver.sh@30 -- # return 0 00:05:02.854 01:26:15 -- setup/driver.sh@37 -- # echo vfio-pci 00:05:02.854 01:26:15 -- setup/driver.sh@49 -- # driver=vfio-pci 00:05:02.854 01:26:15 -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:02.854 01:26:15 -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 
00:05:02.854 Looking for driver=vfio-pci 00:05:02.854 01:26:15 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:02.854 01:26:15 -- setup/driver.sh@45 -- # setup output config 00:05:02.854 01:26:15 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:02.854 01:26:15 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:04.227 01:26:16 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:04.227 01:26:16 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:04.227 01:26:16 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:04.227 01:26:16 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:04.227 01:26:16 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:04.227 01:26:16 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:04.227 01:26:16 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:04.227 01:26:16 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:04.227 01:26:16 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:04.227 01:26:16 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:04.227 01:26:16 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:04.227 01:26:16 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:04.227 01:26:16 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:04.227 01:26:16 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:04.227 01:26:16 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:04.227 01:26:16 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:04.227 01:26:16 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:04.227 01:26:16 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:04.227 01:26:16 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:04.227 01:26:16 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:04.227 01:26:16 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:04.227 01:26:17 
-- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:04.227 01:26:17 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:04.227 01:26:17 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:04.227 01:26:17 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:04.227 01:26:17 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:04.227 01:26:17 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:04.227 01:26:17 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:04.227 01:26:17 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:04.227 01:26:17 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:04.227 01:26:17 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:04.227 01:26:17 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:04.227 01:26:17 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:04.227 01:26:17 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:04.227 01:26:17 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:04.227 01:26:17 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:04.227 01:26:17 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:04.227 01:26:17 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:04.227 01:26:17 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:04.227 01:26:17 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:04.227 01:26:17 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:04.227 01:26:17 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:04.227 01:26:17 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:04.227 01:26:17 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:04.227 01:26:17 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:04.227 01:26:17 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:04.227 01:26:17 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:04.227 01:26:17 -- setup/driver.sh@57 -- # read -r _ _ _ 
_ marker setup_driver 00:05:05.162 01:26:17 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:05.162 01:26:17 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:05.162 01:26:17 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:05.162 01:26:18 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:05.162 01:26:18 -- setup/driver.sh@65 -- # setup reset 00:05:05.162 01:26:18 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:05.162 01:26:18 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:07.721 00:05:07.722 real 0m4.821s 00:05:07.722 user 0m1.116s 00:05:07.722 sys 0m1.834s 00:05:07.722 01:26:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:07.722 01:26:20 -- common/autotest_common.sh@10 -- # set +x 00:05:07.722 ************************************ 00:05:07.722 END TEST guess_driver 00:05:07.722 ************************************ 00:05:07.722 00:05:07.722 real 0m7.375s 00:05:07.722 user 0m1.672s 00:05:07.722 sys 0m2.855s 00:05:07.722 01:26:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:07.722 01:26:20 -- common/autotest_common.sh@10 -- # set +x 00:05:07.722 ************************************ 00:05:07.722 END TEST driver 00:05:07.722 ************************************ 00:05:07.722 01:26:20 -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:05:07.722 01:26:20 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:07.722 01:26:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:07.722 01:26:20 -- common/autotest_common.sh@10 -- # set +x 00:05:07.722 ************************************ 00:05:07.722 START TEST devices 00:05:07.722 ************************************ 00:05:07.722 01:26:20 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:05:07.722 * Looking for test storage... 
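The guess_driver trace above reduces to one decision: vfio-pci is chosen when IOMMU groups exist under /sys/kernel/iommu_groups (141 on this node) and `modprobe --show-depends vfio_pci` resolves to real `.ko` modules (the `insmod …/vfio-pci.ko.xz` lines). A minimal standalone sketch of that decision — the helper name and its arguments here are assumptions for illustration, not the actual API of test/setup/driver.sh:

```shell
# Illustrative condensation of the vfio-pci pick traced in guess_driver above.
# pick_driver MODPROBE_DEPENDS_OUTPUT IOMMU_GROUP_COUNT  (hypothetical helper)
pick_driver() {
    local depends=$1 ngroups=$2
    # driver.sh counts /sys/kernel/iommu_groups/* and accepts vfio_pci when
    # `modprobe --show-depends vfio_pci` emits insmod lines naming .ko files.
    if (( ngroups > 0 )) && [[ $depends == *.ko* ]]; then
        echo "vfio-pci"
    else
        echo "No valid driver found"
    fi
}
```

On a live host this would be fed real values, e.g. `pick_driver "$(modprobe --show-depends vfio_pci 2>/dev/null)" "$(ls /sys/kernel/iommu_groups | wc -l)"`.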
00:05:07.722 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:05:07.722 01:26:20 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:07.722 01:26:20 -- setup/devices.sh@192 -- # setup reset 00:05:07.722 01:26:20 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:07.722 01:26:20 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:09.097 01:26:22 -- setup/devices.sh@194 -- # get_zoned_devs 00:05:09.098 01:26:22 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:05:09.098 01:26:22 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:05:09.098 01:26:22 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:05:09.098 01:26:22 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:09.098 01:26:22 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:05:09.098 01:26:22 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:05:09.098 01:26:22 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:09.098 01:26:22 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:09.098 01:26:22 -- setup/devices.sh@196 -- # blocks=() 00:05:09.098 01:26:22 -- setup/devices.sh@196 -- # declare -a blocks 00:05:09.098 01:26:22 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:09.098 01:26:22 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:09.098 01:26:22 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:09.098 01:26:22 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:09.098 01:26:22 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:09.098 01:26:22 -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:09.098 01:26:22 -- setup/devices.sh@202 -- # pci=0000:88:00.0 00:05:09.098 01:26:22 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:05:09.098 01:26:22 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:09.098 01:26:22 -- scripts/common.sh@380 
-- # local block=nvme0n1 pt 00:05:09.098 01:26:22 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:05:09.098 No valid GPT data, bailing 00:05:09.098 01:26:22 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:09.356 01:26:22 -- scripts/common.sh@393 -- # pt= 00:05:09.356 01:26:22 -- scripts/common.sh@394 -- # return 1 00:05:09.356 01:26:22 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:09.356 01:26:22 -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:09.356 01:26:22 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:09.356 01:26:22 -- setup/common.sh@80 -- # echo 1000204886016 00:05:09.356 01:26:22 -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:05:09.356 01:26:22 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:09.356 01:26:22 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:88:00.0 00:05:09.356 01:26:22 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:05:09.356 01:26:22 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:09.356 01:26:22 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:09.356 01:26:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:09.356 01:26:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:09.356 01:26:22 -- common/autotest_common.sh@10 -- # set +x 00:05:09.356 ************************************ 00:05:09.356 START TEST nvme_mount 00:05:09.356 ************************************ 00:05:09.356 01:26:22 -- common/autotest_common.sh@1104 -- # nvme_mount 00:05:09.356 01:26:22 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:09.356 01:26:22 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:09.356 01:26:22 -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:09.356 01:26:22 -- setup/devices.sh@98 -- # 
nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:09.356 01:26:22 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:09.356 01:26:22 -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:09.356 01:26:22 -- setup/common.sh@40 -- # local part_no=1 00:05:09.356 01:26:22 -- setup/common.sh@41 -- # local size=1073741824 00:05:09.356 01:26:22 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:09.356 01:26:22 -- setup/common.sh@44 -- # parts=() 00:05:09.356 01:26:22 -- setup/common.sh@44 -- # local parts 00:05:09.356 01:26:22 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:09.356 01:26:22 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:09.356 01:26:22 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:09.356 01:26:22 -- setup/common.sh@46 -- # (( part++ )) 00:05:09.356 01:26:22 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:09.356 01:26:22 -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:09.356 01:26:22 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:09.356 01:26:22 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:10.293 Creating new GPT entries in memory. 00:05:10.294 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:10.294 other utilities. 00:05:10.294 01:26:23 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:10.294 01:26:23 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:10.294 01:26:23 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:10.294 01:26:23 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:10.294 01:26:23 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:11.230 Creating new GPT entries in memory. 00:05:11.230 The operation has completed successfully. 
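The `sgdisk /dev/nvme0n1 --new=1:2048:2099199` call above follows directly from the partition arithmetic traced in setup/common.sh (`(( size /= 512 ))`, then `part_end = part_start + size - 1`): a 1 GiB partition expressed in 512-byte sectors, starting at the first usable GPT data sector. A standalone sketch of that computation:

```shell
# Reproduce the sector arithmetic behind the sgdisk --new argument above.
size=1073741824            # 1 GiB requested per partition, in bytes
(( size /= 512 ))          # convert to 512-byte sectors -> 2097152
part_start=2048            # first usable data sector on a GPT disk
(( part_end = part_start + size - 1 ))
echo "--new=1:${part_start}:${part_end}"
```

This prints `--new=1:2048:2099199`, matching the flocked sgdisk invocation in the trace.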
00:05:11.230 01:26:24 -- setup/common.sh@57 -- # (( part++ )) 00:05:11.230 01:26:24 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:11.230 01:26:24 -- setup/common.sh@62 -- # wait 3644419 00:05:11.230 01:26:24 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:11.230 01:26:24 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:05:11.230 01:26:24 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:11.230 01:26:24 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:11.230 01:26:24 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:11.230 01:26:24 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:11.230 01:26:24 -- setup/devices.sh@105 -- # verify 0000:88:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:11.230 01:26:24 -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:11.230 01:26:24 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:11.230 01:26:24 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:11.230 01:26:24 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:11.230 01:26:24 -- setup/devices.sh@53 -- # local found=0 00:05:11.230 01:26:24 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:11.230 01:26:24 -- setup/devices.sh@56 -- # : 00:05:11.230 01:26:24 -- setup/devices.sh@59 -- # local pci status 00:05:11.230 01:26:24 -- setup/devices.sh@60 -- # read -r pci _ _ 
status 00:05:11.230 01:26:24 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:11.230 01:26:24 -- setup/devices.sh@47 -- # setup output config 00:05:11.230 01:26:24 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:11.230 01:26:24 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:12.607 01:26:25 -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:12.607 01:26:25 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:12.607 01:26:25 -- setup/devices.sh@63 -- # found=1 00:05:12.607 01:26:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.607 01:26:25 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:12.607 01:26:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.607 01:26:25 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:12.607 01:26:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.607 01:26:25 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:12.607 01:26:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.607 01:26:25 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:12.607 01:26:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.607 01:26:25 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:12.607 01:26:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.607 01:26:25 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:12.607 01:26:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.607 01:26:25 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:12.607 01:26:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.607 01:26:25 -- 
setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:12.607 01:26:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.607 01:26:25 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:12.607 01:26:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.607 01:26:25 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:12.607 01:26:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.607 01:26:25 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:12.607 01:26:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.607 01:26:25 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:12.607 01:26:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.607 01:26:25 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:12.607 01:26:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.607 01:26:25 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:12.607 01:26:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.607 01:26:25 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:12.607 01:26:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.607 01:26:25 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:12.607 01:26:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.607 01:26:25 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:12.607 01:26:25 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:12.607 01:26:25 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:12.607 01:26:25 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:12.607 
01:26:25 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:12.607 01:26:25 -- setup/devices.sh@110 -- # cleanup_nvme 00:05:12.607 01:26:25 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:12.607 01:26:25 -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:12.607 01:26:25 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:12.607 01:26:25 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:12.607 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:12.607 01:26:25 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:12.607 01:26:25 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:12.865 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:12.865 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:05:12.865 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:12.865 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:12.865 01:26:25 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:05:12.865 01:26:25 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:05:12.865 01:26:25 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:12.865 01:26:25 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:12.865 01:26:25 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:12.865 01:26:25 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:13.124 01:26:25 -- setup/devices.sh@116 -- # verify 0000:88:00.0 
nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:13.124 01:26:25 -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:13.124 01:26:25 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:13.124 01:26:25 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:13.124 01:26:25 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:13.125 01:26:25 -- setup/devices.sh@53 -- # local found=0 00:05:13.125 01:26:25 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:13.125 01:26:25 -- setup/devices.sh@56 -- # : 00:05:13.125 01:26:25 -- setup/devices.sh@59 -- # local pci status 00:05:13.125 01:26:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.125 01:26:25 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:13.125 01:26:25 -- setup/devices.sh@47 -- # setup output config 00:05:13.125 01:26:25 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:13.125 01:26:25 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:14.061 01:26:27 -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.061 01:26:27 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:14.061 01:26:27 -- setup/devices.sh@63 -- # found=1 00:05:14.061 01:26:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.061 01:26:27 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.061 01:26:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.061 01:26:27 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == 
\0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.061 01:26:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.061 01:26:27 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.061 01:26:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.061 01:26:27 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.061 01:26:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.061 01:26:27 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.061 01:26:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.062 01:26:27 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.062 01:26:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.062 01:26:27 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.062 01:26:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.062 01:26:27 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.062 01:26:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.062 01:26:27 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.062 01:26:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.062 01:26:27 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.062 01:26:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.062 01:26:27 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.062 01:26:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.062 01:26:27 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.062 01:26:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.062 01:26:27 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.062 01:26:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.062 01:26:27 -- 
setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.062 01:26:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.062 01:26:27 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.062 01:26:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.062 01:26:27 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.062 01:26:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.321 01:26:27 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:14.321 01:26:27 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:14.321 01:26:27 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:14.321 01:26:27 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:14.321 01:26:27 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:14.321 01:26:27 -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:14.321 01:26:27 -- setup/devices.sh@125 -- # verify 0000:88:00.0 data@nvme0n1 '' '' 00:05:14.321 01:26:27 -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:14.321 01:26:27 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:14.321 01:26:27 -- setup/devices.sh@50 -- # local mount_point= 00:05:14.321 01:26:27 -- setup/devices.sh@51 -- # local test_file= 00:05:14.321 01:26:27 -- setup/devices.sh@53 -- # local found=0 00:05:14.321 01:26:27 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:14.321 01:26:27 -- setup/devices.sh@59 -- # local pci status 00:05:14.321 01:26:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.321 01:26:27 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:14.321 01:26:27 -- setup/devices.sh@47 -- # setup 
output config 00:05:14.321 01:26:27 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:14.321 01:26:27 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:15.697 01:26:28 -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:15.697 01:26:28 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:15.697 01:26:28 -- setup/devices.sh@63 -- # found=1 00:05:15.697 01:26:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.697 01:26:28 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:15.697 01:26:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.697 01:26:28 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:15.697 01:26:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.697 01:26:28 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:15.697 01:26:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.697 01:26:28 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:15.697 01:26:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.697 01:26:28 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:15.697 01:26:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.697 01:26:28 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:15.697 01:26:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.697 01:26:28 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:15.697 01:26:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.697 01:26:28 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:15.697 01:26:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.697 01:26:28 -- 
setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:15.697 01:26:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.697 01:26:28 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:15.697 01:26:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.697 01:26:28 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:15.697 01:26:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.697 01:26:28 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:15.697 01:26:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.697 01:26:28 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:15.697 01:26:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.697 01:26:28 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:15.697 01:26:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.697 01:26:28 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:15.697 01:26:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.697 01:26:28 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:15.697 01:26:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.697 01:26:28 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:15.697 01:26:28 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:15.697 01:26:28 -- setup/devices.sh@68 -- # return 0 00:05:15.697 01:26:28 -- setup/devices.sh@128 -- # cleanup_nvme 00:05:15.697 01:26:28 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:15.697 01:26:28 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:15.697 01:26:28 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:15.697 01:26:28 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:15.697 /dev/nvme0n1: 2 bytes were erased at 
offset 0x00000438 (ext4): 53 ef 00:05:15.697 00:05:15.697 real 0m6.414s 00:05:15.697 user 0m1.556s 00:05:15.697 sys 0m2.490s 00:05:15.697 01:26:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:15.697 01:26:28 -- common/autotest_common.sh@10 -- # set +x 00:05:15.698 ************************************ 00:05:15.698 END TEST nvme_mount 00:05:15.698 ************************************ 00:05:15.698 01:26:28 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:15.698 01:26:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:15.698 01:26:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:15.698 01:26:28 -- common/autotest_common.sh@10 -- # set +x 00:05:15.698 ************************************ 00:05:15.698 START TEST dm_mount 00:05:15.698 ************************************ 00:05:15.698 01:26:28 -- common/autotest_common.sh@1104 -- # dm_mount 00:05:15.698 01:26:28 -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:15.698 01:26:28 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:15.698 01:26:28 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:15.698 01:26:28 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:15.698 01:26:28 -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:15.698 01:26:28 -- setup/common.sh@40 -- # local part_no=2 00:05:15.698 01:26:28 -- setup/common.sh@41 -- # local size=1073741824 00:05:15.698 01:26:28 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:15.698 01:26:28 -- setup/common.sh@44 -- # parts=() 00:05:15.698 01:26:28 -- setup/common.sh@44 -- # local parts 00:05:15.698 01:26:28 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:15.698 01:26:28 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:15.698 01:26:28 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:15.698 01:26:28 -- setup/common.sh@46 -- # (( part++ )) 00:05:15.698 01:26:28 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:15.698 01:26:28 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 
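The `partition_drive` trace above shows setup/common.sh converting the per-partition size to 512-byte sectors and computing each partition's start/end before calling sgdisk. A minimal standalone sketch of that arithmetic (variable names follow the trace, but this is an illustration, not the SPDK script itself, and it only prints the sgdisk arguments instead of running sgdisk on a real disk):

```shell
#!/usr/bin/env bash
# Sketch of the partition-boundary arithmetic traced above: each partition
# is `size` bytes, converted to 512-byte sectors; the first partition starts
# at sector 2048 and each subsequent one starts right after the previous end.
part_no=2
size=1073741824            # 1 GiB per partition, as in the log
(( size /= 512 ))          # bytes -> 512-byte sectors
part_start=0
part_end=0
args=()
for (( part = 1; part <= part_no; part++ )); do
    (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
    (( part_end = part_start + size - 1 ))
    # The real script runs: flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=...
    args+=( "--new=${part}:${part_start}:${part_end}" )
done
printf '%s\n' "${args[@]}"
```

Run against the log's 1 GiB partitions, this reproduces the two ranges seen in the trace (`1:2048:2099199` and `2:2099200:4196351`).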
00:05:15.698 01:26:28 -- setup/common.sh@46 -- # (( part++ )) 00:05:15.698 01:26:28 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:15.698 01:26:28 -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:15.698 01:26:28 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:15.698 01:26:28 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:16.634 Creating new GPT entries in memory. 00:05:16.634 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:16.634 other utilities. 00:05:16.634 01:26:29 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:16.634 01:26:29 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:16.634 01:26:29 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:16.634 01:26:29 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:16.634 01:26:29 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:18.011 Creating new GPT entries in memory. 00:05:18.011 The operation has completed successfully. 00:05:18.011 01:26:30 -- setup/common.sh@57 -- # (( part++ )) 00:05:18.011 01:26:30 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:18.011 01:26:30 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:18.011 01:26:30 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:18.011 01:26:30 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:05:18.946 The operation has completed successfully. 
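After `dmsetup create`, the dm_mount trace resolves the `/dev/mapper` symlink to its `dm-N` node and then checks that each backing partition lists that node under `/sys/class/block/<part>/holders/`. A small sketch of that path derivation, assuming the mapper link has already been resolved with `readlink -f` (no real dm device is created or touched here; the device names are taken from the log for illustration):

```shell
#!/usr/bin/env bash
# Sketch of the device-mapper holder check traced above: derive the dm-N
# name from the resolved mapper path, then build the sysfs holder paths
# that the test verifies with [[ -e ... ]].
dm_holder_paths() {
    local resolved=$1; shift        # e.g. /dev/dm-0 (readlink -f of /dev/mapper/<name>)
    local dm part
    dm=$(basename "$resolved")      # /dev/dm-0 -> dm-0
    for part in "$@"; do            # backing partitions, e.g. nvme0n1p1 nvme0n1p2
        printf '/sys/class/block/%s/holders/%s\n' "$part" "$dm"
    done
}
dm_holder_paths /dev/dm-0 nvme0n1p1 nvme0n1p2
```

On a live system the existence of each printed path confirms that the dm target really sits on top of those partitions, which is exactly what the `[[ -e /sys/class/block/.../holders/dm-0 ]]` lines in the trace assert.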
00:05:18.946 01:26:31 -- setup/common.sh@57 -- # (( part++ )) 00:05:18.946 01:26:31 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:18.946 01:26:31 -- setup/common.sh@62 -- # wait 3646874 00:05:18.946 01:26:31 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:18.946 01:26:31 -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:18.946 01:26:31 -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:18.946 01:26:31 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:18.947 01:26:31 -- setup/devices.sh@160 -- # for t in {1..5} 00:05:18.947 01:26:31 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:18.947 01:26:31 -- setup/devices.sh@161 -- # break 00:05:18.947 01:26:31 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:18.947 01:26:31 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:18.947 01:26:31 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:18.947 01:26:31 -- setup/devices.sh@166 -- # dm=dm-0 00:05:18.947 01:26:31 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:18.947 01:26:31 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:18.947 01:26:31 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:18.947 01:26:31 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:05:18.947 01:26:31 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:18.947 01:26:31 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:18.947 01:26:31 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:18.947 01:26:31 -- setup/common.sh@72 -- # mount 
/dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:18.947 01:26:31 -- setup/devices.sh@174 -- # verify 0000:88:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:18.947 01:26:31 -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:18.947 01:26:31 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:18.947 01:26:31 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:18.947 01:26:31 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:18.947 01:26:31 -- setup/devices.sh@53 -- # local found=0 00:05:18.947 01:26:31 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:18.947 01:26:31 -- setup/devices.sh@56 -- # : 00:05:18.947 01:26:31 -- setup/devices.sh@59 -- # local pci status 00:05:18.947 01:26:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:18.947 01:26:31 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:18.947 01:26:31 -- setup/devices.sh@47 -- # setup output config 00:05:18.947 01:26:31 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:18.947 01:26:31 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:19.882 01:26:32 -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.882 01:26:32 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:19.882 01:26:32 -- setup/devices.sh@63 -- # found=1 00:05:19.882 01:26:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.882 
01:26:32 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.882 01:26:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.882 01:26:32 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.882 01:26:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.882 01:26:32 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.882 01:26:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.882 01:26:32 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.882 01:26:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.882 01:26:32 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.882 01:26:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.882 01:26:32 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.882 01:26:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.882 01:26:32 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.882 01:26:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.882 01:26:32 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.882 01:26:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.882 01:26:32 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.882 01:26:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.882 01:26:32 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.882 01:26:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.882 01:26:32 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.882 01:26:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.882 01:26:32 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.882 01:26:32 -- setup/devices.sh@60 
-- # read -r pci _ _ status 00:05:19.882 01:26:32 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.882 01:26:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.882 01:26:32 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.882 01:26:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.882 01:26:32 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.882 01:26:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.882 01:26:32 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:19.882 01:26:32 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.142 01:26:33 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:20.142 01:26:33 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:05:20.142 01:26:33 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:20.142 01:26:33 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:20.142 01:26:33 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:20.142 01:26:33 -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:20.142 01:26:33 -- setup/devices.sh@184 -- # verify 0000:88:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:20.142 01:26:33 -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:20.142 01:26:33 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:20.142 01:26:33 -- setup/devices.sh@50 -- # local mount_point= 00:05:20.142 01:26:33 -- setup/devices.sh@51 -- # local test_file= 00:05:20.142 01:26:33 -- setup/devices.sh@53 -- # local found=0 00:05:20.142 01:26:33 -- setup/devices.sh@55 -- # [[ -n '' ]] 
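The `verify` loops above read `setup.sh config` output as `read -r pci _ _ status` and compare each BDF against the one allowed controller; for the allowed device, the status field must name the expected active mount (the trace's `\A\c\t\i\v\e...` escaped pattern is just a literal glob match). A standalone sketch of that matching, with a fabricated here-doc standing in for the real `setup.sh config` output:

```shell
#!/usr/bin/env bash
# Sketch of the verify-loop matching traced above: each line is parsed as
# "<bdf> <vendor> <device> <status...>"; `found` flips to 1 only when the
# allowed BDF's status names the expected active device. Input is fabricated.
allowed=0000:88:00.0
expected='mount@nvme0n1:nvme_dm_test'
found=0
while read -r pci _ _ status; do
    if [[ $pci == "$allowed" && $status == *"Active devices: "*"$expected"* ]]; then
        found=1
    fi
done <<'EOF'
0000:00:04.0 8086 0e20 ioatdma
0000:88:00.0 8086 0a54 Active devices: mount@nvme0n1:nvme_dm_test, so not binding PCI dev
0000:80:04.0 8086 0e20 ioatdma
EOF
echo "found=$found"
```

This mirrors the `(( found == 1 ))` gate in the trace: the non-NVMe I/OAT lines never match the allowed BDF, so only the `0000:88:00.0` line can satisfy the check.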
00:05:20.142 01:26:33 -- setup/devices.sh@59 -- # local pci status 00:05:20.142 01:26:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.142 01:26:33 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:20.142 01:26:33 -- setup/devices.sh@47 -- # setup output config 00:05:20.142 01:26:33 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:20.142 01:26:33 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:21.079 01:26:34 -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:21.079 01:26:34 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:21.079 01:26:34 -- setup/devices.sh@63 -- # found=1 00:05:21.079 01:26:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.079 01:26:34 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:21.079 01:26:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.079 01:26:34 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:21.079 01:26:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.079 01:26:34 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:21.079 01:26:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.079 01:26:34 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:21.079 01:26:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.079 01:26:34 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:21.079 01:26:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.079 01:26:34 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:21.079 01:26:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 
00:05:21.079 01:26:34 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:21.079 01:26:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.079 01:26:34 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:21.079 01:26:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.079 01:26:34 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:21.079 01:26:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.079 01:26:34 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:21.079 01:26:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.079 01:26:34 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:21.079 01:26:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.079 01:26:34 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:21.079 01:26:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.079 01:26:34 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:21.079 01:26:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.079 01:26:34 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:21.079 01:26:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.079 01:26:34 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:21.079 01:26:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.079 01:26:34 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:21.079 01:26:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.338 01:26:34 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:21.338 01:26:34 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:21.338 01:26:34 -- setup/devices.sh@68 -- # return 0 00:05:21.338 01:26:34 -- setup/devices.sh@187 -- # cleanup_dm 00:05:21.338 01:26:34 -- setup/devices.sh@33 -- # 
mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:21.338 01:26:34 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:21.339 01:26:34 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:21.339 01:26:34 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:21.339 01:26:34 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:21.339 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:21.339 01:26:34 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:21.339 01:26:34 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:21.339 00:05:21.339 real 0m5.740s 00:05:21.339 user 0m0.967s 00:05:21.339 sys 0m1.655s 00:05:21.339 01:26:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:21.339 01:26:34 -- common/autotest_common.sh@10 -- # set +x 00:05:21.339 ************************************ 00:05:21.339 END TEST dm_mount 00:05:21.339 ************************************ 00:05:21.339 01:26:34 -- setup/devices.sh@1 -- # cleanup 00:05:21.339 01:26:34 -- setup/devices.sh@11 -- # cleanup_nvme 00:05:21.339 01:26:34 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:21.339 01:26:34 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:21.339 01:26:34 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:21.339 01:26:34 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:21.339 01:26:34 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:21.596 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:21.596 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:05:21.596 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:21.596 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:21.596 01:26:34 -- setup/devices.sh@12 -- # cleanup_dm 00:05:21.596 
01:26:34 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:21.596 01:26:34 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:21.596 01:26:34 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:21.596 01:26:34 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:21.596 01:26:34 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:21.596 01:26:34 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:21.596 00:05:21.596 real 0m14.103s 00:05:21.596 user 0m3.206s 00:05:21.596 sys 0m5.185s 00:05:21.596 01:26:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:21.596 01:26:34 -- common/autotest_common.sh@10 -- # set +x 00:05:21.596 ************************************ 00:05:21.596 END TEST devices 00:05:21.596 ************************************ 00:05:21.854 00:05:21.854 real 0m43.060s 00:05:21.854 user 0m12.322s 00:05:21.854 sys 0m19.162s 00:05:21.854 01:26:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:21.854 01:26:34 -- common/autotest_common.sh@10 -- # set +x 00:05:21.854 ************************************ 00:05:21.854 END TEST setup.sh 00:05:21.854 ************************************ 00:05:21.854 01:26:34 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:22.792 Hugepages 00:05:22.792 node hugesize free / total 00:05:22.792 node0 1048576kB 0 / 0 00:05:22.792 node0 2048kB 2048 / 2048 00:05:22.792 node1 1048576kB 0 / 0 00:05:22.792 node1 2048kB 0 / 0 00:05:22.792 00:05:22.792 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:22.792 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:05:22.792 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:05:22.792 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:05:22.792 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:05:22.792 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:05:22.792 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:05:22.792 
I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:05:22.792 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:05:22.792 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:05:22.792 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:05:22.792 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:05:22.792 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:05:22.792 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:05:22.792 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:05:22.792 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:05:22.792 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:05:22.792 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:05:22.792 01:26:35 -- spdk/autotest.sh@141 -- # uname -s 00:05:22.792 01:26:35 -- spdk/autotest.sh@141 -- # [[ Linux == Linux ]] 00:05:22.792 01:26:35 -- spdk/autotest.sh@143 -- # nvme_namespace_revert 00:05:22.792 01:26:35 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:24.167 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:24.167 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:24.167 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:24.167 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:24.167 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:24.168 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:24.168 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:24.168 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:24.168 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:24.168 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:24.168 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:24.168 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:24.168 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:24.168 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:24.168 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:24.168 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:25.147 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:25.147 01:26:38 -- common/autotest_common.sh@1517 
-- # sleep 1 00:05:26.086 01:26:39 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:26.086 01:26:39 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:26.086 01:26:39 -- common/autotest_common.sh@1519 -- # bdfs=($(get_nvme_bdfs)) 00:05:26.086 01:26:39 -- common/autotest_common.sh@1519 -- # get_nvme_bdfs 00:05:26.086 01:26:39 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:26.086 01:26:39 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:26.086 01:26:39 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:26.086 01:26:39 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:26.086 01:26:39 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:26.086 01:26:39 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:26.086 01:26:39 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:05:26.086 01:26:39 -- common/autotest_common.sh@1521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:27.461 Waiting for block devices as requested 00:05:27.461 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:05:27.461 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:27.720 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:27.720 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:27.720 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:27.720 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:27.978 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:27.978 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:27.978 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:27.978 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:28.236 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:28.236 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:28.236 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:28.495 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:28.495 
0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:28.495 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:28.495 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:28.753 01:26:41 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:05:28.753 01:26:41 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:05:28.753 01:26:41 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:05:28.753 01:26:41 -- common/autotest_common.sh@1487 -- # grep 0000:88:00.0/nvme/nvme 00:05:28.753 01:26:41 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:05:28.753 01:26:41 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:05:28.753 01:26:41 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:05:28.753 01:26:41 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:28.753 01:26:41 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme0 00:05:28.753 01:26:41 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme0 ]] 00:05:28.753 01:26:41 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme0 00:05:28.753 01:26:41 -- common/autotest_common.sh@1530 -- # grep oacs 00:05:28.753 01:26:41 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:05:28.753 01:26:41 -- common/autotest_common.sh@1530 -- # oacs=' 0xf' 00:05:28.753 01:26:41 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:05:28.753 01:26:41 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:05:28.753 01:26:41 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme0 00:05:28.753 01:26:41 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:05:28.753 01:26:41 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:05:28.753 01:26:41 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:05:28.753 01:26:41 -- common/autotest_common.sh@1540 -- # [[ 
0 -eq 0 ]] 00:05:28.753 01:26:41 -- common/autotest_common.sh@1542 -- # continue 00:05:28.753 01:26:41 -- spdk/autotest.sh@146 -- # timing_exit pre_cleanup 00:05:28.753 01:26:41 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:28.753 01:26:41 -- common/autotest_common.sh@10 -- # set +x 00:05:28.753 01:26:41 -- spdk/autotest.sh@149 -- # timing_enter afterboot 00:05:28.753 01:26:41 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:28.753 01:26:41 -- common/autotest_common.sh@10 -- # set +x 00:05:28.753 01:26:41 -- spdk/autotest.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:30.126 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:30.126 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:30.126 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:30.126 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:30.126 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:30.126 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:30.126 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:30.126 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:30.126 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:30.126 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:30.126 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:30.126 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:30.126 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:30.126 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:30.126 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:30.126 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:31.062 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:31.062 01:26:43 -- spdk/autotest.sh@151 -- # timing_exit afterboot 00:05:31.062 01:26:43 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:31.062 01:26:43 -- common/autotest_common.sh@10 -- # set +x 00:05:31.062 01:26:44 -- spdk/autotest.sh@155 -- # opal_revert_cleanup 00:05:31.062 01:26:44 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 
00:05:31.062 01:26:44 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:31.062 01:26:44 -- common/autotest_common.sh@1562 -- # bdfs=() 00:05:31.062 01:26:44 -- common/autotest_common.sh@1562 -- # local bdfs 00:05:31.062 01:26:44 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:31.062 01:26:44 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:31.062 01:26:44 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:31.062 01:26:44 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:31.062 01:26:44 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:31.062 01:26:44 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:31.062 01:26:44 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:31.062 01:26:44 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:05:31.062 01:26:44 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:05:31.062 01:26:44 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:05:31.062 01:26:44 -- common/autotest_common.sh@1565 -- # device=0x0a54 00:05:31.062 01:26:44 -- common/autotest_common.sh@1566 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:31.062 01:26:44 -- common/autotest_common.sh@1567 -- # bdfs+=($bdf) 00:05:31.062 01:26:44 -- common/autotest_common.sh@1571 -- # printf '%s\n' 0000:88:00.0 00:05:31.062 01:26:44 -- common/autotest_common.sh@1577 -- # [[ -z 0000:88:00.0 ]] 00:05:31.062 01:26:44 -- common/autotest_common.sh@1582 -- # spdk_tgt_pid=3652176 00:05:31.062 01:26:44 -- common/autotest_common.sh@1581 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:31.062 01:26:44 -- common/autotest_common.sh@1583 -- # waitforlisten 3652176 00:05:31.062 01:26:44 -- common/autotest_common.sh@819 -- # '[' -z 3652176 ']' 00:05:31.062 01:26:44 -- 
common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.062 01:26:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:31.062 01:26:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:31.062 01:26:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:31.062 01:26:44 -- common/autotest_common.sh@10 -- # set +x 00:05:31.062 [2024-07-23 01:26:44.149024] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:31.062 [2024-07-23 01:26:44.149126] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3652176 ] 00:05:31.320 EAL: No free 2048 kB hugepages reported on node 1 00:05:31.320 [2024-07-23 01:26:44.212724] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.320 [2024-07-23 01:26:44.300045] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:31.320 [2024-07-23 01:26:44.300239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.254 01:26:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:32.254 01:26:45 -- common/autotest_common.sh@852 -- # return 0 00:05:32.254 01:26:45 -- common/autotest_common.sh@1585 -- # bdf_id=0 00:05:32.254 01:26:45 -- common/autotest_common.sh@1586 -- # for bdf in "${bdfs[@]}" 00:05:32.254 01:26:45 -- common/autotest_common.sh@1587 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:05:35.537 nvme0n1 00:05:35.537 01:26:48 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_nvme_opal_revert -b nvme0 -p test 00:05:35.537 [2024-07-23 01:26:48.385076] nvme_opal.c:2059:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:05:35.537 [2024-07-23 01:26:48.385134] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:05:35.537 request: 00:05:35.537 { 00:05:35.537 "nvme_ctrlr_name": "nvme0", 00:05:35.537 "password": "test", 00:05:35.537 "method": "bdev_nvme_opal_revert", 00:05:35.537 "req_id": 1 00:05:35.537 } 00:05:35.537 Got JSON-RPC error response 00:05:35.537 response: 00:05:35.537 { 00:05:35.537 "code": -32603, 00:05:35.537 "message": "Internal error" 00:05:35.537 } 00:05:35.537 01:26:48 -- common/autotest_common.sh@1589 -- # true 00:05:35.537 01:26:48 -- common/autotest_common.sh@1590 -- # (( ++bdf_id )) 00:05:35.537 01:26:48 -- common/autotest_common.sh@1593 -- # killprocess 3652176 00:05:35.537 01:26:48 -- common/autotest_common.sh@926 -- # '[' -z 3652176 ']' 00:05:35.537 01:26:48 -- common/autotest_common.sh@930 -- # kill -0 3652176 00:05:35.537 01:26:48 -- common/autotest_common.sh@931 -- # uname 00:05:35.537 01:26:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:35.537 01:26:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3652176 00:05:35.537 01:26:48 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:35.537 01:26:48 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:35.537 01:26:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3652176' 00:05:35.537 killing process with pid 3652176 00:05:35.537 01:26:48 -- common/autotest_common.sh@945 -- # kill 3652176 00:05:35.537 01:26:48 -- common/autotest_common.sh@950 -- # wait 3652176 00:05:37.439 01:26:50 -- spdk/autotest.sh@161 -- # '[' 0 -eq 1 ']' 00:05:37.439 01:26:50 -- spdk/autotest.sh@165 -- # '[' 1 -eq 1 ']' 00:05:37.439 01:26:50 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:05:37.439 01:26:50 -- spdk/autotest.sh@166 -- # 
[[ 0 -eq 1 ]] 00:05:37.439 01:26:50 -- spdk/autotest.sh@173 -- # timing_enter lib 00:05:37.439 01:26:50 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:37.439 01:26:50 -- common/autotest_common.sh@10 -- # set +x 00:05:37.439 01:26:50 -- spdk/autotest.sh@175 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:37.439 01:26:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:37.439 01:26:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:37.439 01:26:50 -- common/autotest_common.sh@10 -- # set +x 00:05:37.439 ************************************ 00:05:37.439 START TEST env 00:05:37.439 ************************************ 00:05:37.439 01:26:50 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:37.439 * Looking for test storage... 00:05:37.439 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:37.439 01:26:50 -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:37.439 01:26:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:37.439 01:26:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:37.439 01:26:50 -- common/autotest_common.sh@10 -- # set +x 00:05:37.439 ************************************ 00:05:37.439 START TEST env_memory 00:05:37.439 ************************************ 00:05:37.439 01:26:50 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:37.439 00:05:37.439 00:05:37.439 CUnit - A unit testing framework for C - Version 2.1-3 00:05:37.439 http://cunit.sourceforge.net/ 00:05:37.439 00:05:37.439 00:05:37.439 Suite: memory 00:05:37.439 Test: alloc and free memory map ...[2024-07-23 01:26:50.285131] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 
00:05:37.439 passed 00:05:37.439 Test: mem map translation ...[2024-07-23 01:26:50.305086] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:37.439 [2024-07-23 01:26:50.305118] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:37.439 [2024-07-23 01:26:50.305160] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:37.439 [2024-07-23 01:26:50.305182] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:37.439 passed 00:05:37.439 Test: mem map registration ...[2024-07-23 01:26:50.345887] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:37.439 [2024-07-23 01:26:50.345909] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:37.439 passed 00:05:37.439 Test: mem map adjacent registrations ...passed 00:05:37.439 00:05:37.439 Run Summary: Type Total Ran Passed Failed Inactive 00:05:37.439 suites 1 1 n/a 0 0 00:05:37.439 tests 4 4 4 0 0 00:05:37.439 asserts 152 152 152 0 n/a 00:05:37.439 00:05:37.439 Elapsed time = 0.139 seconds 00:05:37.439 00:05:37.439 real 0m0.145s 00:05:37.439 user 0m0.136s 00:05:37.439 sys 0m0.009s 00:05:37.439 01:26:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:37.439 01:26:50 -- common/autotest_common.sh@10 -- # set +x 00:05:37.439 ************************************ 00:05:37.439 END TEST 
env_memory 00:05:37.439 ************************************ 00:05:37.439 01:26:50 -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:37.439 01:26:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:37.439 01:26:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:37.439 01:26:50 -- common/autotest_common.sh@10 -- # set +x 00:05:37.439 ************************************ 00:05:37.439 START TEST env_vtophys 00:05:37.439 ************************************ 00:05:37.439 01:26:50 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:37.439 EAL: lib.eal log level changed from notice to debug 00:05:37.439 EAL: Detected lcore 0 as core 0 on socket 0 00:05:37.439 EAL: Detected lcore 1 as core 1 on socket 0 00:05:37.439 EAL: Detected lcore 2 as core 2 on socket 0 00:05:37.439 EAL: Detected lcore 3 as core 3 on socket 0 00:05:37.439 EAL: Detected lcore 4 as core 4 on socket 0 00:05:37.439 EAL: Detected lcore 5 as core 5 on socket 0 00:05:37.439 EAL: Detected lcore 6 as core 8 on socket 0 00:05:37.439 EAL: Detected lcore 7 as core 9 on socket 0 00:05:37.439 EAL: Detected lcore 8 as core 10 on socket 0 00:05:37.439 EAL: Detected lcore 9 as core 11 on socket 0 00:05:37.439 EAL: Detected lcore 10 as core 12 on socket 0 00:05:37.439 EAL: Detected lcore 11 as core 13 on socket 0 00:05:37.439 EAL: Detected lcore 12 as core 0 on socket 1 00:05:37.439 EAL: Detected lcore 13 as core 1 on socket 1 00:05:37.439 EAL: Detected lcore 14 as core 2 on socket 1 00:05:37.439 EAL: Detected lcore 15 as core 3 on socket 1 00:05:37.439 EAL: Detected lcore 16 as core 4 on socket 1 00:05:37.439 EAL: Detected lcore 17 as core 5 on socket 1 00:05:37.439 EAL: Detected lcore 18 as core 8 on socket 1 00:05:37.439 EAL: Detected lcore 19 as core 9 on socket 1 00:05:37.439 EAL: Detected lcore 20 as core 10 on socket 1 00:05:37.439 EAL: Detected 
lcore 21 as core 11 on socket 1 00:05:37.439 EAL: Detected lcore 22 as core 12 on socket 1 00:05:37.439 EAL: Detected lcore 23 as core 13 on socket 1 00:05:37.439 EAL: Detected lcore 24 as core 0 on socket 0 00:05:37.439 EAL: Detected lcore 25 as core 1 on socket 0 00:05:37.439 EAL: Detected lcore 26 as core 2 on socket 0 00:05:37.440 EAL: Detected lcore 27 as core 3 on socket 0 00:05:37.440 EAL: Detected lcore 28 as core 4 on socket 0 00:05:37.440 EAL: Detected lcore 29 as core 5 on socket 0 00:05:37.440 EAL: Detected lcore 30 as core 8 on socket 0 00:05:37.440 EAL: Detected lcore 31 as core 9 on socket 0 00:05:37.440 EAL: Detected lcore 32 as core 10 on socket 0 00:05:37.440 EAL: Detected lcore 33 as core 11 on socket 0 00:05:37.440 EAL: Detected lcore 34 as core 12 on socket 0 00:05:37.440 EAL: Detected lcore 35 as core 13 on socket 0 00:05:37.440 EAL: Detected lcore 36 as core 0 on socket 1 00:05:37.440 EAL: Detected lcore 37 as core 1 on socket 1 00:05:37.440 EAL: Detected lcore 38 as core 2 on socket 1 00:05:37.440 EAL: Detected lcore 39 as core 3 on socket 1 00:05:37.440 EAL: Detected lcore 40 as core 4 on socket 1 00:05:37.440 EAL: Detected lcore 41 as core 5 on socket 1 00:05:37.440 EAL: Detected lcore 42 as core 8 on socket 1 00:05:37.440 EAL: Detected lcore 43 as core 9 on socket 1 00:05:37.440 EAL: Detected lcore 44 as core 10 on socket 1 00:05:37.440 EAL: Detected lcore 45 as core 11 on socket 1 00:05:37.440 EAL: Detected lcore 46 as core 12 on socket 1 00:05:37.440 EAL: Detected lcore 47 as core 13 on socket 1 00:05:37.440 EAL: Maximum logical cores by configuration: 128 00:05:37.440 EAL: Detected CPU lcores: 48 00:05:37.440 EAL: Detected NUMA nodes: 2 00:05:37.440 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:37.440 EAL: Detected shared linkage of DPDK 00:05:37.440 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:05:37.440 EAL: open shared lib 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:05:37.440 EAL: Registered [vdev] bus. 00:05:37.440 EAL: bus.vdev log level changed from disabled to notice 00:05:37.440 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:05:37.440 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:05:37.440 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:37.440 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:37.440 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:05:37.440 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:05:37.440 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:05:37.440 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:05:37.440 EAL: No shared files mode enabled, IPC will be disabled 00:05:37.440 EAL: No shared files mode enabled, IPC is disabled 00:05:37.440 EAL: Bus pci wants IOVA as 'DC' 00:05:37.440 EAL: Bus vdev wants IOVA as 'DC' 00:05:37.440 EAL: Buses did not request a specific IOVA mode. 00:05:37.440 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:37.440 EAL: Selected IOVA mode 'VA' 00:05:37.440 EAL: No free 2048 kB hugepages reported on node 1 00:05:37.440 EAL: Probing VFIO support... 
00:05:37.440 EAL: IOMMU type 1 (Type 1) is supported 00:05:37.440 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:37.440 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:37.440 EAL: VFIO support initialized 00:05:37.440 EAL: Ask a virtual area of 0x2e000 bytes 00:05:37.440 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:37.440 EAL: Setting up physically contiguous memory... 00:05:37.440 EAL: Setting maximum number of open files to 524288 00:05:37.440 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:37.440 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:37.440 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:37.440 EAL: Ask a virtual area of 0x61000 bytes 00:05:37.440 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:37.440 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:37.440 EAL: Ask a virtual area of 0x400000000 bytes 00:05:37.440 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:37.440 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:37.440 EAL: Ask a virtual area of 0x61000 bytes 00:05:37.440 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:37.440 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:37.440 EAL: Ask a virtual area of 0x400000000 bytes 00:05:37.440 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:37.440 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:37.440 EAL: Ask a virtual area of 0x61000 bytes 00:05:37.440 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:37.440 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:37.440 EAL: Ask a virtual area of 0x400000000 bytes 00:05:37.440 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:37.440 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:37.440 EAL: Ask a virtual area of 0x61000 bytes 00:05:37.440 EAL: 
Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:37.440 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:37.440 EAL: Ask a virtual area of 0x400000000 bytes 00:05:37.440 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:37.440 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:37.440 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:37.440 EAL: Ask a virtual area of 0x61000 bytes 00:05:37.440 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:37.440 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:37.440 EAL: Ask a virtual area of 0x400000000 bytes 00:05:37.440 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:37.440 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:37.440 EAL: Ask a virtual area of 0x61000 bytes 00:05:37.440 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:37.440 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:37.440 EAL: Ask a virtual area of 0x400000000 bytes 00:05:37.440 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:37.440 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:37.440 EAL: Ask a virtual area of 0x61000 bytes 00:05:37.440 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:37.440 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:37.440 EAL: Ask a virtual area of 0x400000000 bytes 00:05:37.440 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:37.440 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:37.440 EAL: Ask a virtual area of 0x61000 bytes 00:05:37.440 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:37.440 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:37.440 EAL: Ask a virtual area of 0x400000000 bytes 00:05:37.440 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 
00:05:37.440 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:37.440 EAL: Hugepages will be freed exactly as allocated. 00:05:37.440 EAL: No shared files mode enabled, IPC is disabled 00:05:37.440 EAL: No shared files mode enabled, IPC is disabled 00:05:37.440 EAL: TSC frequency is ~2700000 KHz 00:05:37.440 EAL: Main lcore 0 is ready (tid=7f866d77da00;cpuset=[0]) 00:05:37.440 EAL: Trying to obtain current memory policy. 00:05:37.440 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.440 EAL: Restoring previous memory policy: 0 00:05:37.440 EAL: request: mp_malloc_sync 00:05:37.440 EAL: No shared files mode enabled, IPC is disabled 00:05:37.440 EAL: Heap on socket 0 was expanded by 2MB 00:05:37.440 EAL: No shared files mode enabled, IPC is disabled 00:05:37.440 EAL: No shared files mode enabled, IPC is disabled 00:05:37.440 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:37.440 EAL: Mem event callback 'spdk:(nil)' registered 00:05:37.440 00:05:37.440 00:05:37.440 CUnit - A unit testing framework for C - Version 2.1-3 00:05:37.440 http://cunit.sourceforge.net/ 00:05:37.440 00:05:37.440 00:05:37.440 Suite: components_suite 00:05:37.440 Test: vtophys_malloc_test ...passed 00:05:37.440 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:37.440 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.440 EAL: Restoring previous memory policy: 4 00:05:37.440 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.440 EAL: request: mp_malloc_sync 00:05:37.440 EAL: No shared files mode enabled, IPC is disabled 00:05:37.440 EAL: Heap on socket 0 was expanded by 4MB 00:05:37.440 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.440 EAL: request: mp_malloc_sync 00:05:37.440 EAL: No shared files mode enabled, IPC is disabled 00:05:37.440 EAL: Heap on socket 0 was shrunk by 4MB 00:05:37.440 EAL: Trying to obtain current memory policy. 
00:05:37.440 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.440 EAL: Restoring previous memory policy: 4 00:05:37.440 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.440 EAL: request: mp_malloc_sync 00:05:37.440 EAL: No shared files mode enabled, IPC is disabled 00:05:37.441 EAL: Heap on socket 0 was expanded by 6MB 00:05:37.441 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.441 EAL: request: mp_malloc_sync 00:05:37.441 EAL: No shared files mode enabled, IPC is disabled 00:05:37.441 EAL: Heap on socket 0 was shrunk by 6MB 00:05:37.441 EAL: Trying to obtain current memory policy. 00:05:37.441 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.441 EAL: Restoring previous memory policy: 4 00:05:37.441 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.441 EAL: request: mp_malloc_sync 00:05:37.441 EAL: No shared files mode enabled, IPC is disabled 00:05:37.441 EAL: Heap on socket 0 was expanded by 10MB 00:05:37.441 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.441 EAL: request: mp_malloc_sync 00:05:37.441 EAL: No shared files mode enabled, IPC is disabled 00:05:37.441 EAL: Heap on socket 0 was shrunk by 10MB 00:05:37.441 EAL: Trying to obtain current memory policy. 00:05:37.441 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.441 EAL: Restoring previous memory policy: 4 00:05:37.441 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.441 EAL: request: mp_malloc_sync 00:05:37.441 EAL: No shared files mode enabled, IPC is disabled 00:05:37.441 EAL: Heap on socket 0 was expanded by 18MB 00:05:37.441 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.441 EAL: request: mp_malloc_sync 00:05:37.441 EAL: No shared files mode enabled, IPC is disabled 00:05:37.441 EAL: Heap on socket 0 was shrunk by 18MB 00:05:37.441 EAL: Trying to obtain current memory policy. 
00:05:37.441 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.441 EAL: Restoring previous memory policy: 4 00:05:37.441 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.441 EAL: request: mp_malloc_sync 00:05:37.441 EAL: No shared files mode enabled, IPC is disabled 00:05:37.441 EAL: Heap on socket 0 was expanded by 34MB 00:05:37.441 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.441 EAL: request: mp_malloc_sync 00:05:37.441 EAL: No shared files mode enabled, IPC is disabled 00:05:37.441 EAL: Heap on socket 0 was shrunk by 34MB 00:05:37.441 EAL: Trying to obtain current memory policy. 00:05:37.441 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.699 EAL: Restoring previous memory policy: 4 00:05:37.699 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.699 EAL: request: mp_malloc_sync 00:05:37.699 EAL: No shared files mode enabled, IPC is disabled 00:05:37.699 EAL: Heap on socket 0 was expanded by 66MB 00:05:37.699 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.699 EAL: request: mp_malloc_sync 00:05:37.699 EAL: No shared files mode enabled, IPC is disabled 00:05:37.699 EAL: Heap on socket 0 was shrunk by 66MB 00:05:37.699 EAL: Trying to obtain current memory policy. 00:05:37.699 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.699 EAL: Restoring previous memory policy: 4 00:05:37.699 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.699 EAL: request: mp_malloc_sync 00:05:37.699 EAL: No shared files mode enabled, IPC is disabled 00:05:37.699 EAL: Heap on socket 0 was expanded by 130MB 00:05:37.699 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.699 EAL: request: mp_malloc_sync 00:05:37.699 EAL: No shared files mode enabled, IPC is disabled 00:05:37.699 EAL: Heap on socket 0 was shrunk by 130MB 00:05:37.699 EAL: Trying to obtain current memory policy. 
00:05:37.699 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.699 EAL: Restoring previous memory policy: 4 00:05:37.699 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.699 EAL: request: mp_malloc_sync 00:05:37.699 EAL: No shared files mode enabled, IPC is disabled 00:05:37.699 EAL: Heap on socket 0 was expanded by 258MB 00:05:37.699 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.957 EAL: request: mp_malloc_sync 00:05:37.957 EAL: No shared files mode enabled, IPC is disabled 00:05:37.957 EAL: Heap on socket 0 was shrunk by 258MB 00:05:37.957 EAL: Trying to obtain current memory policy. 00:05:37.957 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:37.957 EAL: Restoring previous memory policy: 4 00:05:37.957 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.957 EAL: request: mp_malloc_sync 00:05:37.957 EAL: No shared files mode enabled, IPC is disabled 00:05:37.957 EAL: Heap on socket 0 was expanded by 514MB 00:05:38.216 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.216 EAL: request: mp_malloc_sync 00:05:38.216 EAL: No shared files mode enabled, IPC is disabled 00:05:38.216 EAL: Heap on socket 0 was shrunk by 514MB 00:05:38.216 EAL: Trying to obtain current memory policy. 
00:05:38.216 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:38.474 EAL: Restoring previous memory policy: 4 00:05:38.474 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.474 EAL: request: mp_malloc_sync 00:05:38.474 EAL: No shared files mode enabled, IPC is disabled 00:05:38.474 EAL: Heap on socket 0 was expanded by 1026MB 00:05:38.732 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.992 EAL: request: mp_malloc_sync 00:05:38.992 EAL: No shared files mode enabled, IPC is disabled 00:05:38.992 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:38.992 passed 00:05:38.992 00:05:38.992 Run Summary: Type Total Ran Passed Failed Inactive 00:05:38.992 suites 1 1 n/a 0 0 00:05:38.992 tests 2 2 2 0 0 00:05:38.992 asserts 497 497 497 0 n/a 00:05:38.992 00:05:38.992 Elapsed time = 1.359 seconds 00:05:38.992 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.992 EAL: request: mp_malloc_sync 00:05:38.992 EAL: No shared files mode enabled, IPC is disabled 00:05:38.992 EAL: Heap on socket 0 was shrunk by 2MB 00:05:38.992 EAL: No shared files mode enabled, IPC is disabled 00:05:38.992 EAL: No shared files mode enabled, IPC is disabled 00:05:38.992 EAL: No shared files mode enabled, IPC is disabled 00:05:38.992 00:05:38.992 real 0m1.472s 00:05:38.992 user 0m0.840s 00:05:38.992 sys 0m0.602s 00:05:38.993 01:26:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.993 01:26:51 -- common/autotest_common.sh@10 -- # set +x 00:05:38.993 ************************************ 00:05:38.993 END TEST env_vtophys 00:05:38.993 ************************************ 00:05:38.993 01:26:51 -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:38.993 01:26:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:38.993 01:26:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:38.993 01:26:51 -- common/autotest_common.sh@10 -- # set +x 00:05:38.993 ************************************ 00:05:38.993 
START TEST env_pci 00:05:38.993 ************************************ 00:05:38.993 01:26:51 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:38.993 00:05:38.993 00:05:38.993 CUnit - A unit testing framework for C - Version 2.1-3 00:05:38.993 http://cunit.sourceforge.net/ 00:05:38.993 00:05:38.993 00:05:38.993 Suite: pci 00:05:38.993 Test: pci_hook ...[2024-07-23 01:26:51.931800] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3653211 has claimed it 00:05:38.993 EAL: Cannot find device (10000:00:01.0) 00:05:38.993 EAL: Failed to attach device on primary process 00:05:38.993 passed 00:05:38.993 00:05:38.993 Run Summary: Type Total Ran Passed Failed Inactive 00:05:38.993 suites 1 1 n/a 0 0 00:05:38.993 tests 1 1 1 0 0 00:05:38.993 asserts 25 25 25 0 n/a 00:05:38.993 00:05:38.993 Elapsed time = 0.022 seconds 00:05:38.993 00:05:38.993 real 0m0.035s 00:05:38.993 user 0m0.012s 00:05:38.993 sys 0m0.022s 00:05:38.993 01:26:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.993 01:26:51 -- common/autotest_common.sh@10 -- # set +x 00:05:38.993 ************************************ 00:05:38.993 END TEST env_pci 00:05:38.993 ************************************ 00:05:38.993 01:26:51 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:38.993 01:26:51 -- env/env.sh@15 -- # uname 00:05:38.993 01:26:51 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:38.993 01:26:51 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:38.993 01:26:51 -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:38.993 01:26:51 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:05:38.993 01:26:51 -- common/autotest_common.sh@1083 -- # 
xtrace_disable 00:05:38.993 01:26:51 -- common/autotest_common.sh@10 -- # set +x 00:05:38.993 ************************************ 00:05:38.993 START TEST env_dpdk_post_init 00:05:38.993 ************************************ 00:05:38.993 01:26:51 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:38.993 EAL: Detected CPU lcores: 48 00:05:38.993 EAL: Detected NUMA nodes: 2 00:05:38.993 EAL: Detected shared linkage of DPDK 00:05:38.993 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:38.993 EAL: Selected IOVA mode 'VA' 00:05:38.993 EAL: No free 2048 kB hugepages reported on node 1 00:05:38.993 EAL: VFIO support initialized 00:05:38.993 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:38.993 EAL: Using IOMMU type 1 (Type 1) 00:05:38.993 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:05:39.253 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:05:39.253 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:05:39.253 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:05:39.253 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:05:39.253 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:05:39.253 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:05:39.253 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:05:39.253 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:05:39.253 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:05:39.253 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:05:39.253 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:05:39.253 EAL: Probe PCI 
driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:05:39.253 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:05:39.253 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:05:39.253 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:05:40.189 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 00:05:43.501 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:05:43.501 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:05:43.501 Starting DPDK initialization... 00:05:43.501 Starting SPDK post initialization... 00:05:43.501 SPDK NVMe probe 00:05:43.501 Attaching to 0000:88:00.0 00:05:43.501 Attached to 0000:88:00.0 00:05:43.501 Cleaning up... 00:05:43.501 00:05:43.501 real 0m4.404s 00:05:43.501 user 0m3.264s 00:05:43.501 sys 0m0.195s 00:05:43.501 01:26:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.501 01:26:56 -- common/autotest_common.sh@10 -- # set +x 00:05:43.501 ************************************ 00:05:43.501 END TEST env_dpdk_post_init 00:05:43.501 ************************************ 00:05:43.501 01:26:56 -- env/env.sh@26 -- # uname 00:05:43.501 01:26:56 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:43.501 01:26:56 -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:43.501 01:26:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:43.501 01:26:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:43.501 01:26:56 -- common/autotest_common.sh@10 -- # set +x 00:05:43.501 ************************************ 00:05:43.501 START TEST env_mem_callbacks 00:05:43.501 ************************************ 00:05:43.501 01:26:56 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:43.501 EAL: Detected CPU lcores: 48 
00:05:43.501 EAL: Detected NUMA nodes: 2 00:05:43.501 EAL: Detected shared linkage of DPDK 00:05:43.501 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:43.501 EAL: Selected IOVA mode 'VA' 00:05:43.501 EAL: No free 2048 kB hugepages reported on node 1 00:05:43.501 EAL: VFIO support initialized 00:05:43.501 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:43.501 00:05:43.501 00:05:43.501 CUnit - A unit testing framework for C - Version 2.1-3 00:05:43.501 http://cunit.sourceforge.net/ 00:05:43.501 00:05:43.501 00:05:43.501 Suite: memory 00:05:43.501 Test: test ... 00:05:43.501 register 0x200000200000 2097152 00:05:43.501 malloc 3145728 00:05:43.501 register 0x200000400000 4194304 00:05:43.501 buf 0x200000500000 len 3145728 PASSED 00:05:43.501 malloc 64 00:05:43.501 buf 0x2000004fff40 len 64 PASSED 00:05:43.501 malloc 4194304 00:05:43.501 register 0x200000800000 6291456 00:05:43.501 buf 0x200000a00000 len 4194304 PASSED 00:05:43.501 free 0x200000500000 3145728 00:05:43.501 free 0x2000004fff40 64 00:05:43.501 unregister 0x200000400000 4194304 PASSED 00:05:43.501 free 0x200000a00000 4194304 00:05:43.501 unregister 0x200000800000 6291456 PASSED 00:05:43.501 malloc 8388608 00:05:43.501 register 0x200000400000 10485760 00:05:43.501 buf 0x200000600000 len 8388608 PASSED 00:05:43.501 free 0x200000600000 8388608 00:05:43.501 unregister 0x200000400000 10485760 PASSED 00:05:43.501 passed 00:05:43.501 00:05:43.501 Run Summary: Type Total Ran Passed Failed Inactive 00:05:43.501 suites 1 1 n/a 0 0 00:05:43.501 tests 1 1 1 0 0 00:05:43.501 asserts 15 15 15 0 n/a 00:05:43.501 00:05:43.501 Elapsed time = 0.005 seconds 00:05:43.501 00:05:43.501 real 0m0.052s 00:05:43.501 user 0m0.011s 00:05:43.501 sys 0m0.040s 00:05:43.501 01:26:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.501 01:26:56 -- common/autotest_common.sh@10 -- # set +x 00:05:43.501 ************************************ 00:05:43.501 END TEST env_mem_callbacks 00:05:43.501 
************************************ 00:05:43.501 00:05:43.501 real 0m6.286s 00:05:43.501 user 0m4.346s 00:05:43.501 sys 0m0.990s 00:05:43.501 01:26:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.501 01:26:56 -- common/autotest_common.sh@10 -- # set +x 00:05:43.501 ************************************ 00:05:43.501 END TEST env 00:05:43.501 ************************************ 00:05:43.501 01:26:56 -- spdk/autotest.sh@176 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:43.501 01:26:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:43.501 01:26:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:43.501 01:26:56 -- common/autotest_common.sh@10 -- # set +x 00:05:43.501 ************************************ 00:05:43.501 START TEST rpc 00:05:43.501 ************************************ 00:05:43.501 01:26:56 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:43.501 * Looking for test storage... 00:05:43.501 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:43.501 01:26:56 -- rpc/rpc.sh@65 -- # spdk_pid=3653875 00:05:43.501 01:26:56 -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:43.501 01:26:56 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:43.501 01:26:56 -- rpc/rpc.sh@67 -- # waitforlisten 3653875 00:05:43.501 01:26:56 -- common/autotest_common.sh@819 -- # '[' -z 3653875 ']' 00:05:43.501 01:26:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.501 01:26:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:43.501 01:26:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:43.501 01:26:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:43.501 01:26:56 -- common/autotest_common.sh@10 -- # set +x 00:05:43.766 [2024-07-23 01:26:56.605962] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:43.766 [2024-07-23 01:26:56.606047] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3653875 ] 00:05:43.766 EAL: No free 2048 kB hugepages reported on node 1 00:05:43.766 [2024-07-23 01:26:56.664186] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.766 [2024-07-23 01:26:56.745646] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:43.766 [2024-07-23 01:26:56.745798] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:43.766 [2024-07-23 01:26:56.745815] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3653875' to capture a snapshot of events at runtime. 00:05:43.766 [2024-07-23 01:26:56.745828] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3653875 for offline analysis/debug. 
00:05:43.766 [2024-07-23 01:26:56.745857] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.701 01:26:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:44.701 01:26:57 -- common/autotest_common.sh@852 -- # return 0 00:05:44.701 01:26:57 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:44.701 01:26:57 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:44.701 01:26:57 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:44.701 01:26:57 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:44.701 01:26:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:44.701 01:26:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:44.701 01:26:57 -- common/autotest_common.sh@10 -- # set +x 00:05:44.701 ************************************ 00:05:44.701 START TEST rpc_integrity 00:05:44.701 ************************************ 00:05:44.701 01:26:57 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:05:44.701 01:26:57 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:44.701 01:26:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:44.701 01:26:57 -- common/autotest_common.sh@10 -- # set +x 00:05:44.701 01:26:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:44.701 01:26:57 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:44.701 01:26:57 -- rpc/rpc.sh@13 -- # jq length 00:05:44.701 01:26:57 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 
00:05:44.701 01:26:57 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:44.701 01:26:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:44.701 01:26:57 -- common/autotest_common.sh@10 -- # set +x 00:05:44.701 01:26:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:44.701 01:26:57 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:44.701 01:26:57 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:44.701 01:26:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:44.701 01:26:57 -- common/autotest_common.sh@10 -- # set +x 00:05:44.701 01:26:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:44.701 01:26:57 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:44.701 { 00:05:44.701 "name": "Malloc0", 00:05:44.701 "aliases": [ 00:05:44.701 "489f2ff8-1543-43ec-93a3-38315e9d3d95" 00:05:44.701 ], 00:05:44.701 "product_name": "Malloc disk", 00:05:44.701 "block_size": 512, 00:05:44.701 "num_blocks": 16384, 00:05:44.701 "uuid": "489f2ff8-1543-43ec-93a3-38315e9d3d95", 00:05:44.701 "assigned_rate_limits": { 00:05:44.701 "rw_ios_per_sec": 0, 00:05:44.701 "rw_mbytes_per_sec": 0, 00:05:44.701 "r_mbytes_per_sec": 0, 00:05:44.701 "w_mbytes_per_sec": 0 00:05:44.701 }, 00:05:44.701 "claimed": false, 00:05:44.701 "zoned": false, 00:05:44.701 "supported_io_types": { 00:05:44.701 "read": true, 00:05:44.701 "write": true, 00:05:44.701 "unmap": true, 00:05:44.701 "write_zeroes": true, 00:05:44.701 "flush": true, 00:05:44.701 "reset": true, 00:05:44.701 "compare": false, 00:05:44.701 "compare_and_write": false, 00:05:44.701 "abort": true, 00:05:44.701 "nvme_admin": false, 00:05:44.701 "nvme_io": false 00:05:44.701 }, 00:05:44.701 "memory_domains": [ 00:05:44.701 { 00:05:44.701 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:44.701 "dma_device_type": 2 00:05:44.701 } 00:05:44.701 ], 00:05:44.701 "driver_specific": {} 00:05:44.701 } 00:05:44.701 ]' 00:05:44.701 01:26:57 -- rpc/rpc.sh@17 -- # jq length 00:05:44.701 01:26:57 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 
00:05:44.701 01:26:57 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:44.701 01:26:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:44.701 01:26:57 -- common/autotest_common.sh@10 -- # set +x 00:05:44.701 [2024-07-23 01:26:57.673846] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:44.701 [2024-07-23 01:26:57.673916] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:44.701 [2024-07-23 01:26:57.673943] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x24d33b0 00:05:44.701 [2024-07-23 01:26:57.673959] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:44.701 [2024-07-23 01:26:57.675412] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:44.701 [2024-07-23 01:26:57.675441] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:44.701 Passthru0 00:05:44.701 01:26:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:44.701 01:26:57 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:44.701 01:26:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:44.701 01:26:57 -- common/autotest_common.sh@10 -- # set +x 00:05:44.701 01:26:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:44.701 01:26:57 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:44.701 { 00:05:44.701 "name": "Malloc0", 00:05:44.701 "aliases": [ 00:05:44.701 "489f2ff8-1543-43ec-93a3-38315e9d3d95" 00:05:44.701 ], 00:05:44.701 "product_name": "Malloc disk", 00:05:44.701 "block_size": 512, 00:05:44.701 "num_blocks": 16384, 00:05:44.701 "uuid": "489f2ff8-1543-43ec-93a3-38315e9d3d95", 00:05:44.701 "assigned_rate_limits": { 00:05:44.701 "rw_ios_per_sec": 0, 00:05:44.701 "rw_mbytes_per_sec": 0, 00:05:44.701 "r_mbytes_per_sec": 0, 00:05:44.701 "w_mbytes_per_sec": 0 00:05:44.701 }, 00:05:44.701 "claimed": true, 00:05:44.701 "claim_type": "exclusive_write", 00:05:44.701 "zoned": 
false, 00:05:44.701 "supported_io_types": { 00:05:44.701 "read": true, 00:05:44.701 "write": true, 00:05:44.701 "unmap": true, 00:05:44.701 "write_zeroes": true, 00:05:44.701 "flush": true, 00:05:44.701 "reset": true, 00:05:44.701 "compare": false, 00:05:44.701 "compare_and_write": false, 00:05:44.701 "abort": true, 00:05:44.701 "nvme_admin": false, 00:05:44.701 "nvme_io": false 00:05:44.701 }, 00:05:44.701 "memory_domains": [ 00:05:44.701 { 00:05:44.701 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:44.701 "dma_device_type": 2 00:05:44.701 } 00:05:44.701 ], 00:05:44.701 "driver_specific": {} 00:05:44.701 }, 00:05:44.701 { 00:05:44.701 "name": "Passthru0", 00:05:44.701 "aliases": [ 00:05:44.701 "dcc5591c-b20f-5785-82a9-1ec31762462d" 00:05:44.701 ], 00:05:44.701 "product_name": "passthru", 00:05:44.701 "block_size": 512, 00:05:44.701 "num_blocks": 16384, 00:05:44.701 "uuid": "dcc5591c-b20f-5785-82a9-1ec31762462d", 00:05:44.701 "assigned_rate_limits": { 00:05:44.701 "rw_ios_per_sec": 0, 00:05:44.701 "rw_mbytes_per_sec": 0, 00:05:44.701 "r_mbytes_per_sec": 0, 00:05:44.701 "w_mbytes_per_sec": 0 00:05:44.701 }, 00:05:44.701 "claimed": false, 00:05:44.701 "zoned": false, 00:05:44.701 "supported_io_types": { 00:05:44.701 "read": true, 00:05:44.701 "write": true, 00:05:44.701 "unmap": true, 00:05:44.701 "write_zeroes": true, 00:05:44.701 "flush": true, 00:05:44.701 "reset": true, 00:05:44.701 "compare": false, 00:05:44.701 "compare_and_write": false, 00:05:44.701 "abort": true, 00:05:44.701 "nvme_admin": false, 00:05:44.701 "nvme_io": false 00:05:44.701 }, 00:05:44.701 "memory_domains": [ 00:05:44.701 { 00:05:44.701 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:44.701 "dma_device_type": 2 00:05:44.701 } 00:05:44.701 ], 00:05:44.701 "driver_specific": { 00:05:44.701 "passthru": { 00:05:44.701 "name": "Passthru0", 00:05:44.701 "base_bdev_name": "Malloc0" 00:05:44.701 } 00:05:44.701 } 00:05:44.701 } 00:05:44.701 ]' 00:05:44.701 01:26:57 -- rpc/rpc.sh@21 -- # jq length 
00:05:44.701 01:26:57 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:44.701 01:26:57 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:44.701 01:26:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:44.701 01:26:57 -- common/autotest_common.sh@10 -- # set +x 00:05:44.701 01:26:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:44.701 01:26:57 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:44.701 01:26:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:44.701 01:26:57 -- common/autotest_common.sh@10 -- # set +x 00:05:44.701 01:26:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:44.702 01:26:57 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:44.702 01:26:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:44.702 01:26:57 -- common/autotest_common.sh@10 -- # set +x 00:05:44.702 01:26:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:44.702 01:26:57 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:44.702 01:26:57 -- rpc/rpc.sh@26 -- # jq length 00:05:44.702 01:26:57 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:44.702 00:05:44.702 real 0m0.230s 00:05:44.702 user 0m0.146s 00:05:44.702 sys 0m0.023s 00:05:44.702 01:26:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.702 01:26:57 -- common/autotest_common.sh@10 -- # set +x 00:05:44.702 ************************************ 00:05:44.702 END TEST rpc_integrity 00:05:44.702 ************************************ 00:05:44.960 01:26:57 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:44.960 01:26:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:44.960 01:26:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:44.960 01:26:57 -- common/autotest_common.sh@10 -- # set +x 00:05:44.960 ************************************ 00:05:44.960 START TEST rpc_plugins 00:05:44.960 ************************************ 00:05:44.960 01:26:57 -- common/autotest_common.sh@1104 -- # rpc_plugins 00:05:44.960 01:26:57 -- 
rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:44.960 01:26:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:44.960 01:26:57 -- common/autotest_common.sh@10 -- # set +x 00:05:44.960 01:26:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:44.960 01:26:57 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:44.960 01:26:57 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:44.960 01:26:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:44.960 01:26:57 -- common/autotest_common.sh@10 -- # set +x 00:05:44.960 01:26:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:44.960 01:26:57 -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:44.960 { 00:05:44.960 "name": "Malloc1", 00:05:44.960 "aliases": [ 00:05:44.960 "6b809f9d-b5e3-4fff-b547-14c48369c5c8" 00:05:44.960 ], 00:05:44.960 "product_name": "Malloc disk", 00:05:44.960 "block_size": 4096, 00:05:44.960 "num_blocks": 256, 00:05:44.961 "uuid": "6b809f9d-b5e3-4fff-b547-14c48369c5c8", 00:05:44.961 "assigned_rate_limits": { 00:05:44.961 "rw_ios_per_sec": 0, 00:05:44.961 "rw_mbytes_per_sec": 0, 00:05:44.961 "r_mbytes_per_sec": 0, 00:05:44.961 "w_mbytes_per_sec": 0 00:05:44.961 }, 00:05:44.961 "claimed": false, 00:05:44.961 "zoned": false, 00:05:44.961 "supported_io_types": { 00:05:44.961 "read": true, 00:05:44.961 "write": true, 00:05:44.961 "unmap": true, 00:05:44.961 "write_zeroes": true, 00:05:44.961 "flush": true, 00:05:44.961 "reset": true, 00:05:44.961 "compare": false, 00:05:44.961 "compare_and_write": false, 00:05:44.961 "abort": true, 00:05:44.961 "nvme_admin": false, 00:05:44.961 "nvme_io": false 00:05:44.961 }, 00:05:44.961 "memory_domains": [ 00:05:44.961 { 00:05:44.961 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:44.961 "dma_device_type": 2 00:05:44.961 } 00:05:44.961 ], 00:05:44.961 "driver_specific": {} 00:05:44.961 } 00:05:44.961 ]' 00:05:44.961 01:26:57 -- rpc/rpc.sh@32 -- # jq length 00:05:44.961 01:26:57 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:44.961 01:26:57 -- 
rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:44.961 01:26:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:44.961 01:26:57 -- common/autotest_common.sh@10 -- # set +x 00:05:44.961 01:26:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:44.961 01:26:57 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:44.961 01:26:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:44.961 01:26:57 -- common/autotest_common.sh@10 -- # set +x 00:05:44.961 01:26:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:44.961 01:26:57 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:44.961 01:26:57 -- rpc/rpc.sh@36 -- # jq length 00:05:44.961 01:26:57 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:44.961 00:05:44.961 real 0m0.111s 00:05:44.961 user 0m0.075s 00:05:44.961 sys 0m0.008s 00:05:44.961 01:26:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.961 01:26:57 -- common/autotest_common.sh@10 -- # set +x 00:05:44.961 ************************************ 00:05:44.961 END TEST rpc_plugins 00:05:44.961 ************************************ 00:05:44.961 01:26:57 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:44.961 01:26:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:44.961 01:26:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:44.961 01:26:57 -- common/autotest_common.sh@10 -- # set +x 00:05:44.961 ************************************ 00:05:44.961 START TEST rpc_trace_cmd_test 00:05:44.961 ************************************ 00:05:44.961 01:26:57 -- common/autotest_common.sh@1104 -- # rpc_trace_cmd_test 00:05:44.961 01:26:57 -- rpc/rpc.sh@40 -- # local info 00:05:44.961 01:26:57 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:44.961 01:26:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:44.961 01:26:57 -- common/autotest_common.sh@10 -- # set +x 00:05:44.961 01:26:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:44.961 01:26:57 -- 
rpc/rpc.sh@42 -- # info='{ 00:05:44.961 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3653875", 00:05:44.961 "tpoint_group_mask": "0x8", 00:05:44.961 "iscsi_conn": { 00:05:44.961 "mask": "0x2", 00:05:44.961 "tpoint_mask": "0x0" 00:05:44.961 }, 00:05:44.961 "scsi": { 00:05:44.961 "mask": "0x4", 00:05:44.961 "tpoint_mask": "0x0" 00:05:44.961 }, 00:05:44.961 "bdev": { 00:05:44.961 "mask": "0x8", 00:05:44.961 "tpoint_mask": "0xffffffffffffffff" 00:05:44.961 }, 00:05:44.961 "nvmf_rdma": { 00:05:44.961 "mask": "0x10", 00:05:44.961 "tpoint_mask": "0x0" 00:05:44.961 }, 00:05:44.961 "nvmf_tcp": { 00:05:44.961 "mask": "0x20", 00:05:44.961 "tpoint_mask": "0x0" 00:05:44.961 }, 00:05:44.961 "ftl": { 00:05:44.961 "mask": "0x40", 00:05:44.961 "tpoint_mask": "0x0" 00:05:44.961 }, 00:05:44.961 "blobfs": { 00:05:44.961 "mask": "0x80", 00:05:44.961 "tpoint_mask": "0x0" 00:05:44.961 }, 00:05:44.961 "dsa": { 00:05:44.961 "mask": "0x200", 00:05:44.961 "tpoint_mask": "0x0" 00:05:44.961 }, 00:05:44.961 "thread": { 00:05:44.961 "mask": "0x400", 00:05:44.961 "tpoint_mask": "0x0" 00:05:44.961 }, 00:05:44.961 "nvme_pcie": { 00:05:44.961 "mask": "0x800", 00:05:44.961 "tpoint_mask": "0x0" 00:05:44.961 }, 00:05:44.961 "iaa": { 00:05:44.961 "mask": "0x1000", 00:05:44.961 "tpoint_mask": "0x0" 00:05:44.961 }, 00:05:44.961 "nvme_tcp": { 00:05:44.961 "mask": "0x2000", 00:05:44.961 "tpoint_mask": "0x0" 00:05:44.961 }, 00:05:44.961 "bdev_nvme": { 00:05:44.961 "mask": "0x4000", 00:05:44.961 "tpoint_mask": "0x0" 00:05:44.961 } 00:05:44.961 }' 00:05:44.961 01:26:57 -- rpc/rpc.sh@43 -- # jq length 00:05:44.961 01:26:58 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:05:44.961 01:26:58 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:44.961 01:26:58 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:44.961 01:26:58 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:45.219 01:26:58 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:45.219 01:26:58 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:45.219 
01:26:58 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:45.219 01:26:58 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:45.219 01:26:58 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:45.219 00:05:45.219 real 0m0.196s 00:05:45.219 user 0m0.173s 00:05:45.219 sys 0m0.015s 00:05:45.219 01:26:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.219 01:26:58 -- common/autotest_common.sh@10 -- # set +x 00:05:45.219 ************************************ 00:05:45.219 END TEST rpc_trace_cmd_test 00:05:45.219 ************************************ 00:05:45.219 01:26:58 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:45.219 01:26:58 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:45.219 01:26:58 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:45.219 01:26:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:45.219 01:26:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:45.219 01:26:58 -- common/autotest_common.sh@10 -- # set +x 00:05:45.219 ************************************ 00:05:45.219 START TEST rpc_daemon_integrity 00:05:45.219 ************************************ 00:05:45.219 01:26:58 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:05:45.219 01:26:58 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:45.219 01:26:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:45.219 01:26:58 -- common/autotest_common.sh@10 -- # set +x 00:05:45.219 01:26:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:45.219 01:26:58 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:45.219 01:26:58 -- rpc/rpc.sh@13 -- # jq length 00:05:45.219 01:26:58 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:45.219 01:26:58 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:45.219 01:26:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:45.219 01:26:58 -- common/autotest_common.sh@10 -- # set +x 00:05:45.219 01:26:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:45.219 01:26:58 -- rpc/rpc.sh@15 -- # 
malloc=Malloc2 00:05:45.219 01:26:58 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:45.219 01:26:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:45.219 01:26:58 -- common/autotest_common.sh@10 -- # set +x 00:05:45.219 01:26:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:45.219 01:26:58 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:45.219 { 00:05:45.219 "name": "Malloc2", 00:05:45.219 "aliases": [ 00:05:45.219 "da7b4a66-133b-4ca9-ab97-644938695e83" 00:05:45.219 ], 00:05:45.219 "product_name": "Malloc disk", 00:05:45.219 "block_size": 512, 00:05:45.219 "num_blocks": 16384, 00:05:45.219 "uuid": "da7b4a66-133b-4ca9-ab97-644938695e83", 00:05:45.219 "assigned_rate_limits": { 00:05:45.219 "rw_ios_per_sec": 0, 00:05:45.219 "rw_mbytes_per_sec": 0, 00:05:45.219 "r_mbytes_per_sec": 0, 00:05:45.219 "w_mbytes_per_sec": 0 00:05:45.219 }, 00:05:45.219 "claimed": false, 00:05:45.219 "zoned": false, 00:05:45.219 "supported_io_types": { 00:05:45.219 "read": true, 00:05:45.219 "write": true, 00:05:45.219 "unmap": true, 00:05:45.219 "write_zeroes": true, 00:05:45.219 "flush": true, 00:05:45.219 "reset": true, 00:05:45.219 "compare": false, 00:05:45.219 "compare_and_write": false, 00:05:45.219 "abort": true, 00:05:45.219 "nvme_admin": false, 00:05:45.219 "nvme_io": false 00:05:45.219 }, 00:05:45.219 "memory_domains": [ 00:05:45.219 { 00:05:45.219 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:45.219 "dma_device_type": 2 00:05:45.220 } 00:05:45.220 ], 00:05:45.220 "driver_specific": {} 00:05:45.220 } 00:05:45.220 ]' 00:05:45.220 01:26:58 -- rpc/rpc.sh@17 -- # jq length 00:05:45.220 01:26:58 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:45.220 01:26:58 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:45.220 01:26:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:45.220 01:26:58 -- common/autotest_common.sh@10 -- # set +x 00:05:45.220 [2024-07-23 01:26:58.271571] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match 
on Malloc2 00:05:45.220 [2024-07-23 01:26:58.271628] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:45.220 [2024-07-23 01:26:58.271686] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x24d3020 00:05:45.220 [2024-07-23 01:26:58.271702] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:45.220 [2024-07-23 01:26:58.273063] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:45.220 [2024-07-23 01:26:58.273093] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:45.220 Passthru0 00:05:45.220 01:26:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:45.220 01:26:58 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:45.220 01:26:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:45.220 01:26:58 -- common/autotest_common.sh@10 -- # set +x 00:05:45.220 01:26:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:45.220 01:26:58 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:45.220 { 00:05:45.220 "name": "Malloc2", 00:05:45.220 "aliases": [ 00:05:45.220 "da7b4a66-133b-4ca9-ab97-644938695e83" 00:05:45.220 ], 00:05:45.220 "product_name": "Malloc disk", 00:05:45.220 "block_size": 512, 00:05:45.220 "num_blocks": 16384, 00:05:45.220 "uuid": "da7b4a66-133b-4ca9-ab97-644938695e83", 00:05:45.220 "assigned_rate_limits": { 00:05:45.220 "rw_ios_per_sec": 0, 00:05:45.220 "rw_mbytes_per_sec": 0, 00:05:45.220 "r_mbytes_per_sec": 0, 00:05:45.220 "w_mbytes_per_sec": 0 00:05:45.220 }, 00:05:45.220 "claimed": true, 00:05:45.220 "claim_type": "exclusive_write", 00:05:45.220 "zoned": false, 00:05:45.220 "supported_io_types": { 00:05:45.220 "read": true, 00:05:45.220 "write": true, 00:05:45.220 "unmap": true, 00:05:45.220 "write_zeroes": true, 00:05:45.220 "flush": true, 00:05:45.220 "reset": true, 00:05:45.220 "compare": false, 00:05:45.220 "compare_and_write": false, 00:05:45.220 "abort": true, 00:05:45.220 
"nvme_admin": false, 00:05:45.220 "nvme_io": false 00:05:45.220 }, 00:05:45.220 "memory_domains": [ 00:05:45.220 { 00:05:45.220 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:45.220 "dma_device_type": 2 00:05:45.220 } 00:05:45.220 ], 00:05:45.220 "driver_specific": {} 00:05:45.220 }, 00:05:45.220 { 00:05:45.220 "name": "Passthru0", 00:05:45.220 "aliases": [ 00:05:45.220 "8bebd5df-1896-50c1-8e21-913faeacd94a" 00:05:45.220 ], 00:05:45.220 "product_name": "passthru", 00:05:45.220 "block_size": 512, 00:05:45.220 "num_blocks": 16384, 00:05:45.220 "uuid": "8bebd5df-1896-50c1-8e21-913faeacd94a", 00:05:45.220 "assigned_rate_limits": { 00:05:45.220 "rw_ios_per_sec": 0, 00:05:45.220 "rw_mbytes_per_sec": 0, 00:05:45.220 "r_mbytes_per_sec": 0, 00:05:45.220 "w_mbytes_per_sec": 0 00:05:45.220 }, 00:05:45.220 "claimed": false, 00:05:45.220 "zoned": false, 00:05:45.220 "supported_io_types": { 00:05:45.220 "read": true, 00:05:45.220 "write": true, 00:05:45.220 "unmap": true, 00:05:45.220 "write_zeroes": true, 00:05:45.220 "flush": true, 00:05:45.220 "reset": true, 00:05:45.220 "compare": false, 00:05:45.220 "compare_and_write": false, 00:05:45.220 "abort": true, 00:05:45.220 "nvme_admin": false, 00:05:45.220 "nvme_io": false 00:05:45.220 }, 00:05:45.220 "memory_domains": [ 00:05:45.220 { 00:05:45.220 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:45.220 "dma_device_type": 2 00:05:45.220 } 00:05:45.220 ], 00:05:45.220 "driver_specific": { 00:05:45.220 "passthru": { 00:05:45.220 "name": "Passthru0", 00:05:45.220 "base_bdev_name": "Malloc2" 00:05:45.220 } 00:05:45.220 } 00:05:45.220 } 00:05:45.220 ]' 00:05:45.220 01:26:58 -- rpc/rpc.sh@21 -- # jq length 00:05:45.478 01:26:58 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:45.478 01:26:58 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:45.478 01:26:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:45.478 01:26:58 -- common/autotest_common.sh@10 -- # set +x 00:05:45.478 01:26:58 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:45.478 01:26:58 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:45.478 01:26:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:45.478 01:26:58 -- common/autotest_common.sh@10 -- # set +x 00:05:45.478 01:26:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:45.478 01:26:58 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:45.478 01:26:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:45.478 01:26:58 -- common/autotest_common.sh@10 -- # set +x 00:05:45.478 01:26:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:45.478 01:26:58 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:45.478 01:26:58 -- rpc/rpc.sh@26 -- # jq length 00:05:45.478 01:26:58 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:45.478 00:05:45.478 real 0m0.214s 00:05:45.478 user 0m0.139s 00:05:45.478 sys 0m0.024s 00:05:45.478 01:26:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.478 01:26:58 -- common/autotest_common.sh@10 -- # set +x 00:05:45.478 ************************************ 00:05:45.478 END TEST rpc_daemon_integrity 00:05:45.478 ************************************ 00:05:45.478 01:26:58 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:45.478 01:26:58 -- rpc/rpc.sh@84 -- # killprocess 3653875 00:05:45.478 01:26:58 -- common/autotest_common.sh@926 -- # '[' -z 3653875 ']' 00:05:45.478 01:26:58 -- common/autotest_common.sh@930 -- # kill -0 3653875 00:05:45.478 01:26:58 -- common/autotest_common.sh@931 -- # uname 00:05:45.478 01:26:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:45.478 01:26:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3653875 00:05:45.478 01:26:58 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:45.478 01:26:58 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:45.478 01:26:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3653875' 00:05:45.478 killing process 
with pid 3653875 00:05:45.478 01:26:58 -- common/autotest_common.sh@945 -- # kill 3653875 00:05:45.478 01:26:58 -- common/autotest_common.sh@950 -- # wait 3653875 00:05:45.737 00:05:45.737 real 0m2.312s 00:05:45.737 user 0m2.949s 00:05:45.737 sys 0m0.581s 00:05:45.737 01:26:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.737 01:26:58 -- common/autotest_common.sh@10 -- # set +x 00:05:45.737 ************************************ 00:05:45.737 END TEST rpc 00:05:45.737 ************************************ 00:05:45.996 01:26:58 -- spdk/autotest.sh@177 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:45.996 01:26:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:45.996 01:26:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:45.996 01:26:58 -- common/autotest_common.sh@10 -- # set +x 00:05:45.996 ************************************ 00:05:45.996 START TEST rpc_client 00:05:45.996 ************************************ 00:05:45.996 01:26:58 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:45.996 * Looking for test storage... 
00:05:45.996 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:45.996 01:26:58 -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:45.996 OK 00:05:45.996 01:26:58 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:45.996 00:05:45.996 real 0m0.062s 00:05:45.996 user 0m0.027s 00:05:45.996 sys 0m0.040s 00:05:45.996 01:26:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.996 01:26:58 -- common/autotest_common.sh@10 -- # set +x 00:05:45.996 ************************************ 00:05:45.996 END TEST rpc_client 00:05:45.996 ************************************ 00:05:45.996 01:26:58 -- spdk/autotest.sh@178 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:45.996 01:26:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:45.996 01:26:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:45.996 01:26:58 -- common/autotest_common.sh@10 -- # set +x 00:05:45.996 ************************************ 00:05:45.996 START TEST json_config 00:05:45.996 ************************************ 00:05:45.996 01:26:58 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:45.996 01:26:58 -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:45.996 01:26:58 -- nvmf/common.sh@7 -- # uname -s 00:05:45.996 01:26:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:45.996 01:26:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:45.996 01:26:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:45.996 01:26:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:45.996 01:26:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:45.996 01:26:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:45.996 01:26:58 -- 
nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:45.996 01:26:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:45.996 01:26:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:45.996 01:26:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:45.996 01:26:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:45.996 01:26:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:45.996 01:26:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:45.996 01:26:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:45.996 01:26:58 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:45.996 01:26:58 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:45.996 01:26:58 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:45.996 01:26:58 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:45.996 01:26:58 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:45.996 01:26:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.996 01:26:58 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.996 01:26:58 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.996 01:26:58 -- paths/export.sh@5 -- # export PATH 00:05:45.997 01:26:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.997 01:26:58 -- nvmf/common.sh@46 -- # : 0 00:05:45.997 01:26:58 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:45.997 01:26:58 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:45.997 01:26:58 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:45.997 01:26:58 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:45.997 01:26:58 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:45.997 01:26:58 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:45.997 01:26:58 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:45.997 01:26:58 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:45.997 
01:26:58 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:05:45.997 01:26:58 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:05:45.997 01:26:58 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:05:45.997 01:26:58 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:45.997 01:26:58 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:05:45.997 01:26:58 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:05:45.997 01:26:58 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:45.997 01:26:58 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:05:45.997 01:26:58 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:45.997 01:26:58 -- json_config/json_config.sh@32 -- # declare -A app_params 00:05:45.997 01:26:58 -- json_config/json_config.sh@33 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:45.997 01:26:58 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:05:45.997 01:26:58 -- json_config/json_config.sh@43 -- # last_event_id=0 00:05:45.997 01:26:58 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:45.997 01:26:58 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:05:45.997 INFO: JSON configuration test init 00:05:45.997 01:26:58 -- json_config/json_config.sh@420 -- # json_config_test_init 00:05:45.997 01:26:58 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:05:45.997 01:26:58 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:45.997 01:26:58 -- 
common/autotest_common.sh@10 -- # set +x 00:05:45.997 01:26:58 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:05:45.997 01:26:58 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:45.997 01:26:58 -- common/autotest_common.sh@10 -- # set +x 00:05:45.997 01:26:58 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:05:45.997 01:26:58 -- json_config/json_config.sh@98 -- # local app=target 00:05:45.997 01:26:58 -- json_config/json_config.sh@99 -- # shift 00:05:45.997 01:26:58 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:45.997 01:26:58 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:45.997 01:26:58 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:45.997 01:26:58 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:45.997 01:26:58 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:45.997 01:26:58 -- json_config/json_config.sh@111 -- # app_pid[$app]=3654353 00:05:45.997 01:26:58 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:45.997 01:26:58 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:45.997 Waiting for target to run... 00:05:45.997 01:26:58 -- json_config/json_config.sh@114 -- # waitforlisten 3654353 /var/tmp/spdk_tgt.sock 00:05:45.997 01:26:58 -- common/autotest_common.sh@819 -- # '[' -z 3654353 ']' 00:05:45.997 01:26:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:45.997 01:26:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:45.997 01:26:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:45.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:05:45.997 01:26:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:45.997 01:26:58 -- common/autotest_common.sh@10 -- # set +x 00:05:45.997 [2024-07-23 01:26:59.024008] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:45.997 [2024-07-23 01:26:59.024110] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3654353 ] 00:05:45.997 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.563 [2024-07-23 01:26:59.502749] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.563 [2024-07-23 01:26:59.580483] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:46.563 [2024-07-23 01:26:59.580687] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.129 01:26:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:47.129 01:26:59 -- common/autotest_common.sh@852 -- # return 0 00:05:47.129 01:26:59 -- json_config/json_config.sh@115 -- # echo '' 00:05:47.129 00:05:47.129 01:26:59 -- json_config/json_config.sh@322 -- # create_accel_config 00:05:47.129 01:26:59 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:05:47.129 01:26:59 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:47.129 01:26:59 -- common/autotest_common.sh@10 -- # set +x 00:05:47.129 01:26:59 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:05:47.129 01:26:59 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:05:47.129 01:26:59 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:47.129 01:26:59 -- common/autotest_common.sh@10 -- # set +x 00:05:47.129 01:27:00 -- json_config/json_config.sh@326 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:47.129 01:27:00 -- 
json_config/json_config.sh@327 -- # tgt_rpc load_config 00:05:47.129 01:27:00 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:50.409 01:27:03 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:05:50.409 01:27:03 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:05:50.409 01:27:03 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:50.409 01:27:03 -- common/autotest_common.sh@10 -- # set +x 00:05:50.409 01:27:03 -- json_config/json_config.sh@48 -- # local ret=0 00:05:50.409 01:27:03 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:50.409 01:27:03 -- json_config/json_config.sh@49 -- # local enabled_types 00:05:50.409 01:27:03 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:50.409 01:27:03 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:50.409 01:27:03 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:50.409 01:27:03 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:50.409 01:27:03 -- json_config/json_config.sh@51 -- # local get_types 00:05:50.409 01:27:03 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:50.409 01:27:03 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:05:50.409 01:27:03 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:50.409 01:27:03 -- common/autotest_common.sh@10 -- # set +x 00:05:50.409 01:27:03 -- json_config/json_config.sh@58 -- # return 0 00:05:50.409 01:27:03 -- json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:05:50.409 01:27:03 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:05:50.409 01:27:03 -- json_config/json_config.sh@339 -- 
# [[ 0 -eq 1 ]] 00:05:50.409 01:27:03 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:05:50.409 01:27:03 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:05:50.409 01:27:03 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:05:50.409 01:27:03 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:50.409 01:27:03 -- common/autotest_common.sh@10 -- # set +x 00:05:50.409 01:27:03 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:50.409 01:27:03 -- json_config/json_config.sh@286 -- # [[ tcp == \r\d\m\a ]] 00:05:50.409 01:27:03 -- json_config/json_config.sh@290 -- # [[ -z 127.0.0.1 ]] 00:05:50.409 01:27:03 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:50.409 01:27:03 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:50.666 MallocForNvmf0 00:05:50.666 01:27:03 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:50.666 01:27:03 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:50.924 MallocForNvmf1 00:05:50.924 01:27:03 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:50.924 01:27:03 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:51.182 [2024-07-23 01:27:04.148154] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:51.182 01:27:04 -- json_config/json_config.sh@299 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:51.182 01:27:04 -- json_config/json_config.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:51.440 01:27:04 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:51.440 01:27:04 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:51.698 01:27:04 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:51.698 01:27:04 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:51.955 01:27:04 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:51.956 01:27:04 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:52.213 [2024-07-23 01:27:05.099281] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:52.213 01:27:05 -- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:05:52.213 01:27:05 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:52.213 01:27:05 -- common/autotest_common.sh@10 -- # set +x 00:05:52.213 01:27:05 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:05:52.213 01:27:05 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:52.213 01:27:05 -- common/autotest_common.sh@10 -- # set +x 00:05:52.213 01:27:05 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:05:52.213 01:27:05 -- json_config/json_config.sh@353 -- 
# tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:52.213 01:27:05 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:52.471 MallocBdevForConfigChangeCheck 00:05:52.471 01:27:05 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:05:52.471 01:27:05 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:52.471 01:27:05 -- common/autotest_common.sh@10 -- # set +x 00:05:52.471 01:27:05 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:05:52.471 01:27:05 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:52.729 01:27:05 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 00:05:52.729 INFO: shutting down applications... 00:05:52.729 01:27:05 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:05:52.729 01:27:05 -- json_config/json_config.sh@431 -- # json_config_clear target 00:05:52.729 01:27:05 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:05:52.729 01:27:05 -- json_config/json_config.sh@386 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:54.627 Calling clear_iscsi_subsystem 00:05:54.627 Calling clear_nvmf_subsystem 00:05:54.627 Calling clear_nbd_subsystem 00:05:54.627 Calling clear_ublk_subsystem 00:05:54.627 Calling clear_vhost_blk_subsystem 00:05:54.627 Calling clear_vhost_scsi_subsystem 00:05:54.627 Calling clear_scheduler_subsystem 00:05:54.627 Calling clear_bdev_subsystem 00:05:54.627 Calling clear_accel_subsystem 00:05:54.627 Calling clear_vmd_subsystem 00:05:54.627 Calling clear_sock_subsystem 00:05:54.627 Calling clear_iobuf_subsystem 00:05:54.627 01:27:07 -- json_config/json_config.sh@390 -- # local 
config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:54.627 01:27:07 -- json_config/json_config.sh@396 -- # count=100 00:05:54.627 01:27:07 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:05:54.627 01:27:07 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:54.627 01:27:07 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:54.627 01:27:07 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:54.885 01:27:07 -- json_config/json_config.sh@398 -- # break 00:05:54.885 01:27:07 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:05:54.886 01:27:07 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:05:54.886 01:27:07 -- json_config/json_config.sh@120 -- # local app=target 00:05:54.886 01:27:07 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:05:54.886 01:27:07 -- json_config/json_config.sh@124 -- # [[ -n 3654353 ]] 00:05:54.886 01:27:07 -- json_config/json_config.sh@127 -- # kill -SIGINT 3654353 00:05:54.886 01:27:07 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:05:54.886 01:27:07 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:54.886 01:27:07 -- json_config/json_config.sh@130 -- # kill -0 3654353 00:05:54.886 01:27:07 -- json_config/json_config.sh@134 -- # sleep 0.5 00:05:55.453 01:27:08 -- json_config/json_config.sh@129 -- # (( i++ )) 00:05:55.453 01:27:08 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:55.453 01:27:08 -- json_config/json_config.sh@130 -- # kill -0 3654353 00:05:55.453 01:27:08 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:05:55.453 01:27:08 -- json_config/json_config.sh@132 -- # break 00:05:55.453 01:27:08 -- 
json_config/json_config.sh@137 -- # [[ -n '' ]] 00:05:55.453 01:27:08 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:05:55.453 SPDK target shutdown done 00:05:55.453 01:27:08 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:05:55.453 INFO: relaunching applications... 00:05:55.453 01:27:08 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:55.453 01:27:08 -- json_config/json_config.sh@98 -- # local app=target 00:05:55.453 01:27:08 -- json_config/json_config.sh@99 -- # shift 00:05:55.453 01:27:08 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:55.453 01:27:08 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:55.453 01:27:08 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:55.453 01:27:08 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:55.453 01:27:08 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:55.453 01:27:08 -- json_config/json_config.sh@111 -- # app_pid[$app]=3656051 00:05:55.453 01:27:08 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:55.453 01:27:08 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:55.453 Waiting for target to run... 
00:05:55.453 01:27:08 -- json_config/json_config.sh@114 -- # waitforlisten 3656051 /var/tmp/spdk_tgt.sock 00:05:55.453 01:27:08 -- common/autotest_common.sh@819 -- # '[' -z 3656051 ']' 00:05:55.453 01:27:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:55.453 01:27:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:55.453 01:27:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:55.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:55.453 01:27:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:55.453 01:27:08 -- common/autotest_common.sh@10 -- # set +x 00:05:55.453 [2024-07-23 01:27:08.337875] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:55.453 [2024-07-23 01:27:08.337972] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3656051 ] 00:05:55.453 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.020 [2024-07-23 01:27:08.843624] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.020 [2024-07-23 01:27:08.919555] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:56.020 [2024-07-23 01:27:08.919773] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.302 [2024-07-23 01:27:11.941564] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:59.302 [2024-07-23 01:27:11.974037] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:59.302 01:27:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:59.302 01:27:12 -- common/autotest_common.sh@852 -- # return 0 00:05:59.302 01:27:12 -- 
json_config/json_config.sh@115 -- # echo '' 00:05:59.302 00:05:59.302 01:27:12 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:05:59.302 01:27:12 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:59.302 INFO: Checking if target configuration is the same... 00:05:59.302 01:27:12 -- json_config/json_config.sh@441 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:59.302 01:27:12 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:05:59.302 01:27:12 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:59.302 + '[' 2 -ne 2 ']' 00:05:59.302 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:59.302 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:05:59.302 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:59.302 +++ basename /dev/fd/62 00:05:59.302 ++ mktemp /tmp/62.XXX 00:05:59.302 + tmp_file_1=/tmp/62.ABT 00:05:59.302 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:59.302 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:59.302 + tmp_file_2=/tmp/spdk_tgt_config.json.88n 00:05:59.302 + ret=0 00:05:59.302 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:59.561 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:59.561 + diff -u /tmp/62.ABT /tmp/spdk_tgt_config.json.88n 00:05:59.561 + echo 'INFO: JSON config files are the same' 00:05:59.561 INFO: JSON config files are the same 00:05:59.561 + rm /tmp/62.ABT /tmp/spdk_tgt_config.json.88n 00:05:59.561 + exit 0 00:05:59.561 01:27:12 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:05:59.561 01:27:12 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:59.561 INFO: changing configuration and checking if this can be detected... 
00:05:59.561 01:27:12 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck
00:05:59.561 01:27:12 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
00:05:59.820 01:27:12 -- json_config/json_config.sh@450 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:05:59.820 01:27:12 -- json_config/json_config.sh@450 -- # tgt_rpc save_config
00:05:59.820 01:27:12 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:05:59.820 + '[' 2 -ne 2 ']'
00:05:59.820 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh
00:05:59.820 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../..
00:05:59.820 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:05:59.820 +++ basename /dev/fd/62
00:05:59.820 ++ mktemp /tmp/62.XXX
00:05:59.820 + tmp_file_1=/tmp/62.NJ0
00:05:59.820 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:05:59.820 ++ mktemp /tmp/spdk_tgt_config.json.XXX
00:05:59.820 + tmp_file_2=/tmp/spdk_tgt_config.json.TDw
00:05:59.820 + ret=0
00:05:59.820 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:06:00.078 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:06:00.339 + diff -u /tmp/62.NJ0 /tmp/spdk_tgt_config.json.TDw
00:06:00.339 + ret=1
00:06:00.339 + echo '=== Start of file: /tmp/62.NJ0 ==='
00:06:00.339 + cat /tmp/62.NJ0
00:06:00.339 + echo '=== End of file: /tmp/62.NJ0 ==='
00:06:00.339 + echo ''
00:06:00.339 + echo '=== Start of file: /tmp/spdk_tgt_config.json.TDw ==='
00:06:00.339 + cat /tmp/spdk_tgt_config.json.TDw
00:06:00.339 + echo '=== End of file: /tmp/spdk_tgt_config.json.TDw ==='
00:06:00.339 + echo ''
00:06:00.339 + rm /tmp/62.NJ0 /tmp/spdk_tgt_config.json.TDw
00:06:00.339 + exit 1
00:06:00.339 01:27:13 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.'
00:06:00.339 INFO: configuration change detected.
00:06:00.339 01:27:13 -- json_config/json_config.sh@457 -- # json_config_test_fini
00:06:00.339 01:27:13 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini
00:06:00.339 01:27:13 -- common/autotest_common.sh@712 -- # xtrace_disable
00:06:00.339 01:27:13 -- common/autotest_common.sh@10 -- # set +x
00:06:00.339 01:27:13 -- json_config/json_config.sh@360 -- # local ret=0
00:06:00.339 01:27:13 -- json_config/json_config.sh@362 -- # [[ -n '' ]]
00:06:00.339 01:27:13 -- json_config/json_config.sh@370 -- # [[ -n 3656051 ]]
00:06:00.339 01:27:13 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config
00:06:00.339 01:27:13 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config
00:06:00.339 01:27:13 -- common/autotest_common.sh@712 -- # xtrace_disable
00:06:00.339 01:27:13 -- common/autotest_common.sh@10 -- # set +x
00:06:00.339 01:27:13 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]]
00:06:00.339 01:27:13 -- json_config/json_config.sh@246 -- # uname -s
00:06:00.339 01:27:13 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]]
00:06:00.339 01:27:13 -- json_config/json_config.sh@247 -- # rm -f /sample_aio
00:06:00.339 01:27:13 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]]
00:06:00.339 01:27:13 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config
00:06:00.339 01:27:13 -- common/autotest_common.sh@718 -- # xtrace_disable
00:06:00.339 01:27:13 -- common/autotest_common.sh@10 -- # set +x
00:06:00.339 01:27:13 -- json_config/json_config.sh@376 -- # killprocess 3656051
00:06:00.339 01:27:13 -- common/autotest_common.sh@926 -- # '[' -z 3656051 ']'
00:06:00.339 01:27:13 -- common/autotest_common.sh@930 -- # kill -0 3656051
00:06:00.339 01:27:13 -- common/autotest_common.sh@931 -- # uname
00:06:00.339 01:27:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:06:00.339 01:27:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3656051
00:06:00.339
01:27:13 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:06:00.339 01:27:13 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:06:00.339 01:27:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3656051'
00:06:00.339 killing process with pid 3656051
00:06:00.339 01:27:13 -- common/autotest_common.sh@945 -- # kill 3656051
00:06:00.339 01:27:13 -- common/autotest_common.sh@950 -- # wait 3656051
00:06:02.302 01:27:14 -- json_config/json_config.sh@379 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:06:02.302 01:27:14 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini
00:06:02.302 01:27:14 -- common/autotest_common.sh@718 -- # xtrace_disable
00:06:02.302 01:27:14 -- common/autotest_common.sh@10 -- # set +x
00:06:02.302 01:27:14 -- json_config/json_config.sh@381 -- # return 0
00:06:02.302 01:27:14 -- json_config/json_config.sh@459 -- # echo 'INFO: Success'
00:06:02.302 INFO: Success
00:06:02.302
00:06:02.302 real 0m15.984s
00:06:02.302 user 0m18.047s
00:06:02.302 sys 0m2.254s
00:06:02.302 01:27:14 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:02.302 01:27:14 -- common/autotest_common.sh@10 -- # set +x
00:06:02.302 ************************************
00:06:02.302 END TEST json_config
00:06:02.302 ************************************
00:06:02.302 01:27:14 -- spdk/autotest.sh@179 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh
00:06:02.302 01:27:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:06:02.302 01:27:14 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:06:02.302 01:27:14 -- common/autotest_common.sh@10 -- # set +x
00:06:02.302 ************************************
00:06:02.302 START TEST json_config_extra_key
00:06:02.302 ************************************
00:06:02.302
01:27:14 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh
00:06:02.302 01:27:14 -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:06:02.302 01:27:14 -- nvmf/common.sh@7 -- # uname -s
00:06:02.302 01:27:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:06:02.302 01:27:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:06:02.302 01:27:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:06:02.302 01:27:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:06:02.302 01:27:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:06:02.302 01:27:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:06:02.302 01:27:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:06:02.302 01:27:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:06:02.302 01:27:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:06:02.302 01:27:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:06:02.302 01:27:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:06:02.302 01:27:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:06:02.302 01:27:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:06:02.302 01:27:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:06:02.302 01:27:14 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:06:02.302 01:27:14 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:06:02.302 01:27:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:06:02.302 01:27:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:06:02.302 01:27:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:06:02.302 01:27:14 -- paths/export.sh@2 -- #
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:02.302 01:27:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:02.302 01:27:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:02.302 01:27:14 -- paths/export.sh@5 -- # export PATH
00:06:02.302 01:27:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:02.302 01:27:14 -- nvmf/common.sh@46 -- # : 0
00:06:02.302 01:27:14 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:06:02.302 01:27:14 -- nvmf/common.sh@48 -- # build_nvmf_app_args
00:06:02.302
01:27:14 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:06:02.302 01:27:14 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:06:02.302 01:27:14 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:06:02.302 01:27:14 -- nvmf/common.sh@32 -- # '[' -n '' ']'
00:06:02.302 01:27:14 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:06:02.302 01:27:14 -- nvmf/common.sh@50 -- # have_pci_nics=0
00:06:02.302 01:27:14 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='')
00:06:02.302 01:27:14 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid
00:06:02.302 01:27:14 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock')
00:06:02.302 01:27:14 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket
00:06:02.302 01:27:14 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024')
00:06:02.302 01:27:14 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params
00:06:02.302 01:27:14 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json')
00:06:02.302 01:27:14 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path
00:06:02.302 01:27:14 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR
00:06:02.302 01:27:14 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...'
00:06:02.302 INFO: launching applications...
00:06:02.303 01:27:14 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json
00:06:02.303 01:27:14 -- json_config/json_config_extra_key.sh@24 -- # local app=target
00:06:02.303 01:27:14 -- json_config/json_config_extra_key.sh@25 -- # shift
00:06:02.303 01:27:14 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]]
00:06:02.303 01:27:14 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]]
00:06:02.303 01:27:14 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=3657138
00:06:02.303 01:27:14 -- json_config/json_config_extra_key.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json
00:06:02.303 01:27:14 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...'
00:06:02.303 Waiting for target to run...
00:06:02.303 01:27:14 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 3657138 /var/tmp/spdk_tgt.sock
00:06:02.303 01:27:14 -- common/autotest_common.sh@819 -- # '[' -z 3657138 ']'
00:06:02.303 01:27:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:06:02.303 01:27:14 -- common/autotest_common.sh@824 -- # local max_retries=100
00:06:02.303 01:27:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:06:02.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:06:02.303 01:27:14 -- common/autotest_common.sh@828 -- # xtrace_disable
00:06:02.303 01:27:14 -- common/autotest_common.sh@10 -- # set +x
00:06:02.303 [2024-07-23 01:27:15.034149] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
00:06:02.303 [2024-07-23 01:27:15.034230] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3657138 ]
00:06:02.303 EAL: No free 2048 kB hugepages reported on node 1
00:06:02.303 [2024-07-23 01:27:15.375550] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:02.561 [2024-07-23 01:27:15.439533] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:06:02.561 [2024-07-23 01:27:15.439731] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:03.125 01:27:15 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:06:03.125 01:27:15 -- common/autotest_common.sh@852 -- # return 0
00:06:03.125 01:27:15 -- json_config/json_config_extra_key.sh@35 -- # echo ''
00:06:03.125
00:06:03.125 01:27:15 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...'
00:06:03.125 INFO: shutting down applications...
00:06:03.125 01:27:15 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target
00:06:03.125 01:27:15 -- json_config/json_config_extra_key.sh@40 -- # local app=target
00:06:03.125 01:27:15 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]]
00:06:03.125 01:27:15 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 3657138 ]]
00:06:03.126 01:27:15 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 3657138
00:06:03.126 01:27:15 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 ))
00:06:03.126 01:27:15 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 ))
00:06:03.126 01:27:15 -- json_config/json_config_extra_key.sh@50 -- # kill -0 3657138
00:06:03.126 01:27:15 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5
00:06:03.384 01:27:16 -- json_config/json_config_extra_key.sh@49 -- # (( i++ ))
00:06:03.384 01:27:16 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 ))
00:06:03.384 01:27:16 -- json_config/json_config_extra_key.sh@50 -- # kill -0 3657138
00:06:03.384 01:27:16 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]=
00:06:03.384 01:27:16 -- json_config/json_config_extra_key.sh@52 -- # break
00:06:03.384 01:27:16 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]]
00:06:03.384 01:27:16 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done'
00:06:03.384 SPDK target shutdown done
00:06:03.384 01:27:16 -- json_config/json_config_extra_key.sh@82 -- # echo Success
00:06:03.384 Success
00:06:03.384
00:06:03.384 real 0m1.505s
00:06:03.384 user 0m1.432s
00:06:03.384 sys 0m0.419s
00:06:03.384 01:27:16 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:03.384 01:27:16 -- common/autotest_common.sh@10 -- # set +x
00:06:03.384 ************************************
00:06:03.384 END TEST json_config_extra_key
00:06:03.384 ************************************
00:06:03.384 01:27:16 -- spdk/autotest.sh@180 -- # run_test alias_rpc
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:06:03.384 01:27:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:06:03.384 01:27:16 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:06:03.384 01:27:16 -- common/autotest_common.sh@10 -- # set +x
00:06:03.384 ************************************
00:06:03.384 START TEST alias_rpc
00:06:03.384 ************************************
00:06:03.384 01:27:16 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:06:03.642 * Looking for test storage...
00:06:03.642 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc
00:06:03.642 01:27:16 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:06:03.642 01:27:16 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3657393
00:06:03.642 01:27:16 -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:06:03.642 01:27:16 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3657393
00:06:03.642 01:27:16 -- common/autotest_common.sh@819 -- # '[' -z 3657393 ']'
00:06:03.642 01:27:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:03.642 01:27:16 -- common/autotest_common.sh@824 -- # local max_retries=100
00:06:03.642 01:27:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:03.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:03.642 01:27:16 -- common/autotest_common.sh@828 -- # xtrace_disable
00:06:03.642 01:27:16 -- common/autotest_common.sh@10 -- # set +x
00:06:03.642 [2024-07-23 01:27:16.568373] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
00:06:03.642 [2024-07-23 01:27:16.568470] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3657393 ]
00:06:03.642 EAL: No free 2048 kB hugepages reported on node 1
00:06:03.642 [2024-07-23 01:27:16.626929] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:03.642 [2024-07-23 01:27:16.708343] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:06:03.642 [2024-07-23 01:27:16.708517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:04.574 01:27:17 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:06:04.574 01:27:17 -- common/autotest_common.sh@852 -- # return 0
00:06:04.574 01:27:17 -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i
00:06:04.831 01:27:17 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3657393
00:06:04.831 01:27:17 -- common/autotest_common.sh@926 -- # '[' -z 3657393 ']'
00:06:04.831 01:27:17 -- common/autotest_common.sh@930 -- # kill -0 3657393
00:06:04.831 01:27:17 -- common/autotest_common.sh@931 -- # uname
00:06:04.831 01:27:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:06:04.831 01:27:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3657393
00:06:04.831 01:27:17 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:06:04.831 01:27:17 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:06:04.831 01:27:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3657393'
00:06:04.831 killing process with pid 3657393
00:06:04.831 01:27:17 -- common/autotest_common.sh@945 -- # kill 3657393
00:06:04.831 01:27:17 -- common/autotest_common.sh@950 -- # wait 3657393
00:06:05.089
00:06:05.089 real 0m1.705s
00:06:05.089 user 0m1.950s
00:06:05.089 sys 0m0.457s
00:06:05.089 01:27:18 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:05.089 01:27:18 -- common/autotest_common.sh@10 -- # set +x
00:06:05.089 ************************************
00:06:05.089 END TEST alias_rpc
00:06:05.089 ************************************
00:06:05.348 01:27:18 -- spdk/autotest.sh@182 -- # [[ 0 -eq 0 ]]
00:06:05.348 01:27:18 -- spdk/autotest.sh@183 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh
00:06:05.348 01:27:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:06:05.348 01:27:18 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:06:05.348 01:27:18 -- common/autotest_common.sh@10 -- # set +x
00:06:05.348 ************************************
00:06:05.348 START TEST spdkcli_tcp
00:06:05.348 ************************************
00:06:05.348 01:27:18 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh
00:06:05.348 * Looking for test storage...
00:06:05.348 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli
00:06:05.348 01:27:18 -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh
00:06:05.348 01:27:18 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py
00:06:05.348 01:27:18 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py
00:06:05.348 01:27:18 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1
00:06:05.348 01:27:18 -- spdkcli/tcp.sh@19 -- # PORT=9998
00:06:05.348 01:27:18 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT
00:06:05.348 01:27:18 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp
00:06:05.348 01:27:18 -- common/autotest_common.sh@712 -- # xtrace_disable
00:06:05.348 01:27:18 -- common/autotest_common.sh@10 -- # set +x
00:06:05.348 01:27:18 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3657641
00:06:05.348 01:27:18 -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0
00:06:05.348 01:27:18 -- spdkcli/tcp.sh@27 -- # waitforlisten 3657641
00:06:05.348 01:27:18 -- common/autotest_common.sh@819 -- # '[' -z 3657641 ']'
00:06:05.348 01:27:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:05.348 01:27:18 -- common/autotest_common.sh@824 -- # local max_retries=100
00:06:05.348 01:27:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:05.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:05.348 01:27:18 -- common/autotest_common.sh@828 -- # xtrace_disable
00:06:05.348 01:27:18 -- common/autotest_common.sh@10 -- # set +x
00:06:05.348 [2024-07-23 01:27:18.306953] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
00:06:05.348 [2024-07-23 01:27:18.307033] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3657641 ]
00:06:05.348 EAL: No free 2048 kB hugepages reported on node 1
00:06:05.348 [2024-07-23 01:27:18.366148] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:05.606 [2024-07-23 01:27:18.450580] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:06:05.606 [2024-07-23 01:27:18.450782] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:06:05.606 [2024-07-23 01:27:18.450788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:06.172 01:27:19 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:06:06.172 01:27:19 -- common/autotest_common.sh@852 -- # return 0
00:06:06.172 01:27:19 -- spdkcli/tcp.sh@31 -- # socat_pid=3657783
00:06:06.172 01:27:19 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock
00:06:06.172 01:27:19 -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
00:06:06.430 [
00:06:06.430 "bdev_malloc_delete",
00:06:06.430 "bdev_malloc_create",
00:06:06.430 "bdev_null_resize",
00:06:06.430 "bdev_null_delete",
00:06:06.430 "bdev_null_create",
00:06:06.430 "bdev_nvme_cuse_unregister",
00:06:06.430 "bdev_nvme_cuse_register",
00:06:06.430 "bdev_opal_new_user",
00:06:06.430 "bdev_opal_set_lock_state",
00:06:06.430 "bdev_opal_delete",
00:06:06.430 "bdev_opal_get_info",
00:06:06.430 "bdev_opal_create",
00:06:06.430
"bdev_nvme_opal_revert", 00:06:06.430 "bdev_nvme_opal_init", 00:06:06.430 "bdev_nvme_send_cmd", 00:06:06.430 "bdev_nvme_get_path_iostat", 00:06:06.430 "bdev_nvme_get_mdns_discovery_info", 00:06:06.430 "bdev_nvme_stop_mdns_discovery", 00:06:06.430 "bdev_nvme_start_mdns_discovery", 00:06:06.430 "bdev_nvme_set_multipath_policy", 00:06:06.430 "bdev_nvme_set_preferred_path", 00:06:06.430 "bdev_nvme_get_io_paths", 00:06:06.430 "bdev_nvme_remove_error_injection", 00:06:06.430 "bdev_nvme_add_error_injection", 00:06:06.430 "bdev_nvme_get_discovery_info", 00:06:06.430 "bdev_nvme_stop_discovery", 00:06:06.430 "bdev_nvme_start_discovery", 00:06:06.430 "bdev_nvme_get_controller_health_info", 00:06:06.430 "bdev_nvme_disable_controller", 00:06:06.430 "bdev_nvme_enable_controller", 00:06:06.430 "bdev_nvme_reset_controller", 00:06:06.430 "bdev_nvme_get_transport_statistics", 00:06:06.430 "bdev_nvme_apply_firmware", 00:06:06.430 "bdev_nvme_detach_controller", 00:06:06.430 "bdev_nvme_get_controllers", 00:06:06.430 "bdev_nvme_attach_controller", 00:06:06.430 "bdev_nvme_set_hotplug", 00:06:06.430 "bdev_nvme_set_options", 00:06:06.430 "bdev_passthru_delete", 00:06:06.430 "bdev_passthru_create", 00:06:06.430 "bdev_lvol_grow_lvstore", 00:06:06.430 "bdev_lvol_get_lvols", 00:06:06.430 "bdev_lvol_get_lvstores", 00:06:06.430 "bdev_lvol_delete", 00:06:06.430 "bdev_lvol_set_read_only", 00:06:06.430 "bdev_lvol_resize", 00:06:06.430 "bdev_lvol_decouple_parent", 00:06:06.430 "bdev_lvol_inflate", 00:06:06.430 "bdev_lvol_rename", 00:06:06.430 "bdev_lvol_clone_bdev", 00:06:06.430 "bdev_lvol_clone", 00:06:06.430 "bdev_lvol_snapshot", 00:06:06.430 "bdev_lvol_create", 00:06:06.430 "bdev_lvol_delete_lvstore", 00:06:06.430 "bdev_lvol_rename_lvstore", 00:06:06.430 "bdev_lvol_create_lvstore", 00:06:06.430 "bdev_raid_set_options", 00:06:06.430 "bdev_raid_remove_base_bdev", 00:06:06.430 "bdev_raid_add_base_bdev", 00:06:06.431 "bdev_raid_delete", 00:06:06.431 "bdev_raid_create", 00:06:06.431 
"bdev_raid_get_bdevs", 00:06:06.431 "bdev_error_inject_error", 00:06:06.431 "bdev_error_delete", 00:06:06.431 "bdev_error_create", 00:06:06.431 "bdev_split_delete", 00:06:06.431 "bdev_split_create", 00:06:06.431 "bdev_delay_delete", 00:06:06.431 "bdev_delay_create", 00:06:06.431 "bdev_delay_update_latency", 00:06:06.431 "bdev_zone_block_delete", 00:06:06.431 "bdev_zone_block_create", 00:06:06.431 "blobfs_create", 00:06:06.431 "blobfs_detect", 00:06:06.431 "blobfs_set_cache_size", 00:06:06.431 "bdev_aio_delete", 00:06:06.431 "bdev_aio_rescan", 00:06:06.431 "bdev_aio_create", 00:06:06.431 "bdev_ftl_set_property", 00:06:06.431 "bdev_ftl_get_properties", 00:06:06.431 "bdev_ftl_get_stats", 00:06:06.431 "bdev_ftl_unmap", 00:06:06.431 "bdev_ftl_unload", 00:06:06.431 "bdev_ftl_delete", 00:06:06.431 "bdev_ftl_load", 00:06:06.431 "bdev_ftl_create", 00:06:06.431 "bdev_virtio_attach_controller", 00:06:06.431 "bdev_virtio_scsi_get_devices", 00:06:06.431 "bdev_virtio_detach_controller", 00:06:06.431 "bdev_virtio_blk_set_hotplug", 00:06:06.431 "bdev_iscsi_delete", 00:06:06.431 "bdev_iscsi_create", 00:06:06.431 "bdev_iscsi_set_options", 00:06:06.431 "accel_error_inject_error", 00:06:06.431 "ioat_scan_accel_module", 00:06:06.431 "dsa_scan_accel_module", 00:06:06.431 "iaa_scan_accel_module", 00:06:06.431 "vfu_virtio_create_scsi_endpoint", 00:06:06.431 "vfu_virtio_scsi_remove_target", 00:06:06.431 "vfu_virtio_scsi_add_target", 00:06:06.431 "vfu_virtio_create_blk_endpoint", 00:06:06.431 "vfu_virtio_delete_endpoint", 00:06:06.431 "iscsi_set_options", 00:06:06.431 "iscsi_get_auth_groups", 00:06:06.431 "iscsi_auth_group_remove_secret", 00:06:06.431 "iscsi_auth_group_add_secret", 00:06:06.431 "iscsi_delete_auth_group", 00:06:06.431 "iscsi_create_auth_group", 00:06:06.431 "iscsi_set_discovery_auth", 00:06:06.431 "iscsi_get_options", 00:06:06.431 "iscsi_target_node_request_logout", 00:06:06.431 "iscsi_target_node_set_redirect", 00:06:06.431 "iscsi_target_node_set_auth", 00:06:06.431 
"iscsi_target_node_add_lun", 00:06:06.431 "iscsi_get_connections", 00:06:06.431 "iscsi_portal_group_set_auth", 00:06:06.431 "iscsi_start_portal_group", 00:06:06.431 "iscsi_delete_portal_group", 00:06:06.431 "iscsi_create_portal_group", 00:06:06.431 "iscsi_get_portal_groups", 00:06:06.431 "iscsi_delete_target_node", 00:06:06.431 "iscsi_target_node_remove_pg_ig_maps", 00:06:06.431 "iscsi_target_node_add_pg_ig_maps", 00:06:06.431 "iscsi_create_target_node", 00:06:06.431 "iscsi_get_target_nodes", 00:06:06.431 "iscsi_delete_initiator_group", 00:06:06.431 "iscsi_initiator_group_remove_initiators", 00:06:06.431 "iscsi_initiator_group_add_initiators", 00:06:06.431 "iscsi_create_initiator_group", 00:06:06.431 "iscsi_get_initiator_groups", 00:06:06.431 "nvmf_set_crdt", 00:06:06.431 "nvmf_set_config", 00:06:06.431 "nvmf_set_max_subsystems", 00:06:06.431 "nvmf_subsystem_get_listeners", 00:06:06.431 "nvmf_subsystem_get_qpairs", 00:06:06.431 "nvmf_subsystem_get_controllers", 00:06:06.431 "nvmf_get_stats", 00:06:06.431 "nvmf_get_transports", 00:06:06.431 "nvmf_create_transport", 00:06:06.431 "nvmf_get_targets", 00:06:06.431 "nvmf_delete_target", 00:06:06.431 "nvmf_create_target", 00:06:06.431 "nvmf_subsystem_allow_any_host", 00:06:06.431 "nvmf_subsystem_remove_host", 00:06:06.431 "nvmf_subsystem_add_host", 00:06:06.431 "nvmf_subsystem_remove_ns", 00:06:06.431 "nvmf_subsystem_add_ns", 00:06:06.431 "nvmf_subsystem_listener_set_ana_state", 00:06:06.431 "nvmf_discovery_get_referrals", 00:06:06.431 "nvmf_discovery_remove_referral", 00:06:06.431 "nvmf_discovery_add_referral", 00:06:06.431 "nvmf_subsystem_remove_listener", 00:06:06.431 "nvmf_subsystem_add_listener", 00:06:06.431 "nvmf_delete_subsystem", 00:06:06.431 "nvmf_create_subsystem", 00:06:06.431 "nvmf_get_subsystems", 00:06:06.431 "env_dpdk_get_mem_stats", 00:06:06.431 "nbd_get_disks", 00:06:06.431 "nbd_stop_disk", 00:06:06.431 "nbd_start_disk", 00:06:06.431 "ublk_recover_disk", 00:06:06.431 "ublk_get_disks", 00:06:06.431 
"ublk_stop_disk", 00:06:06.431 "ublk_start_disk", 00:06:06.431 "ublk_destroy_target", 00:06:06.431 "ublk_create_target", 00:06:06.431 "virtio_blk_create_transport", 00:06:06.431 "virtio_blk_get_transports", 00:06:06.431 "vhost_controller_set_coalescing", 00:06:06.431 "vhost_get_controllers", 00:06:06.431 "vhost_delete_controller", 00:06:06.431 "vhost_create_blk_controller", 00:06:06.431 "vhost_scsi_controller_remove_target", 00:06:06.431 "vhost_scsi_controller_add_target", 00:06:06.431 "vhost_start_scsi_controller", 00:06:06.431 "vhost_create_scsi_controller", 00:06:06.431 "thread_set_cpumask", 00:06:06.431 "framework_get_scheduler", 00:06:06.431 "framework_set_scheduler", 00:06:06.431 "framework_get_reactors", 00:06:06.431 "thread_get_io_channels", 00:06:06.431 "thread_get_pollers", 00:06:06.431 "thread_get_stats", 00:06:06.431 "framework_monitor_context_switch", 00:06:06.431 "spdk_kill_instance", 00:06:06.431 "log_enable_timestamps", 00:06:06.431 "log_get_flags", 00:06:06.431 "log_clear_flag", 00:06:06.431 "log_set_flag", 00:06:06.431 "log_get_level", 00:06:06.431 "log_set_level", 00:06:06.431 "log_get_print_level", 00:06:06.431 "log_set_print_level", 00:06:06.431 "framework_enable_cpumask_locks", 00:06:06.431 "framework_disable_cpumask_locks", 00:06:06.431 "framework_wait_init", 00:06:06.431 "framework_start_init", 00:06:06.431 "scsi_get_devices", 00:06:06.431 "bdev_get_histogram", 00:06:06.431 "bdev_enable_histogram", 00:06:06.431 "bdev_set_qos_limit", 00:06:06.431 "bdev_set_qd_sampling_period", 00:06:06.431 "bdev_get_bdevs", 00:06:06.431 "bdev_reset_iostat", 00:06:06.431 "bdev_get_iostat", 00:06:06.431 "bdev_examine", 00:06:06.431 "bdev_wait_for_examine", 00:06:06.431 "bdev_set_options", 00:06:06.431 "notify_get_notifications", 00:06:06.431 "notify_get_types", 00:06:06.431 "accel_get_stats", 00:06:06.431 "accel_set_options", 00:06:06.431 "accel_set_driver", 00:06:06.431 "accel_crypto_key_destroy", 00:06:06.431 "accel_crypto_keys_get", 00:06:06.431 
"accel_crypto_key_create", 00:06:06.431 "accel_assign_opc", 00:06:06.431 "accel_get_module_info", 00:06:06.431 "accel_get_opc_assignments", 00:06:06.431 "vmd_rescan", 00:06:06.431 "vmd_remove_device", 00:06:06.431 "vmd_enable", 00:06:06.431 "sock_set_default_impl", 00:06:06.431 "sock_impl_set_options", 00:06:06.431 "sock_impl_get_options", 00:06:06.431 "iobuf_get_stats", 00:06:06.431 "iobuf_set_options", 00:06:06.431 "framework_get_pci_devices", 00:06:06.431 "framework_get_config", 00:06:06.431 "framework_get_subsystems", 00:06:06.431 "vfu_tgt_set_base_path", 00:06:06.431 "trace_get_info", 00:06:06.431 "trace_get_tpoint_group_mask", 00:06:06.431 "trace_disable_tpoint_group", 00:06:06.431 "trace_enable_tpoint_group", 00:06:06.431 "trace_clear_tpoint_mask", 00:06:06.431 "trace_set_tpoint_mask", 00:06:06.431 "spdk_get_version", 00:06:06.431 "rpc_get_methods" 00:06:06.431 ] 00:06:06.431 01:27:19 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:06.431 01:27:19 -- common/autotest_common.sh@718 -- # xtrace_disable 00:06:06.431 01:27:19 -- common/autotest_common.sh@10 -- # set +x 00:06:06.431 01:27:19 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:06.431 01:27:19 -- spdkcli/tcp.sh@38 -- # killprocess 3657641 00:06:06.431 01:27:19 -- common/autotest_common.sh@926 -- # '[' -z 3657641 ']' 00:06:06.431 01:27:19 -- common/autotest_common.sh@930 -- # kill -0 3657641 00:06:06.431 01:27:19 -- common/autotest_common.sh@931 -- # uname 00:06:06.431 01:27:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:06.431 01:27:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3657641 00:06:06.431 01:27:19 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:06.431 01:27:19 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:06.431 01:27:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3657641' 00:06:06.431 killing process with pid 3657641 00:06:06.431 01:27:19 -- 
common/autotest_common.sh@945 -- # kill 3657641 00:06:06.431 01:27:19 -- common/autotest_common.sh@950 -- # wait 3657641 00:06:06.997 00:06:06.997 real 0m1.699s 00:06:06.997 user 0m3.306s 00:06:06.997 sys 0m0.471s 00:06:06.997 01:27:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.997 01:27:19 -- common/autotest_common.sh@10 -- # set +x 00:06:06.997 ************************************ 00:06:06.997 END TEST spdkcli_tcp 00:06:06.997 ************************************ 00:06:06.997 01:27:19 -- spdk/autotest.sh@186 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:06.998 01:27:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:06.998 01:27:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:06.998 01:27:19 -- common/autotest_common.sh@10 -- # set +x 00:06:06.998 ************************************ 00:06:06.998 START TEST dpdk_mem_utility 00:06:06.998 ************************************ 00:06:06.998 01:27:19 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:06.998 * Looking for test storage... 
00:06:06.998 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:06:06.998 01:27:19 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:06.998 01:27:19 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3657976 00:06:06.998 01:27:19 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:06.998 01:27:19 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3657976 00:06:06.998 01:27:19 -- common/autotest_common.sh@819 -- # '[' -z 3657976 ']' 00:06:06.998 01:27:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.998 01:27:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:06.998 01:27:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:06.998 01:27:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:06.998 01:27:19 -- common/autotest_common.sh@10 -- # set +x 00:06:06.998 [2024-07-23 01:27:20.027111] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:06:06.998 [2024-07-23 01:27:20.027215] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3657976 ] 00:06:06.998 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.998 [2024-07-23 01:27:20.085837] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.256 [2024-07-23 01:27:20.168542] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:07.256 [2024-07-23 01:27:20.168752] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.189 01:27:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:08.189 01:27:20 -- common/autotest_common.sh@852 -- # return 0 00:06:08.189 01:27:20 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:08.189 01:27:20 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:08.190 01:27:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:08.190 01:27:20 -- common/autotest_common.sh@10 -- # set +x 00:06:08.190 { 00:06:08.190 "filename": "/tmp/spdk_mem_dump.txt" 00:06:08.190 } 00:06:08.190 01:27:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:08.190 01:27:20 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:08.190 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:08.190 1 heaps totaling size 814.000000 MiB 00:06:08.190 size: 814.000000 MiB heap id: 0 00:06:08.190 end heaps---------- 00:06:08.190 8 mempools totaling size 598.116089 MiB 00:06:08.190 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:08.190 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:08.190 size: 84.521057 MiB name: bdev_io_3657976 00:06:08.190 size: 51.011292 MiB name: evtpool_3657976 00:06:08.190 size: 
50.003479 MiB name: msgpool_3657976 00:06:08.190 size: 21.763794 MiB name: PDU_Pool 00:06:08.190 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:08.190 size: 0.026123 MiB name: Session_Pool 00:06:08.190 end mempools------- 00:06:08.190 6 memzones totaling size 4.142822 MiB 00:06:08.190 size: 1.000366 MiB name: RG_ring_0_3657976 00:06:08.190 size: 1.000366 MiB name: RG_ring_1_3657976 00:06:08.190 size: 1.000366 MiB name: RG_ring_4_3657976 00:06:08.190 size: 1.000366 MiB name: RG_ring_5_3657976 00:06:08.190 size: 0.125366 MiB name: RG_ring_2_3657976 00:06:08.190 size: 0.015991 MiB name: RG_ring_3_3657976 00:06:08.190 end memzones------- 00:06:08.190 01:27:20 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:08.190 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:06:08.190 list of free elements. size: 12.519348 MiB 00:06:08.190 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:08.190 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:08.190 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:08.190 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:08.190 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:08.190 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:08.190 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:08.190 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:08.190 element at address: 0x200000200000 with size: 0.841614 MiB 00:06:08.190 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:06:08.190 element at address: 0x20000b200000 with size: 0.490723 MiB 00:06:08.190 element at address: 0x200000800000 with size: 0.487793 MiB 00:06:08.190 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:08.190 element at address: 0x200027e00000 with size: 0.410034 MiB 00:06:08.190 element at 
address: 0x200003a00000 with size: 0.355530 MiB 00:06:08.190 list of standard malloc elements. size: 199.218079 MiB 00:06:08.190 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:08.190 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:08.190 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:08.190 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:08.190 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:08.190 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:08.190 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:08.190 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:08.190 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:08.190 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:06:08.190 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:06:08.190 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:06:08.190 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:08.190 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:08.190 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:08.190 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:08.190 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:08.190 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:08.190 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:08.190 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:08.190 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:08.190 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:08.190 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:08.190 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:08.190 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:08.190 element at address: 0x200003eff0c0 with size: 0.000183 MiB 
00:06:08.190 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:08.190 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:08.190 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:08.190 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:08.190 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:08.190 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:08.190 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:08.190 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:08.190 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:08.190 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:08.190 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:06:08.190 element at address: 0x200027e69040 with size: 0.000183 MiB 00:06:08.190 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:06:08.190 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:08.190 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:08.190 list of memzone associated elements. 
size: 602.262573 MiB 00:06:08.190 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:08.190 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:08.190 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:08.190 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:08.190 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:08.190 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_3657976_0 00:06:08.190 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:08.190 associated memzone info: size: 48.002930 MiB name: MP_evtpool_3657976_0 00:06:08.190 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:08.190 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3657976_0 00:06:08.190 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:08.190 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:08.190 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:08.190 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:08.190 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:08.190 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_3657976 00:06:08.190 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:08.190 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3657976 00:06:08.190 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:08.190 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3657976 00:06:08.190 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:08.190 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:08.190 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:08.190 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:08.190 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:08.190 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:08.190 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:08.190 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:08.190 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:08.190 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3657976 00:06:08.190 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:08.190 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3657976 00:06:08.190 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:08.190 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3657976 00:06:08.190 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:08.190 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3657976 00:06:08.190 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:08.190 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3657976 00:06:08.190 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:08.190 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:08.190 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:08.190 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:08.190 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:08.190 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:08.190 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:08.190 associated memzone info: size: 0.125366 MiB name: RG_ring_2_3657976 00:06:08.190 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:08.190 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:08.190 element at address: 0x200027e69100 with size: 0.023743 MiB 00:06:08.190 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:08.190 element at address: 0x200003adb5c0 with size: 0.016113 
MiB 00:06:08.190 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3657976 00:06:08.190 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:06:08.190 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:08.190 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:06:08.190 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3657976 00:06:08.190 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:08.190 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3657976 00:06:08.190 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:06:08.191 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:08.191 01:27:21 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:08.191 01:27:21 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3657976 00:06:08.191 01:27:21 -- common/autotest_common.sh@926 -- # '[' -z 3657976 ']' 00:06:08.191 01:27:21 -- common/autotest_common.sh@930 -- # kill -0 3657976 00:06:08.191 01:27:21 -- common/autotest_common.sh@931 -- # uname 00:06:08.191 01:27:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:08.191 01:27:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3657976 00:06:08.191 01:27:21 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:08.191 01:27:21 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:08.191 01:27:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3657976' 00:06:08.191 killing process with pid 3657976 00:06:08.191 01:27:21 -- common/autotest_common.sh@945 -- # kill 3657976 00:06:08.191 01:27:21 -- common/autotest_common.sh@950 -- # wait 3657976 00:06:08.449 00:06:08.449 real 0m1.560s 00:06:08.449 user 0m1.708s 00:06:08.449 sys 0m0.426s 00:06:08.449 01:27:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.449 01:27:21 -- common/autotest_common.sh@10 -- # set +x 00:06:08.449 
************************************ 00:06:08.449 END TEST dpdk_mem_utility 00:06:08.449 ************************************ 00:06:08.449 01:27:21 -- spdk/autotest.sh@187 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:08.449 01:27:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:08.449 01:27:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:08.449 01:27:21 -- common/autotest_common.sh@10 -- # set +x 00:06:08.449 ************************************ 00:06:08.449 START TEST event 00:06:08.449 ************************************ 00:06:08.449 01:27:21 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:08.449 * Looking for test storage... 00:06:08.707 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:08.707 01:27:21 -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:08.707 01:27:21 -- bdev/nbd_common.sh@6 -- # set -e 00:06:08.707 01:27:21 -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:08.707 01:27:21 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:06:08.707 01:27:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:08.707 01:27:21 -- common/autotest_common.sh@10 -- # set +x 00:06:08.707 ************************************ 00:06:08.707 START TEST event_perf 00:06:08.707 ************************************ 00:06:08.707 01:27:21 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:08.707 Running I/O for 1 seconds...[2024-07-23 01:27:21.566134] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:06:08.707 [2024-07-23 01:27:21.566208] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3658170 ] 00:06:08.707 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.707 [2024-07-23 01:27:21.625485] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:08.707 [2024-07-23 01:27:21.714579] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:08.707 [2024-07-23 01:27:21.714646] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:08.707 [2024-07-23 01:27:21.714711] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:08.707 [2024-07-23 01:27:21.714714] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.075 Running I/O for 1 seconds... 00:06:10.075 lcore 0: 231277 00:06:10.075 lcore 1: 231277 00:06:10.075 lcore 2: 231276 00:06:10.075 lcore 3: 231277 00:06:10.075 done. 
00:06:10.075 00:06:10.075 real 0m1.245s 00:06:10.075 user 0m4.151s 00:06:10.075 sys 0m0.090s 00:06:10.075 01:27:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:10.075 01:27:22 -- common/autotest_common.sh@10 -- # set +x 00:06:10.075 ************************************ 00:06:10.075 END TEST event_perf 00:06:10.075 ************************************ 00:06:10.075 01:27:22 -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:10.075 01:27:22 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:06:10.075 01:27:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:10.075 01:27:22 -- common/autotest_common.sh@10 -- # set +x 00:06:10.075 ************************************ 00:06:10.075 START TEST event_reactor 00:06:10.075 ************************************ 00:06:10.075 01:27:22 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:10.075 [2024-07-23 01:27:22.835231] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:06:10.075 [2024-07-23 01:27:22.835313] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3658329 ] 00:06:10.075 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.075 [2024-07-23 01:27:22.900827] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.075 [2024-07-23 01:27:22.989072] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.009 test_start 00:06:11.009 oneshot 00:06:11.009 tick 100 00:06:11.009 tick 100 00:06:11.009 tick 250 00:06:11.009 tick 100 00:06:11.009 tick 100 00:06:11.009 tick 100 00:06:11.009 tick 250 00:06:11.009 tick 500 00:06:11.009 tick 100 00:06:11.009 tick 100 00:06:11.009 tick 250 00:06:11.009 tick 100 00:06:11.009 tick 100 00:06:11.009 test_end 00:06:11.009 00:06:11.009 real 0m1.243s 00:06:11.009 user 0m1.152s 00:06:11.009 sys 0m0.086s 00:06:11.009 01:27:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:11.009 01:27:24 -- common/autotest_common.sh@10 -- # set +x 00:06:11.009 ************************************ 00:06:11.009 END TEST event_reactor 00:06:11.009 ************************************ 00:06:11.009 01:27:24 -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:11.009 01:27:24 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:06:11.010 01:27:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:11.010 01:27:24 -- common/autotest_common.sh@10 -- # set +x 00:06:11.010 ************************************ 00:06:11.010 START TEST event_reactor_perf 00:06:11.010 ************************************ 00:06:11.010 01:27:24 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:11.010 [2024-07-23 01:27:24.100471] 
Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:11.010 [2024-07-23 01:27:24.100537] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3658493 ] 00:06:11.267 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.267 [2024-07-23 01:27:24.161843] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.267 [2024-07-23 01:27:24.252526] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.640 test_start 00:06:12.640 test_end 00:06:12.640 Performance: 353088 events per second 00:06:12.640 00:06:12.640 real 0m1.243s 00:06:12.640 user 0m1.158s 00:06:12.640 sys 0m0.080s 00:06:12.640 01:27:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.640 01:27:25 -- common/autotest_common.sh@10 -- # set +x 00:06:12.640 ************************************ 00:06:12.640 END TEST event_reactor_perf 00:06:12.640 ************************************ 00:06:12.640 01:27:25 -- event/event.sh@49 -- # uname -s 00:06:12.640 01:27:25 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:12.640 01:27:25 -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:12.640 01:27:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:12.640 01:27:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:12.640 01:27:25 -- common/autotest_common.sh@10 -- # set +x 00:06:12.640 ************************************ 00:06:12.640 START TEST event_scheduler 00:06:12.640 ************************************ 00:06:12.640 01:27:25 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:12.640 * Looking for test storage... 
00:06:12.640 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:06:12.640 01:27:25 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:12.640 01:27:25 -- scheduler/scheduler.sh@35 -- # scheduler_pid=3658675 00:06:12.640 01:27:25 -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:12.640 01:27:25 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:12.640 01:27:25 -- scheduler/scheduler.sh@37 -- # waitforlisten 3658675 00:06:12.640 01:27:25 -- common/autotest_common.sh@819 -- # '[' -z 3658675 ']' 00:06:12.640 01:27:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.640 01:27:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:12.640 01:27:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.640 01:27:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:12.640 01:27:25 -- common/autotest_common.sh@10 -- # set +x 00:06:12.640 [2024-07-23 01:27:25.445444] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:06:12.640 [2024-07-23 01:27:25.445541] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3658675 ] 00:06:12.640 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.640 [2024-07-23 01:27:25.503381] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:12.640 [2024-07-23 01:27:25.593845] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.640 [2024-07-23 01:27:25.593911] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:12.640 [2024-07-23 01:27:25.593987] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:12.640 [2024-07-23 01:27:25.593990] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:12.640 01:27:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:12.640 01:27:25 -- common/autotest_common.sh@852 -- # return 0 00:06:12.640 01:27:25 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:12.640 01:27:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:12.640 01:27:25 -- common/autotest_common.sh@10 -- # set +x 00:06:12.640 POWER: Env isn't set yet! 00:06:12.640 POWER: Attempting to initialise ACPI cpufreq power management... 00:06:12.640 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_available_frequencies 00:06:12.640 POWER: Cannot get available frequencies of lcore 0 00:06:12.640 POWER: Attempting to initialise PSTAT power management... 
00:06:12.640 POWER: Power management governor of lcore 0 has been set to 'performance' successfully
00:06:12.641 POWER: Initialized successfully for lcore 0 power management
00:06:12.641 POWER: Power management governor of lcore 1 has been set to 'performance' successfully
00:06:12.641 POWER: Initialized successfully for lcore 1 power management
00:06:12.641 POWER: Power management governor of lcore 2 has been set to 'performance' successfully
00:06:12.641 POWER: Initialized successfully for lcore 2 power management
00:06:12.641 POWER: Power management governor of lcore 3 has been set to 'performance' successfully
00:06:12.641 POWER: Initialized successfully for lcore 3 power management
00:06:12.641 [2024-07-23 01:27:25.706805] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20
00:06:12.641 [2024-07-23 01:27:25.706824] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80
00:06:12.641 [2024-07-23 01:27:25.706835] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95
00:06:12.641 01:27:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:06:12.641 01:27:25 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:06:12.641 01:27:25 -- common/autotest_common.sh@551 -- # xtrace_disable
00:06:12.641 01:27:25 -- common/autotest_common.sh@10 -- # set +x
00:06:12.899 [2024-07-23 01:27:25.808055] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
00:06:12.899 01:27:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:06:12.899 01:27:25 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:06:12.899 01:27:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:06:12.899 01:27:25 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:06:12.899 01:27:25 -- common/autotest_common.sh@10 -- # set +x
00:06:12.899 ************************************
00:06:12.899 START TEST scheduler_create_thread
00:06:12.899 ************************************
00:06:12.899 01:27:25 -- common/autotest_common.sh@1104 -- # scheduler_create_thread
00:06:12.899 01:27:25 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:06:12.899 01:27:25 -- common/autotest_common.sh@551 -- # xtrace_disable
00:06:12.899 01:27:25 -- common/autotest_common.sh@10 -- # set +x
00:06:12.899 2
00:06:12.899 01:27:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:06:12.899 01:27:25 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:06:12.899 01:27:25 -- common/autotest_common.sh@551 -- # xtrace_disable
00:06:12.899 01:27:25 -- common/autotest_common.sh@10 -- # set +x
00:06:12.899 3
00:06:12.899 01:27:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:06:12.899 01:27:25 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
00:06:12.899 01:27:25 -- common/autotest_common.sh@551 -- # xtrace_disable
00:06:12.899 01:27:25 -- common/autotest_common.sh@10 -- # set +x
00:06:12.899 4
00:06:12.899 01:27:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:06:12.899 01:27:25 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100
00:06:12.899 01:27:25 -- common/autotest_common.sh@551 -- # xtrace_disable
00:06:12.899 01:27:25 -- common/autotest_common.sh@10 -- # set +x
00:06:12.899 5
00:06:12.899 01:27:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:06:12.899 01:27:25 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:06:12.899 01:27:25 -- common/autotest_common.sh@551 -- # xtrace_disable
00:06:12.899 01:27:25 -- common/autotest_common.sh@10 -- # set +x
00:06:12.899 6
00:06:12.899 01:27:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:06:12.899 01:27:25 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:06:12.899 01:27:25 -- common/autotest_common.sh@551 -- # xtrace_disable
00:06:12.899 01:27:25 -- common/autotest_common.sh@10 -- # set +x
00:06:12.899 7
00:06:12.899 01:27:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:06:12.899 01:27:25 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:06:12.899 01:27:25 -- common/autotest_common.sh@551 -- # xtrace_disable
00:06:12.899 01:27:25 -- common/autotest_common.sh@10 -- # set +x
00:06:12.899 8
00:06:12.899 01:27:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:06:12.899 01:27:25 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:06:12.899 01:27:25 -- common/autotest_common.sh@551 -- # xtrace_disable
00:06:12.899 01:27:25 -- common/autotest_common.sh@10 -- # set +x
00:06:12.899 9
00:06:12.899 01:27:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:06:12.899 01:27:25 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:06:12.899 01:27:25 -- common/autotest_common.sh@551 -- # xtrace_disable
00:06:12.899 01:27:25 -- common/autotest_common.sh@10 -- # set +x
00:06:12.899 10
00:06:12.899 01:27:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:06:12.899 01:27:25 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:06:12.899 01:27:25 -- common/autotest_common.sh@551 -- # xtrace_disable
00:06:12.899 01:27:25 -- common/autotest_common.sh@10 -- # set +x
00:06:12.899 01:27:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:06:12.899 01:27:25 -- scheduler/scheduler.sh@22 -- # thread_id=11
00:06:12.899 01:27:25 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:06:12.899 01:27:25 -- common/autotest_common.sh@551 -- # xtrace_disable
00:06:12.899 01:27:25 -- common/autotest_common.sh@10 -- # set +x
00:06:12.899 01:27:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:06:12.899 01:27:25 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
00:06:12.899 01:27:25 -- common/autotest_common.sh@551 -- # xtrace_disable
00:06:12.899 01:27:25 -- common/autotest_common.sh@10 -- # set +x
00:06:14.271 01:27:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:06:14.271 01:27:27 -- scheduler/scheduler.sh@25 -- # thread_id=12
00:06:14.271 01:27:27 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
00:06:14.271 01:27:27 -- common/autotest_common.sh@551 -- # xtrace_disable
00:06:14.271 01:27:27 -- common/autotest_common.sh@10 -- # set +x
00:06:15.642 01:27:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:06:15.642
00:06:15.642 real 0m2.617s
00:06:15.642 user 0m0.014s
00:06:15.642 sys 0m0.001s
00:06:15.642 01:27:28 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:15.642 01:27:28 -- common/autotest_common.sh@10 -- # set +x
00:06:15.642 ************************************
00:06:15.642 END TEST scheduler_create_thread
00:06:15.642 ************************************
00:06:15.642 01:27:28 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:06:15.642 01:27:28 -- scheduler/scheduler.sh@46 -- # killprocess 3658675
00:06:15.642 01:27:28 -- common/autotest_common.sh@926 -- # '[' -z 3658675 ']'
00:06:15.642 01:27:28 -- common/autotest_common.sh@930 -- # kill -0 3658675
00:06:15.642 01:27:28 -- common/autotest_common.sh@931 -- # uname
00:06:15.642 01:27:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:06:15.642 01:27:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3658675
00:06:15.642 01:27:28 -- common/autotest_common.sh@932 -- # process_name=reactor_2
00:06:15.642 01:27:28 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']'
00:06:15.642 01:27:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3658675'
00:06:15.642 killing process with pid 3658675
00:06:15.642 01:27:28 -- common/autotest_common.sh@945 -- # kill 3658675
00:06:15.642 01:27:28 -- common/autotest_common.sh@950 -- # wait 3658675
00:06:15.900 [2024-07-23 01:27:28.911169] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped.
00:06:16.158 POWER: Power management governor of lcore 0 has been set to 'userspace' successfully
00:06:16.158 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original
00:06:16.158 POWER: Power management governor of lcore 1 has been set to 'schedutil' successfully
00:06:16.158 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original
00:06:16.158 POWER: Power management governor of lcore 2 has been set to 'schedutil' successfully
00:06:16.158 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original
00:06:16.158 POWER: Power management governor of lcore 3 has been set to 'schedutil' successfully
00:06:16.158 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original
00:06:16.158
00:06:16.158 real 0m3.782s
00:06:16.158 user 0m5.815s
00:06:16.158 sys 0m0.291s
00:06:16.158 01:27:29 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:16.158 01:27:29 -- common/autotest_common.sh@10 -- # set +x
00:06:16.158 ************************************
00:06:16.158 END TEST event_scheduler
00:06:16.158 ************************************
00:06:16.158 01:27:29 -- event/event.sh@51 -- # modprobe -n nbd
00:06:16.158 01:27:29 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test
00:06:16.158 01:27:29 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:06:16.158 01:27:29 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:06:16.158 01:27:29 -- common/autotest_common.sh@10 -- # set +x
00:06:16.158 ************************************
00:06:16.158 START TEST app_repeat
00:06:16.158 ************************************
00:06:16.158 01:27:29 -- common/autotest_common.sh@1104 -- # app_repeat_test
00:06:16.158 01:27:29 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:16.158 01:27:29 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:16.158 01:27:29 -- event/event.sh@13 -- # local nbd_list
00:06:16.158 01:27:29 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:16.158 01:27:29 -- event/event.sh@14 -- # local bdev_list
00:06:16.158 01:27:29 -- event/event.sh@15 -- # local repeat_times=4
00:06:16.158 01:27:29 -- event/event.sh@17 -- # modprobe nbd
00:06:16.158 01:27:29 -- event/event.sh@19 -- # repeat_pid=3659256
00:06:16.158 01:27:29 -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4
00:06:16.158 01:27:29 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
00:06:16.158 01:27:29 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3659256'
00:06:16.158 Process app_repeat pid: 3659256
00:06:16.158 01:27:29 -- event/event.sh@23 -- # for i in {0..2}
00:06:16.158 01:27:29 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0'
00:06:16.158 spdk_app_start Round 0
00:06:16.158 01:27:29 -- event/event.sh@25 -- # waitforlisten 3659256 /var/tmp/spdk-nbd.sock
00:06:16.158 01:27:29 -- common/autotest_common.sh@819 -- # '[' -z 3659256 ']'
00:06:16.158 01:27:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:06:16.158 01:27:29 -- common/autotest_common.sh@824 -- # local max_retries=100
00:06:16.158 01:27:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:06:16.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:06:16.158 01:27:29 -- common/autotest_common.sh@828 -- # xtrace_disable
00:06:16.158 01:27:29 -- common/autotest_common.sh@10 -- # set +x
00:06:16.158 [2024-07-23 01:27:29.193714] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
00:06:16.158 [2024-07-23 01:27:29.193787] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3659256 ]
00:06:16.158 EAL: No free 2048 kB hugepages reported on node 1
00:06:16.158 [2024-07-23 01:27:29.252752] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:16.417 [2024-07-23 01:27:29.336407] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:06:16.417 [2024-07-23 01:27:29.336411] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:17.385 01:27:30 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:06:17.386 01:27:30 -- common/autotest_common.sh@852 -- # return 0
00:06:17.386 01:27:30 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:17.386 Malloc0
00:06:17.386 01:27:30 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:17.646 Malloc1
00:06:17.647 01:27:30 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:17.647 01:27:30 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:17.647 01:27:30 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:17.647 01:27:30 -- bdev/nbd_common.sh@91 -- # local bdev_list
00:06:17.647 01:27:30 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:17.647 01:27:30 -- bdev/nbd_common.sh@92 -- # local nbd_list
00:06:17.647 01:27:30 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:17.647 01:27:30 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:17.647 01:27:30 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:17.647 01:27:30 -- bdev/nbd_common.sh@10 -- # local bdev_list
00:06:17.647 01:27:30 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:17.647 01:27:30 -- bdev/nbd_common.sh@11 -- # local nbd_list
00:06:17.647 01:27:30 -- bdev/nbd_common.sh@12 -- # local i
00:06:17.647 01:27:30 -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:06:17.647 01:27:30 -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:17.647 01:27:30 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:06:17.904 /dev/nbd0
00:06:17.904 01:27:30 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:06:17.904 01:27:30 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:06:17.904 01:27:30 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0
00:06:17.904 01:27:30 -- common/autotest_common.sh@857 -- # local i
00:06:17.904 01:27:30 -- common/autotest_common.sh@859 -- # (( i = 1 ))
00:06:17.905 01:27:30 -- common/autotest_common.sh@859 -- # (( i <= 20 ))
00:06:17.905 01:27:30 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions
00:06:17.905 01:27:30 -- common/autotest_common.sh@861 -- # break
00:06:17.905 01:27:30 -- common/autotest_common.sh@872 -- # (( i = 1 ))
00:06:17.905 01:27:30 -- common/autotest_common.sh@872 -- # (( i <= 20 ))
00:06:17.905 01:27:30 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:17.905 1+0 records in
00:06:17.905 1+0 records out
00:06:17.905 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000230072 s, 17.8 MB/s
00:06:17.905 01:27:30 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:17.905 01:27:30 -- common/autotest_common.sh@874 -- # size=4096
00:06:17.905 01:27:30 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:17.905 01:27:30 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']'
00:06:17.905 01:27:30 -- common/autotest_common.sh@877 -- # return 0
00:06:17.905 01:27:30 -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:17.905 01:27:30 -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:17.905 01:27:30 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:06:18.163 /dev/nbd1
00:06:18.163 01:27:31 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:06:18.163 01:27:31 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:06:18.163 01:27:31 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1
00:06:18.163 01:27:31 -- common/autotest_common.sh@857 -- # local i
00:06:18.163 01:27:31 -- common/autotest_common.sh@859 -- # (( i = 1 ))
00:06:18.163 01:27:31 -- common/autotest_common.sh@859 -- # (( i <= 20 ))
00:06:18.163 01:27:31 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions
00:06:18.163 01:27:31 -- common/autotest_common.sh@861 -- # break
00:06:18.163 01:27:31 -- common/autotest_common.sh@872 -- # (( i = 1 ))
00:06:18.163 01:27:31 -- common/autotest_common.sh@872 -- # (( i <= 20 ))
00:06:18.163 01:27:31 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:18.163 1+0 records in
00:06:18.163 1+0 records out
00:06:18.163 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000196477 s, 20.8 MB/s
00:06:18.163 01:27:31 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:18.163 01:27:31 -- common/autotest_common.sh@874 -- # size=4096
00:06:18.163 01:27:31 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:18.163 01:27:31 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']'
00:06:18.163 01:27:31 -- common/autotest_common.sh@877 -- # return 0
00:06:18.163 01:27:31 -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:18.163 01:27:31 -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:18.163 01:27:31 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:18.163 01:27:31 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:18.163 01:27:31 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:18.421 01:27:31 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:06:18.421 {
00:06:18.421 "nbd_device": "/dev/nbd0",
00:06:18.421 "bdev_name": "Malloc0"
00:06:18.421 },
00:06:18.421 {
00:06:18.421 "nbd_device": "/dev/nbd1",
00:06:18.421 "bdev_name": "Malloc1"
00:06:18.421 }
00:06:18.421 ]'
00:06:18.421 01:27:31 -- bdev/nbd_common.sh@64 -- # echo '[
00:06:18.421 {
00:06:18.421 "nbd_device": "/dev/nbd0",
00:06:18.421 "bdev_name": "Malloc0"
00:06:18.421 },
00:06:18.421 {
00:06:18.421 "nbd_device": "/dev/nbd1",
00:06:18.421 "bdev_name": "Malloc1"
00:06:18.421 }
00:06:18.421 ]'
00:06:18.421 01:27:31 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:18.421 01:27:31 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:06:18.421 /dev/nbd1'
00:06:18.421 01:27:31 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:06:18.421 /dev/nbd1'
00:06:18.679 01:27:31 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:18.679 01:27:31 -- bdev/nbd_common.sh@65 -- # count=2
00:06:18.679 01:27:31 -- bdev/nbd_common.sh@66 -- # echo 2
00:06:18.679 01:27:31 -- bdev/nbd_common.sh@95 -- # count=2
00:06:18.679 01:27:31 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:06:18.679 01:27:31 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:06:18.679 01:27:31 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:18.679 01:27:31 -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:18.679 01:27:31 -- bdev/nbd_common.sh@71 -- # local operation=write
00:06:18.679 01:27:31 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:06:18.679 01:27:31 -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:06:18.679 01:27:31 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:06:18.679 256+0 records in
00:06:18.679 256+0 records out
00:06:18.679 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00411689 s, 255 MB/s
00:06:18.679 01:27:31 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:18.679 01:27:31 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:06:18.679 256+0 records in
00:06:18.679 256+0 records out
00:06:18.679 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0236251 s, 44.4 MB/s
00:06:18.679 01:27:31 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:18.679 01:27:31 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:06:18.679 256+0 records in
00:06:18.679 256+0 records out
00:06:18.679 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0251066 s, 41.8 MB/s
00:06:18.679 01:27:31 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:06:18.679 01:27:31 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:18.679 01:27:31 -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:18.679 01:27:31 -- bdev/nbd_common.sh@71 -- # local operation=verify
00:06:18.679 01:27:31 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:06:18.679 01:27:31 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:06:18.679 01:27:31 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:06:18.679 01:27:31 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:18.679 01:27:31 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:06:18.679 01:27:31 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:18.679 01:27:31 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:06:18.679 01:27:31 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:06:18.679 01:27:31 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:06:18.679 01:27:31 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:18.679 01:27:31 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:18.679 01:27:31 -- bdev/nbd_common.sh@50 -- # local nbd_list
00:06:18.679 01:27:31 -- bdev/nbd_common.sh@51 -- # local i
00:06:18.679 01:27:31 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:18.679 01:27:31 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:06:18.937 01:27:31 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:06:18.937 01:27:31 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:06:18.937 01:27:31 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:06:18.937 01:27:31 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:18.937 01:27:31 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:18.937 01:27:31 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:06:18.937 01:27:31 -- bdev/nbd_common.sh@41 -- # break
00:06:18.937 01:27:31 -- bdev/nbd_common.sh@45 -- # return 0
00:06:18.937 01:27:31 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:18.937 01:27:31 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:06:19.195 01:27:32 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:06:19.195 01:27:32 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:06:19.195 01:27:32 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:06:19.195 01:27:32 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:19.195 01:27:32 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:19.195 01:27:32 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:06:19.195 01:27:32 -- bdev/nbd_common.sh@41 -- # break
00:06:19.195 01:27:32 -- bdev/nbd_common.sh@45 -- # return 0
00:06:19.195 01:27:32 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:19.195 01:27:32 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:19.195 01:27:32 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:19.452 01:27:32 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:06:19.452 01:27:32 -- bdev/nbd_common.sh@64 -- # echo '[]'
00:06:19.452 01:27:32 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:19.452 01:27:32 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:06:19.452 01:27:32 -- bdev/nbd_common.sh@65 -- # echo ''
00:06:19.452 01:27:32 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:19.452 01:27:32 -- bdev/nbd_common.sh@65 -- # true
00:06:19.452 01:27:32 -- bdev/nbd_common.sh@65 -- # count=0
00:06:19.452 01:27:32 -- bdev/nbd_common.sh@66 -- # echo 0
00:06:19.452 01:27:32 -- bdev/nbd_common.sh@104 -- # count=0
00:06:19.452 01:27:32 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:06:19.452 01:27:32 -- bdev/nbd_common.sh@109 -- # return 0
00:06:19.452 01:27:32 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:06:19.710 01:27:32 -- event/event.sh@35 -- # sleep 3
00:06:19.968 [2024-07-23 01:27:32.887809] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:19.968 [2024-07-23 01:27:32.977171] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:19.968 [2024-07-23 01:27:32.977171] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:06:19.968 [2024-07-23 01:27:33.038683] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:06:19.968 [2024-07-23 01:27:33.038753] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:06:23.249 01:27:35 -- event/event.sh@23 -- # for i in {0..2}
00:06:23.249 01:27:35 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1'
00:06:23.249 spdk_app_start Round 1
00:06:23.249 01:27:35 -- event/event.sh@25 -- # waitforlisten 3659256 /var/tmp/spdk-nbd.sock
00:06:23.249 01:27:35 -- common/autotest_common.sh@819 -- # '[' -z 3659256 ']'
00:06:23.249 01:27:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:06:23.249 01:27:35 -- common/autotest_common.sh@824 -- # local max_retries=100
00:06:23.249 01:27:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:06:23.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:06:23.249 01:27:35 -- common/autotest_common.sh@828 -- # xtrace_disable
00:06:23.249 01:27:35 -- common/autotest_common.sh@10 -- # set +x
00:06:23.249 01:27:35 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:06:23.249 01:27:35 -- common/autotest_common.sh@852 -- # return 0
00:06:23.249 01:27:35 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:23.249 Malloc0
00:06:23.249 01:27:36 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:06:23.508 Malloc1
00:06:23.508 01:27:36 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:23.508 01:27:36 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:23.508 01:27:36 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:23.508 01:27:36 -- bdev/nbd_common.sh@91 -- # local bdev_list
00:06:23.508 01:27:36 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:23.508 01:27:36 -- bdev/nbd_common.sh@92 -- # local nbd_list
00:06:23.508 01:27:36 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:06:23.508 01:27:36 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:23.508 01:27:36 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:06:23.508 01:27:36 -- bdev/nbd_common.sh@10 -- # local bdev_list
00:06:23.508 01:27:36 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:23.508 01:27:36 -- bdev/nbd_common.sh@11 -- # local nbd_list
00:06:23.508 01:27:36 -- bdev/nbd_common.sh@12 -- # local i
00:06:23.508 01:27:36 -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:06:23.508 01:27:36 -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:23.508 01:27:36 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:06:23.766 /dev/nbd0
00:06:23.766 01:27:36 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:06:23.766 01:27:36 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:06:23.766 01:27:36 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0
00:06:23.766 01:27:36 -- common/autotest_common.sh@857 -- # local i
00:06:23.766 01:27:36 -- common/autotest_common.sh@859 -- # (( i = 1 ))
00:06:23.766 01:27:36 -- common/autotest_common.sh@859 -- # (( i <= 20 ))
00:06:23.766 01:27:36 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions
00:06:23.766 01:27:36 -- common/autotest_common.sh@861 -- # break
00:06:23.766 01:27:36 -- common/autotest_common.sh@872 -- # (( i = 1 ))
00:06:23.766 01:27:36 -- common/autotest_common.sh@872 -- # (( i <= 20 ))
00:06:23.766 01:27:36 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:23.766 1+0 records in
00:06:23.766 1+0 records out
00:06:23.766 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000143106 s, 28.6 MB/s
00:06:23.766 01:27:36 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:23.766 01:27:36 -- common/autotest_common.sh@874 -- # size=4096
00:06:23.766 01:27:36 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:23.766 01:27:36 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']'
00:06:23.766 01:27:36 -- common/autotest_common.sh@877 -- # return 0
00:06:23.766 01:27:36 -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:23.766 01:27:36 -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:23.766 01:27:36 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:06:24.024 /dev/nbd1
00:06:24.024 01:27:36 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:06:24.024 01:27:36 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:06:24.024 01:27:36 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1
00:06:24.024 01:27:36 -- common/autotest_common.sh@857 -- # local i
00:06:24.024 01:27:36 -- common/autotest_common.sh@859 -- # (( i = 1 ))
00:06:24.024 01:27:36 -- common/autotest_common.sh@859 -- # (( i <= 20 ))
00:06:24.024 01:27:36 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions
00:06:24.024 01:27:36 -- common/autotest_common.sh@861 -- # break
00:06:24.024 01:27:36 -- common/autotest_common.sh@872 -- # (( i = 1 ))
00:06:24.024 01:27:36 -- common/autotest_common.sh@872 -- # (( i <= 20 ))
00:06:24.024 01:27:36 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:06:24.024 1+0 records in
00:06:24.024 1+0 records out
00:06:24.024 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000182951 s, 22.4 MB/s
00:06:24.024 01:27:36 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:24.025 01:27:36 -- common/autotest_common.sh@874 -- # size=4096
00:06:24.025 01:27:36 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:06:24.025 01:27:36 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']'
00:06:24.025 01:27:36 -- common/autotest_common.sh@877 -- # return 0
00:06:24.025 01:27:36 -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:06:24.025 01:27:36 -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:06:24.025 01:27:36 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:06:24.025 01:27:36 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:24.025 01:27:36 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:06:24.283 01:27:37 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:06:24.283 {
00:06:24.283 "nbd_device": "/dev/nbd0",
00:06:24.283 "bdev_name": "Malloc0"
00:06:24.283 },
00:06:24.283 {
00:06:24.283 "nbd_device": "/dev/nbd1",
00:06:24.283 "bdev_name": "Malloc1"
00:06:24.283 }
00:06:24.283 ]'
00:06:24.283 01:27:37 -- bdev/nbd_common.sh@64 -- # echo '[
00:06:24.283 {
00:06:24.283 "nbd_device": "/dev/nbd0",
00:06:24.283 "bdev_name": "Malloc0"
00:06:24.283 },
00:06:24.283 {
00:06:24.283 "nbd_device": "/dev/nbd1",
00:06:24.283 "bdev_name": "Malloc1"
00:06:24.283 }
00:06:24.283 ]'
00:06:24.283 01:27:37 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:06:24.283 01:27:37 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:06:24.283 /dev/nbd1'
00:06:24.283 01:27:37 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:06:24.283 /dev/nbd1'
00:06:24.283 01:27:37 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:06:24.283 01:27:37 -- bdev/nbd_common.sh@65 -- # count=2
00:06:24.283 01:27:37 -- bdev/nbd_common.sh@66 -- # echo 2
00:06:24.283 01:27:37 -- bdev/nbd_common.sh@95 -- # count=2
00:06:24.283 01:27:37 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:06:24.283 01:27:37 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:06:24.283 01:27:37 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:24.283 01:27:37 -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:24.283 01:27:37 -- bdev/nbd_common.sh@71 -- # local operation=write
00:06:24.283 01:27:37 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:06:24.283 01:27:37 -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:06:24.283 01:27:37 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:06:24.283 256+0 records in
00:06:24.283 256+0 records out
1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00491762 s, 213 MB/s
00:06:24.283 01:27:37 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:24.283 01:27:37 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:06:24.283 256+0 records in
00:06:24.283 256+0 records out
00:06:24.283 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0233157 s, 45.0 MB/s
00:06:24.283 01:27:37 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:06:24.283 01:27:37 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:06:24.283 256+0 records in
00:06:24.283 256+0 records out
00:06:24.283 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0254279 s, 41.2 MB/s
00:06:24.283 01:27:37 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:06:24.283 01:27:37 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:24.283 01:27:37 -- bdev/nbd_common.sh@70 -- # local nbd_list
00:06:24.283 01:27:37 -- bdev/nbd_common.sh@71 -- # local operation=verify
00:06:24.283 01:27:37 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:06:24.283 01:27:37 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:06:24.283 01:27:37 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:06:24.283 01:27:37 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:24.283 01:27:37 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:06:24.283 01:27:37 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:06:24.283 01:27:37 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:06:24.283 01:27:37 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:06:24.283 01:27:37 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:06:24.283 01:27:37 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:06:24.283 01:27:37 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:06:24.283 01:27:37 -- bdev/nbd_common.sh@50 -- # local nbd_list
00:06:24.283 01:27:37 -- bdev/nbd_common.sh@51 -- # local i
00:06:24.283 01:27:37 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:24.283 01:27:37 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:06:24.541 01:27:37 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:06:24.541 01:27:37 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:06:24.541 01:27:37 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:06:24.541 01:27:37 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:24.541 01:27:37 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:24.541 01:27:37 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:06:24.541 01:27:37 -- bdev/nbd_common.sh@41 -- # break
00:06:24.541 01:27:37 -- bdev/nbd_common.sh@45 -- # return 0
00:06:24.541 01:27:37 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:06:24.541 01:27:37 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:06:24.800 01:27:37 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:06:24.800 01:27:37 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:06:24.800 01:27:37 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:06:24.800 01:27:37 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:06:24.800 01:27:37 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:06:24.800 01:27:37 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:06:24.800 01:27:37 --
bdev/nbd_common.sh@41 -- # break 00:06:24.800 01:27:37 -- bdev/nbd_common.sh@45 -- # return 0 00:06:24.800 01:27:37 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:24.800 01:27:37 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.800 01:27:37 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:25.058 01:27:38 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:25.058 01:27:38 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:25.058 01:27:38 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:25.058 01:27:38 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:25.058 01:27:38 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:25.058 01:27:38 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:25.058 01:27:38 -- bdev/nbd_common.sh@65 -- # true 00:06:25.058 01:27:38 -- bdev/nbd_common.sh@65 -- # count=0 00:06:25.058 01:27:38 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:25.058 01:27:38 -- bdev/nbd_common.sh@104 -- # count=0 00:06:25.058 01:27:38 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:25.058 01:27:38 -- bdev/nbd_common.sh@109 -- # return 0 00:06:25.058 01:27:38 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:25.316 01:27:38 -- event/event.sh@35 -- # sleep 3 00:06:25.574 [2024-07-23 01:27:38.594121] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:25.832 [2024-07-23 01:27:38.682517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:25.832 [2024-07-23 01:27:38.682522] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.832 [2024-07-23 01:27:38.743734] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 
00:06:25.832 [2024-07-23 01:27:38.743806] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:28.360 01:27:41 -- event/event.sh@23 -- # for i in {0..2} 00:06:28.361 01:27:41 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:28.361 spdk_app_start Round 2 00:06:28.361 01:27:41 -- event/event.sh@25 -- # waitforlisten 3659256 /var/tmp/spdk-nbd.sock 00:06:28.361 01:27:41 -- common/autotest_common.sh@819 -- # '[' -z 3659256 ']' 00:06:28.361 01:27:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:28.361 01:27:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:28.361 01:27:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:28.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:28.361 01:27:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:28.361 01:27:41 -- common/autotest_common.sh@10 -- # set +x 00:06:28.619 01:27:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:28.619 01:27:41 -- common/autotest_common.sh@852 -- # return 0 00:06:28.619 01:27:41 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:28.877 Malloc0 00:06:28.877 01:27:41 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:29.135 Malloc1 00:06:29.135 01:27:42 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:29.135 01:27:42 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:29.135 01:27:42 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:29.135 01:27:42 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:29.135 01:27:42 -- 
bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:29.135 01:27:42 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:29.135 01:27:42 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:29.135 01:27:42 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:29.135 01:27:42 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:29.135 01:27:42 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:29.135 01:27:42 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:29.135 01:27:42 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:29.135 01:27:42 -- bdev/nbd_common.sh@12 -- # local i 00:06:29.135 01:27:42 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:29.135 01:27:42 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:29.135 01:27:42 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:29.392 /dev/nbd0 00:06:29.392 01:27:42 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:29.392 01:27:42 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:29.392 01:27:42 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:06:29.392 01:27:42 -- common/autotest_common.sh@857 -- # local i 00:06:29.392 01:27:42 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:29.392 01:27:42 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:29.392 01:27:42 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:06:29.392 01:27:42 -- common/autotest_common.sh@861 -- # break 00:06:29.392 01:27:42 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:29.392 01:27:42 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:29.393 01:27:42 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:29.393 1+0 records in 00:06:29.393 
1+0 records out 00:06:29.393 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000193618 s, 21.2 MB/s 00:06:29.393 01:27:42 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:29.393 01:27:42 -- common/autotest_common.sh@874 -- # size=4096 00:06:29.393 01:27:42 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:29.393 01:27:42 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:29.393 01:27:42 -- common/autotest_common.sh@877 -- # return 0 00:06:29.393 01:27:42 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:29.393 01:27:42 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:29.393 01:27:42 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:29.651 /dev/nbd1 00:06:29.651 01:27:42 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:29.651 01:27:42 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:29.651 01:27:42 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:06:29.651 01:27:42 -- common/autotest_common.sh@857 -- # local i 00:06:29.651 01:27:42 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:29.651 01:27:42 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:29.651 01:27:42 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:06:29.651 01:27:42 -- common/autotest_common.sh@861 -- # break 00:06:29.651 01:27:42 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:29.651 01:27:42 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:29.651 01:27:42 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:29.651 1+0 records in 00:06:29.651 1+0 records out 00:06:29.651 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0188061 s, 218 kB/s 00:06:29.651 01:27:42 -- 
common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:29.651 01:27:42 -- common/autotest_common.sh@874 -- # size=4096 00:06:29.651 01:27:42 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:29.651 01:27:42 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:29.651 01:27:42 -- common/autotest_common.sh@877 -- # return 0 00:06:29.651 01:27:42 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:29.651 01:27:42 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:29.651 01:27:42 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:29.651 01:27:42 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:29.651 01:27:42 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:29.909 01:27:42 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:29.909 { 00:06:29.909 "nbd_device": "/dev/nbd0", 00:06:29.909 "bdev_name": "Malloc0" 00:06:29.909 }, 00:06:29.909 { 00:06:29.910 "nbd_device": "/dev/nbd1", 00:06:29.910 "bdev_name": "Malloc1" 00:06:29.910 } 00:06:29.910 ]' 00:06:29.910 01:27:42 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:29.910 { 00:06:29.910 "nbd_device": "/dev/nbd0", 00:06:29.910 "bdev_name": "Malloc0" 00:06:29.910 }, 00:06:29.910 { 00:06:29.910 "nbd_device": "/dev/nbd1", 00:06:29.910 "bdev_name": "Malloc1" 00:06:29.910 } 00:06:29.910 ]' 00:06:29.910 01:27:42 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:29.910 01:27:42 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:29.910 /dev/nbd1' 00:06:29.910 01:27:42 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:29.910 /dev/nbd1' 00:06:29.910 01:27:42 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:29.910 01:27:42 -- bdev/nbd_common.sh@65 -- # count=2 00:06:29.910 01:27:42 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:29.910 
01:27:42 -- bdev/nbd_common.sh@95 -- # count=2 00:06:29.910 01:27:42 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:29.910 01:27:42 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:29.910 01:27:42 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:29.910 01:27:42 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:29.910 01:27:42 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:29.910 01:27:42 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:29.910 01:27:42 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:29.910 01:27:42 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:29.910 256+0 records in 00:06:29.910 256+0 records out 00:06:29.910 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00507504 s, 207 MB/s 00:06:29.910 01:27:42 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:29.910 01:27:42 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:29.910 256+0 records in 00:06:29.910 256+0 records out 00:06:29.910 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0236481 s, 44.3 MB/s 00:06:29.910 01:27:42 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:29.910 01:27:42 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:29.910 256+0 records in 00:06:29.910 256+0 records out 00:06:29.910 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0251506 s, 41.7 MB/s 00:06:29.910 01:27:42 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:29.910 01:27:42 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:29.910 01:27:42 -- bdev/nbd_common.sh@70 -- # local nbd_list 
00:06:29.910 01:27:42 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:29.910 01:27:42 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:29.910 01:27:42 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:29.910 01:27:42 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:29.910 01:27:42 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:29.910 01:27:42 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:29.910 01:27:42 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:29.910 01:27:42 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:29.910 01:27:42 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:29.910 01:27:42 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:29.910 01:27:42 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:29.910 01:27:42 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:29.910 01:27:42 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:29.910 01:27:42 -- bdev/nbd_common.sh@51 -- # local i 00:06:29.910 01:27:42 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:29.910 01:27:42 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:30.168 01:27:43 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:30.168 01:27:43 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:30.168 01:27:43 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:30.168 01:27:43 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:30.168 01:27:43 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:30.168 01:27:43 -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:30.168 01:27:43 -- bdev/nbd_common.sh@41 -- # break 00:06:30.168 01:27:43 -- bdev/nbd_common.sh@45 -- # return 0 00:06:30.168 01:27:43 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:30.168 01:27:43 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:30.426 01:27:43 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:30.426 01:27:43 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:30.426 01:27:43 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:30.426 01:27:43 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:30.426 01:27:43 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:30.426 01:27:43 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:30.426 01:27:43 -- bdev/nbd_common.sh@41 -- # break 00:06:30.426 01:27:43 -- bdev/nbd_common.sh@45 -- # return 0 00:06:30.426 01:27:43 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:30.426 01:27:43 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.426 01:27:43 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:30.682 01:27:43 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:30.682 01:27:43 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:30.682 01:27:43 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:30.940 01:27:43 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:30.940 01:27:43 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:30.940 01:27:43 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:30.940 01:27:43 -- bdev/nbd_common.sh@65 -- # true 00:06:30.940 01:27:43 -- bdev/nbd_common.sh@65 -- # count=0 00:06:30.940 01:27:43 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:30.940 01:27:43 -- bdev/nbd_common.sh@104 -- # count=0 00:06:30.940 01:27:43 -- 
bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:30.940 01:27:43 -- bdev/nbd_common.sh@109 -- # return 0 00:06:30.940 01:27:43 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:31.199 01:27:44 -- event/event.sh@35 -- # sleep 3 00:06:31.199 [2024-07-23 01:27:44.287822] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:31.458 [2024-07-23 01:27:44.375648] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:31.458 [2024-07-23 01:27:44.375654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.458 [2024-07-23 01:27:44.436958] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:31.458 [2024-07-23 01:27:44.437038] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:33.985 01:27:47 -- event/event.sh@38 -- # waitforlisten 3659256 /var/tmp/spdk-nbd.sock 00:06:33.985 01:27:47 -- common/autotest_common.sh@819 -- # '[' -z 3659256 ']' 00:06:33.985 01:27:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:33.985 01:27:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:33.985 01:27:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:33.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:33.985 01:27:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:33.985 01:27:47 -- common/autotest_common.sh@10 -- # set +x 00:06:34.243 01:27:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:34.243 01:27:47 -- common/autotest_common.sh@852 -- # return 0 00:06:34.243 01:27:47 -- event/event.sh@39 -- # killprocess 3659256 00:06:34.243 01:27:47 -- common/autotest_common.sh@926 -- # '[' -z 3659256 ']' 00:06:34.243 01:27:47 -- common/autotest_common.sh@930 -- # kill -0 3659256 00:06:34.243 01:27:47 -- common/autotest_common.sh@931 -- # uname 00:06:34.243 01:27:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:34.243 01:27:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3659256 00:06:34.243 01:27:47 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:34.243 01:27:47 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:34.243 01:27:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3659256' 00:06:34.243 killing process with pid 3659256 00:06:34.243 01:27:47 -- common/autotest_common.sh@945 -- # kill 3659256 00:06:34.243 01:27:47 -- common/autotest_common.sh@950 -- # wait 3659256 00:06:34.504 spdk_app_start is called in Round 0. 00:06:34.504 Shutdown signal received, stop current app iteration 00:06:34.504 Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 reinitialization... 00:06:34.504 spdk_app_start is called in Round 1. 00:06:34.504 Shutdown signal received, stop current app iteration 00:06:34.504 Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 reinitialization... 00:06:34.504 spdk_app_start is called in Round 2. 00:06:34.504 Shutdown signal received, stop current app iteration 00:06:34.504 Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 reinitialization... 00:06:34.504 spdk_app_start is called in Round 3. 
00:06:34.504 Shutdown signal received, stop current app iteration 00:06:34.504 01:27:47 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:34.504 01:27:47 -- event/event.sh@42 -- # return 0 00:06:34.504 00:06:34.504 real 0m18.362s 00:06:34.504 user 0m39.902s 00:06:34.504 sys 0m3.199s 00:06:34.504 01:27:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:34.504 01:27:47 -- common/autotest_common.sh@10 -- # set +x 00:06:34.504 ************************************ 00:06:34.504 END TEST app_repeat 00:06:34.504 ************************************ 00:06:34.504 01:27:47 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:34.504 01:27:47 -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:34.504 01:27:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:34.504 01:27:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:34.504 01:27:47 -- common/autotest_common.sh@10 -- # set +x 00:06:34.504 ************************************ 00:06:34.504 START TEST cpu_locks 00:06:34.504 ************************************ 00:06:34.504 01:27:47 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:34.791 * Looking for test storage... 
00:06:34.791 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:34.791 01:27:47 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:34.791 01:27:47 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:34.791 01:27:47 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:34.791 01:27:47 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:34.791 01:27:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:34.791 01:27:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:34.791 01:27:47 -- common/autotest_common.sh@10 -- # set +x 00:06:34.791 ************************************ 00:06:34.791 START TEST default_locks 00:06:34.791 ************************************ 00:06:34.791 01:27:47 -- common/autotest_common.sh@1104 -- # default_locks 00:06:34.791 01:27:47 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3661675 00:06:34.791 01:27:47 -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:34.791 01:27:47 -- event/cpu_locks.sh@47 -- # waitforlisten 3661675 00:06:34.791 01:27:47 -- common/autotest_common.sh@819 -- # '[' -z 3661675 ']' 00:06:34.791 01:27:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.791 01:27:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:34.791 01:27:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.791 01:27:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:34.791 01:27:47 -- common/autotest_common.sh@10 -- # set +x 00:06:34.791 [2024-07-23 01:27:47.663858] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:06:34.791 [2024-07-23 01:27:47.663957] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3661675 ] 00:06:34.791 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.791 [2024-07-23 01:27:47.722102] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.791 [2024-07-23 01:27:47.803608] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:34.791 [2024-07-23 01:27:47.803794] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.729 01:27:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:35.729 01:27:48 -- common/autotest_common.sh@852 -- # return 0 00:06:35.729 01:27:48 -- event/cpu_locks.sh@49 -- # locks_exist 3661675 00:06:35.729 01:27:48 -- event/cpu_locks.sh@22 -- # lslocks -p 3661675 00:06:35.729 01:27:48 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:35.989 lslocks: write error 00:06:35.989 01:27:48 -- event/cpu_locks.sh@50 -- # killprocess 3661675 00:06:35.989 01:27:48 -- common/autotest_common.sh@926 -- # '[' -z 3661675 ']' 00:06:35.989 01:27:48 -- common/autotest_common.sh@930 -- # kill -0 3661675 00:06:35.989 01:27:48 -- common/autotest_common.sh@931 -- # uname 00:06:35.989 01:27:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:35.989 01:27:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3661675 00:06:35.989 01:27:48 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:35.989 01:27:48 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:35.989 01:27:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3661675' 00:06:35.989 killing process with pid 3661675 00:06:35.989 01:27:48 -- common/autotest_common.sh@945 -- # kill 3661675 00:06:35.989 01:27:48 -- common/autotest_common.sh@950 -- # 
wait 3661675 00:06:36.247 01:27:49 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3661675 00:06:36.247 01:27:49 -- common/autotest_common.sh@640 -- # local es=0 00:06:36.247 01:27:49 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 3661675 00:06:36.247 01:27:49 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:06:36.247 01:27:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:36.247 01:27:49 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:06:36.247 01:27:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:36.247 01:27:49 -- common/autotest_common.sh@643 -- # waitforlisten 3661675 00:06:36.247 01:27:49 -- common/autotest_common.sh@819 -- # '[' -z 3661675 ']' 00:06:36.247 01:27:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.247 01:27:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:36.247 01:27:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:36.247 01:27:49 -- common/autotest_common.sh@828 -- # xtrace_disable
00:06:36.247 01:27:49 -- common/autotest_common.sh@10 -- # set +x
00:06:36.247 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (3661675) - No such process
00:06:36.247 ERROR: process (pid: 3661675) is no longer running
00:06:36.247 01:27:49 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:06:36.247 01:27:49 -- common/autotest_common.sh@852 -- # return 1
00:06:36.247 01:27:49 -- common/autotest_common.sh@643 -- # es=1
00:06:36.247 01:27:49 -- common/autotest_common.sh@651 -- # (( es > 128 ))
00:06:36.247 01:27:49 -- common/autotest_common.sh@662 -- # [[ -n '' ]]
00:06:36.247 01:27:49 -- common/autotest_common.sh@667 -- # (( !es == 0 ))
00:06:36.247 01:27:49 -- event/cpu_locks.sh@54 -- # no_locks
00:06:36.247 01:27:49 -- event/cpu_locks.sh@26 -- # lock_files=()
00:06:36.247 01:27:49 -- event/cpu_locks.sh@26 -- # local lock_files
00:06:36.247 01:27:49 -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:06:36.247
00:06:36.247 real 0m1.717s
00:06:36.247 user 0m1.840s
00:06:36.247 sys 0m0.555s
00:06:36.247 01:27:49 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:36.247 01:27:49 -- common/autotest_common.sh@10 -- # set +x
00:06:36.247 ************************************
00:06:36.247 END TEST default_locks
00:06:36.247 ************************************
00:06:36.505 01:27:49 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc
00:06:36.505 01:27:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:06:36.505 01:27:49 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:06:36.505 01:27:49 -- common/autotest_common.sh@10 -- # set +x
00:06:36.506 ************************************
00:06:36.506 START TEST default_locks_via_rpc
00:06:36.506 ************************************
00:06:36.506 01:27:49 -- common/autotest_common.sh@1104 -- # default_locks_via_rpc
00:06:36.506 01:27:49 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3661972
00:06:36.506 01:27:49 -- event/cpu_locks.sh@63 -- # waitforlisten 3661972
00:06:36.506 01:27:49 -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:06:36.506 01:27:49 -- common/autotest_common.sh@819 -- # '[' -z 3661972 ']'
00:06:36.506 01:27:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:36.506 01:27:49 -- common/autotest_common.sh@824 -- # local max_retries=100
00:06:36.506 01:27:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:36.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:36.506 01:27:49 -- common/autotest_common.sh@828 -- # xtrace_disable
00:06:36.506 01:27:49 -- common/autotest_common.sh@10 -- # set +x
00:06:36.506 [2024-07-23 01:27:49.411166] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
00:06:36.506 [2024-07-23 01:27:49.411253] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3661972 ]
00:06:36.506 EAL: No free 2048 kB hugepages reported on node 1
00:06:36.506 [2024-07-23 01:27:49.477813] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:36.506 [2024-07-23 01:27:49.568059] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:06:36.506 [2024-07-23 01:27:49.568235] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:37.439 01:27:50 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:06:37.439 01:27:50 -- common/autotest_common.sh@852 -- # return 0
00:06:37.439 01:27:50 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:06:37.439 01:27:50 -- common/autotest_common.sh@551 -- # xtrace_disable
00:06:37.439 01:27:50 -- common/autotest_common.sh@10 -- # set +x
00:06:37.439 01:27:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:06:37.439 01:27:50 -- event/cpu_locks.sh@67 -- # no_locks
00:06:37.439 01:27:50 -- event/cpu_locks.sh@26 -- # lock_files=()
00:06:37.439 01:27:50 -- event/cpu_locks.sh@26 -- # local lock_files
00:06:37.439 01:27:50 -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:06:37.439 01:27:50 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:06:37.439 01:27:50 -- common/autotest_common.sh@551 -- # xtrace_disable
00:06:37.439 01:27:50 -- common/autotest_common.sh@10 -- # set +x
00:06:37.439 01:27:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:06:37.439 01:27:50 -- event/cpu_locks.sh@71 -- # locks_exist 3661972
00:06:37.439 01:27:50 -- event/cpu_locks.sh@22 -- # lslocks -p 3661972
00:06:37.439 01:27:50 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:37.697 01:27:50 -- event/cpu_locks.sh@73 -- # killprocess 3661972
00:06:37.697 01:27:50 -- common/autotest_common.sh@926 -- # '[' -z 3661972 ']'
00:06:37.697 01:27:50 -- common/autotest_common.sh@930 -- # kill -0 3661972
00:06:37.697 01:27:50 -- common/autotest_common.sh@931 -- # uname
00:06:37.697 01:27:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:06:37.697 01:27:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3661972
00:06:37.697 01:27:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:06:37.697 01:27:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:06:37.697 01:27:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3661972'
00:06:37.697 killing process with pid 3661972
00:06:37.697 01:27:50 -- common/autotest_common.sh@945 -- # kill 3661972
00:06:37.697 01:27:50 -- common/autotest_common.sh@950 -- # wait 3661972
00:06:38.264
00:06:38.264 real 0m1.741s
00:06:38.264 user 0m1.848s
00:06:38.264 sys 0m0.576s
00:06:38.264 01:27:51 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:38.264 01:27:51 -- common/autotest_common.sh@10 -- # set +x
00:06:38.264 ************************************
00:06:38.264 END TEST default_locks_via_rpc
00:06:38.264 ************************************
00:06:38.264 01:27:51 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:06:38.264 01:27:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:06:38.264 01:27:51 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:06:38.264 01:27:51 -- common/autotest_common.sh@10 -- # set +x
00:06:38.264 ************************************
00:06:38.264 START TEST non_locking_app_on_locked_coremask
00:06:38.264 ************************************
00:06:38.264 01:27:51 -- common/autotest_common.sh@1104 -- # non_locking_app_on_locked_coremask
00:06:38.264 01:27:51 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3662143
00:06:38.264 01:27:51 -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:06:38.264 01:27:51 -- event/cpu_locks.sh@81 -- # waitforlisten 3662143 /var/tmp/spdk.sock
00:06:38.264 01:27:51 -- common/autotest_common.sh@819 -- # '[' -z 3662143 ']'
00:06:38.264 01:27:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:38.264 01:27:51 -- common/autotest_common.sh@824 -- # local max_retries=100
00:06:38.264 01:27:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:38.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:38.264 01:27:51 -- common/autotest_common.sh@828 -- # xtrace_disable
00:06:38.264 01:27:51 -- common/autotest_common.sh@10 -- # set +x
00:06:38.264 [2024-07-23 01:27:51.172228] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
00:06:38.264 [2024-07-23 01:27:51.172312] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3662143 ]
00:06:38.264 EAL: No free 2048 kB hugepages reported on node 1
00:06:38.264 [2024-07-23 01:27:51.230037] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:38.264 [2024-07-23 01:27:51.316221] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:06:38.264 [2024-07-23 01:27:51.316382] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:39.199 01:27:52 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:06:39.199 01:27:52 -- common/autotest_common.sh@852 -- # return 0
00:06:39.199 01:27:52 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3662285
00:06:39.199 01:27:52 -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
00:06:39.199 01:27:52 -- event/cpu_locks.sh@85 -- # waitforlisten 3662285 /var/tmp/spdk2.sock
00:06:39.199 01:27:52 -- common/autotest_common.sh@819 -- # '[' -z 3662285 ']'
00:06:39.199 01:27:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:39.199 01:27:52 -- common/autotest_common.sh@824 -- # local max_retries=100
00:06:39.199 01:27:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:06:39.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:39.199 01:27:52 -- common/autotest_common.sh@828 -- # xtrace_disable
00:06:39.199 01:27:52 -- common/autotest_common.sh@10 -- # set +x
00:06:39.199 [2024-07-23 01:27:52.192280] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
00:06:39.199 [2024-07-23 01:27:52.192350] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3662285 ]
00:06:39.199 EAL: No free 2048 kB hugepages reported on node 1
00:06:39.199 [2024-07-23 01:27:52.284925] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:06:39.199 [2024-07-23 01:27:52.284958] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:39.458 [2024-07-23 01:27:52.466706] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:06:39.458 [2024-07-23 01:27:52.466893] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:40.025 01:27:53 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:06:40.025 01:27:53 -- common/autotest_common.sh@852 -- # return 0
00:06:40.025 01:27:53 -- event/cpu_locks.sh@87 -- # locks_exist 3662143
00:06:40.025 01:27:53 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:40.025 01:27:53 -- event/cpu_locks.sh@22 -- # lslocks -p 3662143
00:06:40.591 lslocks: write error
00:06:40.591 01:27:53 -- event/cpu_locks.sh@89 -- # killprocess 3662143
00:06:40.591 01:27:53 -- common/autotest_common.sh@926 -- # '[' -z 3662143 ']'
00:06:40.591 01:27:53 -- common/autotest_common.sh@930 -- # kill -0 3662143
00:06:40.591 01:27:53 -- common/autotest_common.sh@931 -- # uname
00:06:40.591 01:27:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:06:40.591 01:27:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3662143
00:06:40.591 01:27:53 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:06:40.591 01:27:53 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:06:40.591 01:27:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3662143'
00:06:40.591 killing process with pid 3662143
00:06:40.591 01:27:53 -- common/autotest_common.sh@945 -- # kill 3662143
00:06:40.591 01:27:53 -- common/autotest_common.sh@950 -- # wait 3662143
00:06:41.523 01:27:54 -- event/cpu_locks.sh@90 -- # killprocess 3662285
00:06:41.523 01:27:54 -- common/autotest_common.sh@926 -- # '[' -z 3662285 ']'
00:06:41.523 01:27:54 -- common/autotest_common.sh@930 -- # kill -0 3662285
00:06:41.523 01:27:54 -- common/autotest_common.sh@931 -- # uname
00:06:41.523 01:27:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:06:41.523 01:27:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3662285
00:06:41.523 01:27:54 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:06:41.523 01:27:54 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:06:41.523 01:27:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3662285'
00:06:41.523 killing process with pid 3662285
00:06:41.523 01:27:54 -- common/autotest_common.sh@945 -- # kill 3662285
00:06:41.523 01:27:54 -- common/autotest_common.sh@950 -- # wait 3662285
00:06:41.781
00:06:41.781 real 0m3.573s
00:06:41.781 user 0m3.905s
00:06:41.781 sys 0m1.056s
00:06:41.781 01:27:54 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:41.781 01:27:54 -- common/autotest_common.sh@10 -- # set +x
00:06:41.781 ************************************
00:06:41.781 END TEST non_locking_app_on_locked_coremask
00:06:41.781 ************************************
00:06:41.781 01:27:54 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:06:41.781 01:27:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:06:41.781 01:27:54 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:06:41.781 01:27:54 -- common/autotest_common.sh@10 -- # set +x
00:06:41.781 ************************************
00:06:41.781 START TEST locking_app_on_unlocked_coremask
00:06:41.781 ************************************
00:06:41.781 01:27:54 -- common/autotest_common.sh@1104 -- # locking_app_on_unlocked_coremask
00:06:41.781 01:27:54 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3662595
00:06:41.781 01:27:54 -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:06:41.781 01:27:54 -- event/cpu_locks.sh@99 -- # waitforlisten 3662595 /var/tmp/spdk.sock
00:06:41.781 01:27:54 -- common/autotest_common.sh@819 -- # '[' -z 3662595 ']'
00:06:41.782 01:27:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:41.782 01:27:54 -- common/autotest_common.sh@824 -- # local max_retries=100
00:06:41.782 01:27:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:41.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:41.782 01:27:54 -- common/autotest_common.sh@828 -- # xtrace_disable
00:06:41.782 01:27:54 -- common/autotest_common.sh@10 -- # set +x
00:06:41.782 [2024-07-23 01:27:54.779089] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
00:06:41.782 [2024-07-23 01:27:54.779168] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3662595 ]
00:06:41.782 EAL: No free 2048 kB hugepages reported on node 1
00:06:41.782 [2024-07-23 01:27:54.837074] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:06:41.782 [2024-07-23 01:27:54.837112] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:42.040 [2024-07-23 01:27:54.922183] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:06:42.040 [2024-07-23 01:27:54.922349] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:42.974 01:27:55 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:06:42.974 01:27:55 -- common/autotest_common.sh@852 -- # return 0
00:06:42.974 01:27:55 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3662737
00:06:42.974 01:27:55 -- event/cpu_locks.sh@103 -- # waitforlisten 3662737 /var/tmp/spdk2.sock
00:06:42.974 01:27:55 -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:06:42.974 01:27:55 -- common/autotest_common.sh@819 -- # '[' -z 3662737 ']'
00:06:42.974 01:27:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:42.974 01:27:55 -- common/autotest_common.sh@824 -- # local max_retries=100
00:06:42.974 01:27:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:06:42.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:42.974 01:27:55 -- common/autotest_common.sh@828 -- # xtrace_disable
00:06:42.974 01:27:55 -- common/autotest_common.sh@10 -- # set +x
00:06:42.974 [2024-07-23 01:27:55.758487] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
00:06:42.974 [2024-07-23 01:27:55.758573] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3662737 ]
00:06:42.974 EAL: No free 2048 kB hugepages reported on node 1
00:06:42.974 [2024-07-23 01:27:55.856282] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:42.974 [2024-07-23 01:27:56.036955] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:06:42.974 [2024-07-23 01:27:56.037157] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:43.908 01:27:56 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:06:43.908 01:27:56 -- common/autotest_common.sh@852 -- # return 0
00:06:43.908 01:27:56 -- event/cpu_locks.sh@105 -- # locks_exist 3662737
00:06:43.908 01:27:56 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:43.908 01:27:56 -- event/cpu_locks.sh@22 -- # lslocks -p 3662737
00:06:44.167 lslocks: write error
00:06:44.167 01:27:57 -- event/cpu_locks.sh@107 -- # killprocess 3662595
00:06:44.167 01:27:57 -- common/autotest_common.sh@926 -- # '[' -z 3662595 ']'
00:06:44.167 01:27:57 -- common/autotest_common.sh@930 -- # kill -0 3662595
00:06:44.167 01:27:57 -- common/autotest_common.sh@931 -- # uname
00:06:44.167 01:27:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:06:44.167 01:27:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3662595
00:06:44.167 01:27:57 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:06:44.167 01:27:57 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:06:44.167 01:27:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3662595'
00:06:44.167 killing process with pid 3662595
00:06:44.167 01:27:57 -- common/autotest_common.sh@945 -- # kill 3662595
00:06:44.167 01:27:57 -- common/autotest_common.sh@950 -- # wait 3662595
00:06:45.101 01:27:58 -- event/cpu_locks.sh@108 -- # killprocess 3662737
00:06:45.101 01:27:58 -- common/autotest_common.sh@926 -- # '[' -z 3662737 ']'
00:06:45.101 01:27:58 -- common/autotest_common.sh@930 -- # kill -0 3662737
00:06:45.101 01:27:58 -- common/autotest_common.sh@931 -- # uname
00:06:45.101 01:27:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:06:45.101 01:27:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3662737
00:06:45.101 01:27:58 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:06:45.101 01:27:58 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:06:45.101 01:27:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3662737'
00:06:45.101 killing process with pid 3662737
00:06:45.101 01:27:58 -- common/autotest_common.sh@945 -- # kill 3662737
00:06:45.101 01:27:58 -- common/autotest_common.sh@950 -- # wait 3662737
00:06:45.667
00:06:45.667 real 0m3.736s
00:06:45.667 user 0m4.067s
00:06:45.667 sys 0m1.086s
00:06:45.667 01:27:58 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:45.667 01:27:58 -- common/autotest_common.sh@10 -- # set +x
00:06:45.667 ************************************
00:06:45.667 END TEST locking_app_on_unlocked_coremask
00:06:45.667 ************************************
00:06:45.667 01:27:58 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:06:45.667 01:27:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:06:45.667 01:27:58 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:06:45.667 01:27:58 -- common/autotest_common.sh@10 -- # set +x
00:06:45.667 ************************************
00:06:45.667 START TEST locking_app_on_locked_coremask
00:06:45.667 ************************************
00:06:45.667 01:27:58 -- common/autotest_common.sh@1104 -- # locking_app_on_locked_coremask
00:06:45.667 01:27:58 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3663174
00:06:45.667 01:27:58 -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:06:45.667 01:27:58 -- event/cpu_locks.sh@116 -- # waitforlisten 3663174 /var/tmp/spdk.sock
00:06:45.667 01:27:58 -- common/autotest_common.sh@819 -- # '[' -z 3663174 ']'
00:06:45.667 01:27:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:45.667 01:27:58 -- common/autotest_common.sh@824 -- # local max_retries=100
00:06:45.667 01:27:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:45.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:45.667 01:27:58 -- common/autotest_common.sh@828 -- # xtrace_disable
00:06:45.667 01:27:58 -- common/autotest_common.sh@10 -- # set +x
00:06:45.667 [2024-07-23 01:27:58.533074] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
00:06:45.667 [2024-07-23 01:27:58.533169] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3663174 ]
00:06:45.667 EAL: No free 2048 kB hugepages reported on node 1
00:06:45.667 [2024-07-23 01:27:58.593702] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:45.667 [2024-07-23 01:27:58.679569] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:06:45.667 [2024-07-23 01:27:58.679785] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:46.601 01:27:59 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:06:46.601 01:27:59 -- common/autotest_common.sh@852 -- # return 0
00:06:46.601 01:27:59 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3663308
00:06:46.601 01:27:59 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3663308 /var/tmp/spdk2.sock
00:06:46.601 01:27:59 -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:06:46.601 01:27:59 -- common/autotest_common.sh@640 -- # local es=0
00:06:46.601 01:27:59 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 3663308 /var/tmp/spdk2.sock
00:06:46.601 01:27:59 -- common/autotest_common.sh@628 -- # local arg=waitforlisten
00:06:46.601 01:27:59 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in
00:06:46.601 01:27:59 -- common/autotest_common.sh@632 -- # type -t waitforlisten
00:06:46.601 01:27:59 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in
00:06:46.601 01:27:59 -- common/autotest_common.sh@643 -- # waitforlisten 3663308 /var/tmp/spdk2.sock
00:06:46.601 01:27:59 -- common/autotest_common.sh@819 -- # '[' -z 3663308 ']'
00:06:46.601 01:27:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:46.601 01:27:59 -- common/autotest_common.sh@824 -- # local max_retries=100
00:06:46.601 01:27:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:06:46.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:46.602 01:27:59 -- common/autotest_common.sh@828 -- # xtrace_disable
00:06:46.602 01:27:59 -- common/autotest_common.sh@10 -- # set +x
00:06:46.602 [2024-07-23 01:27:59.528292] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
00:06:46.602 [2024-07-23 01:27:59.528386] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3663308 ]
00:06:46.602 EAL: No free 2048 kB hugepages reported on node 1
00:06:46.602 [2024-07-23 01:27:59.626145] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3663174 has claimed it.
00:06:46.602 [2024-07-23 01:27:59.626214] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:06:47.168 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (3663308) - No such process
00:06:47.168 ERROR: process (pid: 3663308) is no longer running
00:06:47.168 01:28:00 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:06:47.168 01:28:00 -- common/autotest_common.sh@852 -- # return 1
00:06:47.168 01:28:00 -- common/autotest_common.sh@643 -- # es=1
00:06:47.168 01:28:00 -- common/autotest_common.sh@651 -- # (( es > 128 ))
00:06:47.168 01:28:00 -- common/autotest_common.sh@662 -- # [[ -n '' ]]
00:06:47.168 01:28:00 -- common/autotest_common.sh@667 -- # (( !es == 0 ))
00:06:47.168 01:28:00 -- event/cpu_locks.sh@122 -- # locks_exist 3663174
00:06:47.168 01:28:00 -- event/cpu_locks.sh@22 -- # lslocks -p 3663174
00:06:47.168 01:28:00 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:47.425 lslocks: write error
00:06:47.425 01:28:00 -- event/cpu_locks.sh@124 -- # killprocess 3663174
00:06:47.425 01:28:00 -- common/autotest_common.sh@926 -- # '[' -z 3663174 ']'
00:06:47.425 01:28:00 -- common/autotest_common.sh@930 -- # kill -0 3663174
00:06:47.425 01:28:00 -- common/autotest_common.sh@931 -- # uname
00:06:47.425 01:28:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:06:47.425 01:28:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3663174
00:06:47.683 01:28:00 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:06:47.683 01:28:00 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:06:47.683 01:28:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3663174'
00:06:47.683 killing process with pid 3663174
00:06:47.683 01:28:00 -- common/autotest_common.sh@945 -- # kill 3663174
00:06:47.683 01:28:00 -- common/autotest_common.sh@950 -- # wait 3663174
00:06:47.941
00:06:47.941 real 0m2.442s
00:06:47.941 user 0m2.788s
00:06:47.941 sys 0m0.668s
00:06:47.941 01:28:00 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:47.941 01:28:00 -- common/autotest_common.sh@10 -- # set +x
00:06:47.942 ************************************
00:06:47.942 END TEST locking_app_on_locked_coremask
00:06:47.942 ************************************
00:06:47.942 01:28:00 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:06:47.942 01:28:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:06:47.942 01:28:00 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:06:47.942 01:28:00 -- common/autotest_common.sh@10 -- # set +x
00:06:47.942 ************************************
00:06:47.942 START TEST locking_overlapped_coremask
00:06:47.942 ************************************
00:06:47.942 01:28:00 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask
00:06:47.942 01:28:00 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3663484
00:06:47.942 01:28:00 -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7
00:06:47.942 01:28:00 -- event/cpu_locks.sh@133 -- # waitforlisten 3663484 /var/tmp/spdk.sock
00:06:47.942 01:28:00 -- common/autotest_common.sh@819 -- # '[' -z 3663484 ']'
00:06:47.942 01:28:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:47.942 01:28:00 -- common/autotest_common.sh@824 -- # local max_retries=100
00:06:47.942 01:28:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:47.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:47.942 01:28:00 -- common/autotest_common.sh@828 -- # xtrace_disable
00:06:47.942 01:28:00 -- common/autotest_common.sh@10 -- # set +x
00:06:47.942 [2024-07-23 01:28:01.006702] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
00:06:47.942 [2024-07-23 01:28:01.006789] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3663484 ]
00:06:47.942 EAL: No free 2048 kB hugepages reported on node 1
00:06:48.200 [2024-07-23 01:28:01.069993] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3
00:06:48.200 [2024-07-23 01:28:01.156885] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:06:48.200 [2024-07-23 01:28:01.157143] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:06:48.200 [2024-07-23 01:28:01.157203] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:06:48.200 [2024-07-23 01:28:01.157206] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:49.131 01:28:01 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:06:49.131 01:28:01 -- common/autotest_common.sh@852 -- # return 0
00:06:49.131 01:28:01 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3663622
00:06:49.131 01:28:01 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3663622 /var/tmp/spdk2.sock
00:06:49.131 01:28:01 -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
00:06:49.131 01:28:01 -- common/autotest_common.sh@640 -- # local es=0
00:06:49.131 01:28:01 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 3663622 /var/tmp/spdk2.sock
00:06:49.131 01:28:01 -- common/autotest_common.sh@628 -- # local arg=waitforlisten
00:06:49.131 01:28:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in
00:06:49.131 01:28:01 -- common/autotest_common.sh@632 -- # type -t waitforlisten
00:06:49.131 01:28:01 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in
00:06:49.131 01:28:01 -- common/autotest_common.sh@643 -- # waitforlisten 3663622 /var/tmp/spdk2.sock
00:06:49.131 01:28:01 -- common/autotest_common.sh@819 -- # '[' -z 3663622 ']'
00:06:49.131 01:28:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:49.131 01:28:01 -- common/autotest_common.sh@824 -- # local max_retries=100
00:06:49.131 01:28:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:06:49.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:49.131 01:28:01 -- common/autotest_common.sh@828 -- # xtrace_disable
00:06:49.131 01:28:01 -- common/autotest_common.sh@10 -- # set +x
00:06:49.131 [2024-07-23 01:28:01.967272] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
00:06:49.131 [2024-07-23 01:28:01.967370] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3663622 ]
00:06:49.131 EAL: No free 2048 kB hugepages reported on node 1
00:06:49.131 [2024-07-23 01:28:02.055866] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3663484 has claimed it.
00:06:49.131 [2024-07-23 01:28:02.055925] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:06:49.695 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (3663622) - No such process
00:06:49.695 ERROR: process (pid: 3663622) is no longer running
00:06:49.695 01:28:02 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:06:49.695 01:28:02 -- common/autotest_common.sh@852 -- # return 1
00:06:49.695 01:28:02 -- common/autotest_common.sh@643 -- # es=1
00:06:49.695 01:28:02 -- common/autotest_common.sh@651 -- # (( es > 128 ))
00:06:49.695 01:28:02 -- common/autotest_common.sh@662 -- # [[ -n '' ]]
00:06:49.695 01:28:02 -- common/autotest_common.sh@667 -- # (( !es == 0 ))
00:06:49.695 01:28:02 -- event/cpu_locks.sh@139 -- # check_remaining_locks
00:06:49.695 01:28:02 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:06:49.695 01:28:02 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:06:49.695 01:28:02 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:06:49.695 01:28:02 -- event/cpu_locks.sh@141 -- # killprocess 3663484
00:06:49.695 01:28:02 -- common/autotest_common.sh@926 -- # '[' -z 3663484 ']'
00:06:49.695 01:28:02 -- common/autotest_common.sh@930 -- # kill -0 3663484
00:06:49.695 01:28:02 -- common/autotest_common.sh@931 -- # uname
00:06:49.695 01:28:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:06:49.695 01:28:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3663484
00:06:49.695 01:28:02 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:06:49.695 01:28:02 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:06:49.695 01:28:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3663484'
00:06:49.695 killing process with pid 3663484
01:28:02 -- common/autotest_common.sh@945 -- # kill 3663484
01:28:02 -- common/autotest_common.sh@950 -- # wait 3663484
00:06:50.287
00:06:50.287 real 0m2.115s
00:06:50.287 user 0m6.067s
00:06:50.287 sys 0m0.454s
00:06:50.287 01:28:03 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:50.287 01:28:03 -- common/autotest_common.sh@10 -- # set +x
00:06:50.287 ************************************
00:06:50.287 END TEST locking_overlapped_coremask
00:06:50.287 ************************************
00:06:50.287 01:28:03 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc
00:06:50.287 01:28:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:06:50.287 01:28:03 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:06:50.287 01:28:03 -- common/autotest_common.sh@10 -- # set +x
00:06:50.287 ************************************
00:06:50.287 START TEST locking_overlapped_coremask_via_rpc
00:06:50.287 ************************************
00:06:50.287 01:28:03 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask_via_rpc
00:06:50.287 01:28:03 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3663789
00:06:50.287 01:28:03 -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks
00:06:50.287 01:28:03 -- event/cpu_locks.sh@149 -- # waitforlisten 3663789 /var/tmp/spdk.sock
00:06:50.287 01:28:03 -- common/autotest_common.sh@819 -- # '[' -z 3663789 ']'
00:06:50.287 01:28:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:50.287 01:28:03 -- common/autotest_common.sh@824 -- # local max_retries=100
00:06:50.287 01:28:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:50.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:50.287 01:28:03 -- common/autotest_common.sh@828 -- # xtrace_disable
00:06:50.287 01:28:03 -- common/autotest_common.sh@10 -- # set +x
00:06:50.287 [2024-07-23 01:28:03.142211] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
00:06:50.287 [2024-07-23 01:28:03.142289] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3663789 ]
00:06:50.287 EAL: No free 2048 kB hugepages reported on node 1
00:06:50.287 [2024-07-23 01:28:03.204948] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:06:50.287 [2024-07-23 01:28:03.204993] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3
00:06:50.287 [2024-07-23 01:28:03.298247] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:06:50.287 [2024-07-23 01:28:03.298498] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:06:50.287 [2024-07-23 01:28:03.298568] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:06:50.287 [2024-07-23 01:28:03.298571] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:51.222 01:28:04 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:06:51.222 01:28:04 -- common/autotest_common.sh@852 -- # return 0
00:06:51.222 01:28:04 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3663930
00:06:51.222 01:28:04 -- event/cpu_locks.sh@153 -- # waitforlisten 3663930 /var/tmp/spdk2.sock
00:06:51.222 01:28:04 -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks
00:06:51.222 01:28:04 -- common/autotest_common.sh@819 -- # '[' -z 3663930 ']'
00:06:51.222 01:28:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:51.222 01:28:04 -- common/autotest_common.sh@824 -- # local
max_retries=100 00:06:51.222 01:28:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:51.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:51.222 01:28:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:51.222 01:28:04 -- common/autotest_common.sh@10 -- # set +x 00:06:51.222 [2024-07-23 01:28:04.154415] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:51.222 [2024-07-23 01:28:04.154511] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3663930 ] 00:06:51.222 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.222 [2024-07-23 01:28:04.249173] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:51.222 [2024-07-23 01:28:04.249209] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:51.480 [2024-07-23 01:28:04.419093] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:51.480 [2024-07-23 01:28:04.419324] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:51.480 [2024-07-23 01:28:04.422654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:51.480 [2024-07-23 01:28:04.422656] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:52.046 01:28:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:52.046 01:28:05 -- common/autotest_common.sh@852 -- # return 0 00:06:52.046 01:28:05 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:52.046 01:28:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:52.046 01:28:05 -- common/autotest_common.sh@10 -- # set +x 00:06:52.046 01:28:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
00:06:52.046 01:28:05 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:52.046 01:28:05 -- common/autotest_common.sh@640 -- # local es=0 00:06:52.046 01:28:05 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:52.046 01:28:05 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:06:52.046 01:28:05 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:52.046 01:28:05 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:06:52.046 01:28:05 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:52.047 01:28:05 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:52.047 01:28:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:52.047 01:28:05 -- common/autotest_common.sh@10 -- # set +x 00:06:52.047 [2024-07-23 01:28:05.119725] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3663789 has claimed it. 
00:06:52.047 request: 00:06:52.047 { 00:06:52.047 "method": "framework_enable_cpumask_locks", 00:06:52.047 "req_id": 1 00:06:52.047 } 00:06:52.047 Got JSON-RPC error response 00:06:52.047 response: 00:06:52.047 { 00:06:52.047 "code": -32603, 00:06:52.047 "message": "Failed to claim CPU core: 2" 00:06:52.047 } 00:06:52.047 01:28:05 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:06:52.047 01:28:05 -- common/autotest_common.sh@643 -- # es=1 00:06:52.047 01:28:05 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:52.047 01:28:05 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:52.047 01:28:05 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:52.047 01:28:05 -- event/cpu_locks.sh@158 -- # waitforlisten 3663789 /var/tmp/spdk.sock 00:06:52.047 01:28:05 -- common/autotest_common.sh@819 -- # '[' -z 3663789 ']' 00:06:52.047 01:28:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.047 01:28:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:52.047 01:28:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:52.047 01:28:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:52.047 01:28:05 -- common/autotest_common.sh@10 -- # set +x 00:06:52.304 01:28:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:52.304 01:28:05 -- common/autotest_common.sh@852 -- # return 0 00:06:52.305 01:28:05 -- event/cpu_locks.sh@159 -- # waitforlisten 3663930 /var/tmp/spdk2.sock 00:06:52.305 01:28:05 -- common/autotest_common.sh@819 -- # '[' -z 3663930 ']' 00:06:52.305 01:28:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:52.305 01:28:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:52.305 01:28:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:52.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:52.305 01:28:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:52.305 01:28:05 -- common/autotest_common.sh@10 -- # set +x 00:06:52.562 01:28:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:52.562 01:28:05 -- common/autotest_common.sh@852 -- # return 0 00:06:52.562 01:28:05 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:52.562 01:28:05 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:52.562 01:28:05 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:52.562 01:28:05 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:52.562 00:06:52.562 real 0m2.525s 00:06:52.562 user 0m1.226s 00:06:52.562 sys 0m0.221s 00:06:52.562 01:28:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.562 01:28:05 -- common/autotest_common.sh@10 -- # set +x 00:06:52.562 
************************************ 00:06:52.562 END TEST locking_overlapped_coremask_via_rpc 00:06:52.562 ************************************ 00:06:52.562 01:28:05 -- event/cpu_locks.sh@174 -- # cleanup 00:06:52.562 01:28:05 -- event/cpu_locks.sh@15 -- # [[ -z 3663789 ]] 00:06:52.562 01:28:05 -- event/cpu_locks.sh@15 -- # killprocess 3663789 00:06:52.562 01:28:05 -- common/autotest_common.sh@926 -- # '[' -z 3663789 ']' 00:06:52.562 01:28:05 -- common/autotest_common.sh@930 -- # kill -0 3663789 00:06:52.562 01:28:05 -- common/autotest_common.sh@931 -- # uname 00:06:52.562 01:28:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:52.562 01:28:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3663789 00:06:52.820 01:28:05 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:52.821 01:28:05 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:52.821 01:28:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3663789' 00:06:52.821 killing process with pid 3663789 00:06:52.821 01:28:05 -- common/autotest_common.sh@945 -- # kill 3663789 00:06:52.821 01:28:05 -- common/autotest_common.sh@950 -- # wait 3663789 00:06:53.079 01:28:06 -- event/cpu_locks.sh@16 -- # [[ -z 3663930 ]] 00:06:53.079 01:28:06 -- event/cpu_locks.sh@16 -- # killprocess 3663930 00:06:53.079 01:28:06 -- common/autotest_common.sh@926 -- # '[' -z 3663930 ']' 00:06:53.079 01:28:06 -- common/autotest_common.sh@930 -- # kill -0 3663930 00:06:53.079 01:28:06 -- common/autotest_common.sh@931 -- # uname 00:06:53.079 01:28:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:53.079 01:28:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3663930 00:06:53.079 01:28:06 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:06:53.079 01:28:06 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:06:53.079 01:28:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 
3663930' 00:06:53.079 killing process with pid 3663930 00:06:53.079 01:28:06 -- common/autotest_common.sh@945 -- # kill 3663930 00:06:53.079 01:28:06 -- common/autotest_common.sh@950 -- # wait 3663930 00:06:53.644 01:28:06 -- event/cpu_locks.sh@18 -- # rm -f 00:06:53.644 01:28:06 -- event/cpu_locks.sh@1 -- # cleanup 00:06:53.644 01:28:06 -- event/cpu_locks.sh@15 -- # [[ -z 3663789 ]] 00:06:53.644 01:28:06 -- event/cpu_locks.sh@15 -- # killprocess 3663789 00:06:53.644 01:28:06 -- common/autotest_common.sh@926 -- # '[' -z 3663789 ']' 00:06:53.644 01:28:06 -- common/autotest_common.sh@930 -- # kill -0 3663789 00:06:53.644 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (3663789) - No such process 00:06:53.644 01:28:06 -- common/autotest_common.sh@953 -- # echo 'Process with pid 3663789 is not found' 00:06:53.644 Process with pid 3663789 is not found 00:06:53.644 01:28:06 -- event/cpu_locks.sh@16 -- # [[ -z 3663930 ]] 00:06:53.644 01:28:06 -- event/cpu_locks.sh@16 -- # killprocess 3663930 00:06:53.644 01:28:06 -- common/autotest_common.sh@926 -- # '[' -z 3663930 ']' 00:06:53.644 01:28:06 -- common/autotest_common.sh@930 -- # kill -0 3663930 00:06:53.644 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (3663930) - No such process 00:06:53.644 01:28:06 -- common/autotest_common.sh@953 -- # echo 'Process with pid 3663930 is not found' 00:06:53.644 Process with pid 3663930 is not found 00:06:53.644 01:28:06 -- event/cpu_locks.sh@18 -- # rm -f 00:06:53.644 00:06:53.644 real 0m18.942s 00:06:53.644 user 0m34.131s 00:06:53.644 sys 0m5.434s 00:06:53.644 01:28:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:53.644 01:28:06 -- common/autotest_common.sh@10 -- # set +x 00:06:53.644 ************************************ 00:06:53.644 END TEST cpu_locks 00:06:53.644 ************************************ 00:06:53.644 00:06:53.644 real 0m45.016s 00:06:53.644 user 1m26.398s 
00:06:53.644 sys 0m9.321s 00:06:53.645 01:28:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:53.645 01:28:06 -- common/autotest_common.sh@10 -- # set +x 00:06:53.645 ************************************ 00:06:53.645 END TEST event 00:06:53.645 ************************************ 00:06:53.645 01:28:06 -- spdk/autotest.sh@188 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:53.645 01:28:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:53.645 01:28:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:53.645 01:28:06 -- common/autotest_common.sh@10 -- # set +x 00:06:53.645 ************************************ 00:06:53.645 START TEST thread 00:06:53.645 ************************************ 00:06:53.645 01:28:06 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:53.645 * Looking for test storage... 00:06:53.645 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:53.645 01:28:06 -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:53.645 01:28:06 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:06:53.645 01:28:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:53.645 01:28:06 -- common/autotest_common.sh@10 -- # set +x 00:06:53.645 ************************************ 00:06:53.645 START TEST thread_poller_perf 00:06:53.645 ************************************ 00:06:53.645 01:28:06 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:53.645 [2024-07-23 01:28:06.608204] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:06:53.645 [2024-07-23 01:28:06.608272] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3664306 ] 00:06:53.645 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.645 [2024-07-23 01:28:06.665588] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.901 [2024-07-23 01:28:06.752917] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.901 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:54.831 ====================================== 00:06:54.831 busy:2712198766 (cyc) 00:06:54.831 total_run_count: 282000 00:06:54.831 tsc_hz: 2700000000 (cyc) 00:06:54.831 ====================================== 00:06:54.831 poller_cost: 9617 (cyc), 3561 (nsec) 00:06:54.831 00:06:54.831 real 0m1.246s 00:06:54.831 user 0m1.162s 00:06:54.831 sys 0m0.078s 00:06:54.831 01:28:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:54.831 01:28:07 -- common/autotest_common.sh@10 -- # set +x 00:06:54.831 ************************************ 00:06:54.831 END TEST thread_poller_perf 00:06:54.831 ************************************ 00:06:54.831 01:28:07 -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:54.831 01:28:07 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:06:54.831 01:28:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:54.831 01:28:07 -- common/autotest_common.sh@10 -- # set +x 00:06:54.831 ************************************ 00:06:54.831 START TEST thread_poller_perf 00:06:54.831 ************************************ 00:06:54.831 01:28:07 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:54.831 
[2024-07-23 01:28:07.881383] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:54.831 [2024-07-23 01:28:07.881465] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3664462 ] 00:06:54.831 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.088 [2024-07-23 01:28:07.942114] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.088 [2024-07-23 01:28:08.033503] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.088 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:56.018 ====================================== 00:06:56.018 busy:2703251322 (cyc) 00:06:56.018 total_run_count: 3852000 00:06:56.018 tsc_hz: 2700000000 (cyc) 00:06:56.018 ====================================== 00:06:56.018 poller_cost: 701 (cyc), 259 (nsec) 00:06:56.018 00:06:56.018 real 0m1.249s 00:06:56.018 user 0m1.162s 00:06:56.018 sys 0m0.081s 00:06:56.018 01:28:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:56.018 01:28:09 -- common/autotest_common.sh@10 -- # set +x 00:06:56.277 ************************************ 00:06:56.277 END TEST thread_poller_perf 00:06:56.277 ************************************ 00:06:56.277 01:28:09 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:56.277 00:06:56.277 real 0m2.589s 00:06:56.277 user 0m2.362s 00:06:56.277 sys 0m0.227s 00:06:56.277 01:28:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:56.277 01:28:09 -- common/autotest_common.sh@10 -- # set +x 00:06:56.277 ************************************ 00:06:56.277 END TEST thread 00:06:56.277 ************************************ 00:06:56.277 01:28:09 -- spdk/autotest.sh@189 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:56.277 01:28:09 -- common/autotest_common.sh@1077 -- # 
'[' 2 -le 1 ']' 00:06:56.277 01:28:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:56.277 01:28:09 -- common/autotest_common.sh@10 -- # set +x 00:06:56.277 ************************************ 00:06:56.277 START TEST accel 00:06:56.277 ************************************ 00:06:56.277 01:28:09 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:56.277 * Looking for test storage... 00:06:56.277 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:56.277 01:28:09 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:06:56.277 01:28:09 -- accel/accel.sh@74 -- # get_expected_opcs 00:06:56.277 01:28:09 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:56.277 01:28:09 -- accel/accel.sh@59 -- # spdk_tgt_pid=3664654 00:06:56.277 01:28:09 -- accel/accel.sh@60 -- # waitforlisten 3664654 00:06:56.277 01:28:09 -- common/autotest_common.sh@819 -- # '[' -z 3664654 ']' 00:06:56.277 01:28:09 -- accel/accel.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:56.277 01:28:09 -- accel/accel.sh@58 -- # build_accel_config 00:06:56.277 01:28:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.277 01:28:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:56.277 01:28:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:56.277 01:28:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:56.277 01:28:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.277 01:28:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:56.277 01:28:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.277 01:28:09 -- common/autotest_common.sh@10 -- # set +x 00:06:56.277 01:28:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:56.277 01:28:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:56.277 01:28:09 -- accel/accel.sh@41 -- # local IFS=, 00:06:56.277 01:28:09 -- accel/accel.sh@42 -- # jq -r . 00:06:56.277 [2024-07-23 01:28:09.260010] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:56.277 [2024-07-23 01:28:09.260099] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3664654 ] 00:06:56.277 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.277 [2024-07-23 01:28:09.321146] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.546 [2024-07-23 01:28:09.408957] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:56.546 [2024-07-23 01:28:09.409122] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.109 01:28:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:57.109 01:28:10 -- common/autotest_common.sh@852 -- # return 0 00:06:57.109 01:28:10 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:57.109 01:28:10 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:06:57.109 01:28:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:57.109 01:28:10 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:57.109 01:28:10 -- common/autotest_common.sh@10 -- # set +x 00:06:57.109 01:28:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:57.367 01:28:10 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:57.367 01:28:10 -- accel/accel.sh@64 -- # IFS== 00:06:57.367 01:28:10 -- accel/accel.sh@64 -- # read -r opc module 00:06:57.367 01:28:10 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:57.367 01:28:10 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:57.367 01:28:10 -- accel/accel.sh@64 -- # IFS== 00:06:57.367 01:28:10 -- accel/accel.sh@64 -- # read -r opc module 00:06:57.367 01:28:10 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:57.367 01:28:10 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:57.367 01:28:10 -- accel/accel.sh@64 -- # IFS== 00:06:57.367 01:28:10 -- accel/accel.sh@64 -- # read -r opc module 00:06:57.367 01:28:10 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:57.367 01:28:10 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:57.367 01:28:10 -- accel/accel.sh@64 -- # IFS== 00:06:57.367 01:28:10 -- accel/accel.sh@64 -- # read -r opc module 00:06:57.367 01:28:10 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:57.367 01:28:10 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:57.367 01:28:10 -- accel/accel.sh@64 -- # IFS== 00:06:57.367 01:28:10 -- accel/accel.sh@64 -- # read -r opc module 00:06:57.367 01:28:10 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:57.367 01:28:10 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:57.367 01:28:10 -- accel/accel.sh@64 -- # IFS== 00:06:57.367 01:28:10 -- accel/accel.sh@64 -- # read -r opc module 00:06:57.367 01:28:10 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:57.367 01:28:10 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:57.367 01:28:10 -- accel/accel.sh@64 -- # IFS== 
00:06:57.367 01:28:10 -- accel/accel.sh@64 -- # read -r opc module 00:06:57.367 01:28:10 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:57.367 01:28:10 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:57.367 01:28:10 -- accel/accel.sh@64 -- # IFS== 00:06:57.367 01:28:10 -- accel/accel.sh@64 -- # read -r opc module 00:06:57.367 01:28:10 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:57.367 01:28:10 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:57.367 01:28:10 -- accel/accel.sh@64 -- # IFS== 00:06:57.367 01:28:10 -- accel/accel.sh@64 -- # read -r opc module 00:06:57.367 01:28:10 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:57.367 01:28:10 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:57.367 01:28:10 -- accel/accel.sh@64 -- # IFS== 00:06:57.367 01:28:10 -- accel/accel.sh@64 -- # read -r opc module 00:06:57.367 01:28:10 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:57.367 01:28:10 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:57.367 01:28:10 -- accel/accel.sh@64 -- # IFS== 00:06:57.367 01:28:10 -- accel/accel.sh@64 -- # read -r opc module 00:06:57.367 01:28:10 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:57.367 01:28:10 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:57.367 01:28:10 -- accel/accel.sh@64 -- # IFS== 00:06:57.367 01:28:10 -- accel/accel.sh@64 -- # read -r opc module 00:06:57.367 01:28:10 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:57.367 01:28:10 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:57.367 01:28:10 -- accel/accel.sh@64 -- # IFS== 00:06:57.367 01:28:10 -- accel/accel.sh@64 -- # read -r opc module 00:06:57.367 01:28:10 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:57.367 01:28:10 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:57.367 01:28:10 -- accel/accel.sh@64 -- # IFS== 00:06:57.367 01:28:10 -- 
accel/accel.sh@64 -- # read -r opc module 00:06:57.367 01:28:10 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:57.367 01:28:10 -- accel/accel.sh@67 -- # killprocess 3664654 00:06:57.367 01:28:10 -- common/autotest_common.sh@926 -- # '[' -z 3664654 ']' 00:06:57.367 01:28:10 -- common/autotest_common.sh@930 -- # kill -0 3664654 00:06:57.367 01:28:10 -- common/autotest_common.sh@931 -- # uname 00:06:57.367 01:28:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:57.367 01:28:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3664654 00:06:57.367 01:28:10 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:57.367 01:28:10 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:57.367 01:28:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3664654' 00:06:57.367 killing process with pid 3664654 00:06:57.367 01:28:10 -- common/autotest_common.sh@945 -- # kill 3664654 00:06:57.367 01:28:10 -- common/autotest_common.sh@950 -- # wait 3664654 00:06:57.625 01:28:10 -- accel/accel.sh@68 -- # trap - ERR 00:06:57.625 01:28:10 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:06:57.625 01:28:10 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:06:57.625 01:28:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:57.625 01:28:10 -- common/autotest_common.sh@10 -- # set +x 00:06:57.625 01:28:10 -- common/autotest_common.sh@1104 -- # accel_perf -h 00:06:57.625 01:28:10 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:57.625 01:28:10 -- accel/accel.sh@12 -- # build_accel_config 00:06:57.625 01:28:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:57.625 01:28:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.625 01:28:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.625 01:28:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:57.625 01:28:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 
00:06:57.625 01:28:10 -- accel/accel.sh@41 -- # local IFS=, 00:06:57.625 01:28:10 -- accel/accel.sh@42 -- # jq -r . 00:06:57.625 01:28:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:57.625 01:28:10 -- common/autotest_common.sh@10 -- # set +x 00:06:57.883 01:28:10 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:57.883 01:28:10 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:57.883 01:28:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:57.883 01:28:10 -- common/autotest_common.sh@10 -- # set +x 00:06:57.883 ************************************ 00:06:57.883 START TEST accel_missing_filename 00:06:57.883 ************************************ 00:06:57.883 01:28:10 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress 00:06:57.883 01:28:10 -- common/autotest_common.sh@640 -- # local es=0 00:06:57.883 01:28:10 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:57.883 01:28:10 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:57.883 01:28:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:57.883 01:28:10 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:57.883 01:28:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:57.883 01:28:10 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress 00:06:57.883 01:28:10 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:57.883 01:28:10 -- accel/accel.sh@12 -- # build_accel_config 00:06:57.883 01:28:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:57.883 01:28:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.883 01:28:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.883 01:28:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:57.883 01:28:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:57.883 01:28:10 -- 
accel/accel.sh@41 -- # local IFS=, 00:06:57.883 01:28:10 -- accel/accel.sh@42 -- # jq -r . 00:06:57.883 [2024-07-23 01:28:10.748401] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:57.883 [2024-07-23 01:28:10.748490] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3664951 ] 00:06:57.883 EAL: No free 2048 kB hugepages reported on node 1 00:06:57.883 [2024-07-23 01:28:10.812089] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.883 [2024-07-23 01:28:10.904018] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.883 [2024-07-23 01:28:10.964837] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:58.141 [2024-07-23 01:28:11.041457] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:58.141 A filename is required. 
00:06:58.141 01:28:11 -- common/autotest_common.sh@643 -- # es=234 00:06:58.141 01:28:11 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:58.141 01:28:11 -- common/autotest_common.sh@652 -- # es=106 00:06:58.141 01:28:11 -- common/autotest_common.sh@653 -- # case "$es" in 00:06:58.141 01:28:11 -- common/autotest_common.sh@660 -- # es=1 00:06:58.141 01:28:11 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:58.141 00:06:58.141 real 0m0.392s 00:06:58.141 user 0m0.280s 00:06:58.141 sys 0m0.145s 00:06:58.141 01:28:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.141 01:28:11 -- common/autotest_common.sh@10 -- # set +x 00:06:58.141 ************************************ 00:06:58.141 END TEST accel_missing_filename 00:06:58.141 ************************************ 00:06:58.141 01:28:11 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:58.141 01:28:11 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:06:58.141 01:28:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:58.141 01:28:11 -- common/autotest_common.sh@10 -- # set +x 00:06:58.141 ************************************ 00:06:58.141 START TEST accel_compress_verify 00:06:58.141 ************************************ 00:06:58.141 01:28:11 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:58.141 01:28:11 -- common/autotest_common.sh@640 -- # local es=0 00:06:58.141 01:28:11 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:58.141 01:28:11 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:58.141 01:28:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:58.141 01:28:11 -- common/autotest_common.sh@632 -- # type -t 
accel_perf 00:06:58.141 01:28:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:58.141 01:28:11 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:58.141 01:28:11 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:58.141 01:28:11 -- accel/accel.sh@12 -- # build_accel_config 00:06:58.141 01:28:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:58.141 01:28:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.141 01:28:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.141 01:28:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:58.141 01:28:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:58.141 01:28:11 -- accel/accel.sh@41 -- # local IFS=, 00:06:58.141 01:28:11 -- accel/accel.sh@42 -- # jq -r . 00:06:58.141 [2024-07-23 01:28:11.167300] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:58.141 [2024-07-23 01:28:11.167378] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3664979 ] 00:06:58.141 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.141 [2024-07-23 01:28:11.231091] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.399 [2024-07-23 01:28:11.321334] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.399 [2024-07-23 01:28:11.383594] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:58.399 [2024-07-23 01:28:11.468341] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:58.656 00:06:58.656 Compression does not support the verify option, aborting. 
00:06:58.656 01:28:11 -- common/autotest_common.sh@643 -- # es=161 00:06:58.656 01:28:11 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:58.656 01:28:11 -- common/autotest_common.sh@652 -- # es=33 00:06:58.656 01:28:11 -- common/autotest_common.sh@653 -- # case "$es" in 00:06:58.656 01:28:11 -- common/autotest_common.sh@660 -- # es=1 00:06:58.656 01:28:11 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:58.656 00:06:58.656 real 0m0.402s 00:06:58.656 user 0m0.290s 00:06:58.656 sys 0m0.147s 00:06:58.656 01:28:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.656 01:28:11 -- common/autotest_common.sh@10 -- # set +x 00:06:58.656 ************************************ 00:06:58.656 END TEST accel_compress_verify 00:06:58.656 ************************************ 00:06:58.656 01:28:11 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:58.656 01:28:11 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:58.656 01:28:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:58.656 01:28:11 -- common/autotest_common.sh@10 -- # set +x 00:06:58.656 ************************************ 00:06:58.656 START TEST accel_wrong_workload 00:06:58.656 ************************************ 00:06:58.656 01:28:11 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w foobar 00:06:58.656 01:28:11 -- common/autotest_common.sh@640 -- # local es=0 00:06:58.656 01:28:11 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:58.656 01:28:11 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:58.656 01:28:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:58.656 01:28:11 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:58.656 01:28:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:58.656 01:28:11 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w foobar 00:06:58.656 01:28:11 -- 
accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:58.656 01:28:11 -- accel/accel.sh@12 -- # build_accel_config 00:06:58.656 01:28:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:58.656 01:28:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.656 01:28:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.656 01:28:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:58.656 01:28:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:58.656 01:28:11 -- accel/accel.sh@41 -- # local IFS=, 00:06:58.656 01:28:11 -- accel/accel.sh@42 -- # jq -r . 00:06:58.656 Unsupported workload type: foobar 00:06:58.656 [2024-07-23 01:28:11.593237] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:58.656 accel_perf options: 00:06:58.656 [-h help message] 00:06:58.656 [-q queue depth per core] 00:06:58.656 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:58.656 [-T number of threads per core 00:06:58.656 [-o transfer size in bytes (default: 4KiB. 
For compress/decompress, 0 means the input file size)] 00:06:58.656 [-t time in seconds] 00:06:58.656 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:58.656 [ dif_verify, , dif_generate, dif_generate_copy 00:06:58.656 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:58.656 [-l for compress/decompress workloads, name of uncompressed input file 00:06:58.656 [-S for crc32c workload, use this seed value (default 0) 00:06:58.656 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:58.657 [-f for fill workload, use this BYTE value (default 255) 00:06:58.657 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:58.657 [-y verify result if this switch is on] 00:06:58.657 [-a tasks to allocate per core (default: same value as -q)] 00:06:58.657 Can be used to spread operations across a wider range of memory. 00:06:58.657 01:28:11 -- common/autotest_common.sh@643 -- # es=1 00:06:58.657 01:28:11 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:58.657 01:28:11 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:58.657 01:28:11 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:58.657 00:06:58.657 real 0m0.020s 00:06:58.657 user 0m0.010s 00:06:58.657 sys 0m0.010s 00:06:58.657 01:28:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.657 01:28:11 -- common/autotest_common.sh@10 -- # set +x 00:06:58.657 ************************************ 00:06:58.657 END TEST accel_wrong_workload 00:06:58.657 ************************************ 00:06:58.657 Error: writing output failed: Broken pipe 00:06:58.657 01:28:11 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:58.657 01:28:11 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:06:58.657 01:28:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 
00:06:58.657 01:28:11 -- common/autotest_common.sh@10 -- # set +x 00:06:58.657 ************************************ 00:06:58.657 START TEST accel_negative_buffers 00:06:58.657 ************************************ 00:06:58.657 01:28:11 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:58.657 01:28:11 -- common/autotest_common.sh@640 -- # local es=0 00:06:58.657 01:28:11 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:58.657 01:28:11 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:58.657 01:28:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:58.657 01:28:11 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:58.657 01:28:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:58.657 01:28:11 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w xor -y -x -1 00:06:58.657 01:28:11 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:58.657 01:28:11 -- accel/accel.sh@12 -- # build_accel_config 00:06:58.657 01:28:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:58.657 01:28:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.657 01:28:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.657 01:28:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:58.657 01:28:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:58.657 01:28:11 -- accel/accel.sh@41 -- # local IFS=, 00:06:58.657 01:28:11 -- accel/accel.sh@42 -- # jq -r . 00:06:58.657 -x option must be non-negative. 
00:06:58.657 [2024-07-23 01:28:11.639366] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:58.657 accel_perf options: 00:06:58.657 [-h help message] 00:06:58.657 [-q queue depth per core] 00:06:58.657 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:58.657 [-T number of threads per core 00:06:58.657 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:58.657 [-t time in seconds] 00:06:58.657 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:58.657 [ dif_verify, , dif_generate, dif_generate_copy 00:06:58.657 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:58.657 [-l for compress/decompress workloads, name of uncompressed input file 00:06:58.657 [-S for crc32c workload, use this seed value (default 0) 00:06:58.657 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:58.657 [-f for fill workload, use this BYTE value (default 255) 00:06:58.657 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:58.657 [-y verify result if this switch is on] 00:06:58.657 [-a tasks to allocate per core (default: same value as -q)] 00:06:58.657 Can be used to spread operations across a wider range of memory. 
00:06:58.657 01:28:11 -- common/autotest_common.sh@643 -- # es=1 00:06:58.657 01:28:11 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:58.657 01:28:11 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:58.657 01:28:11 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:58.657 00:06:58.657 real 0m0.021s 00:06:58.657 user 0m0.011s 00:06:58.657 sys 0m0.010s 00:06:58.657 01:28:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.657 01:28:11 -- common/autotest_common.sh@10 -- # set +x 00:06:58.657 ************************************ 00:06:58.657 END TEST accel_negative_buffers 00:06:58.657 ************************************ 00:06:58.657 Error: writing output failed: Broken pipe 00:06:58.657 01:28:11 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:58.657 01:28:11 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:58.657 01:28:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:58.657 01:28:11 -- common/autotest_common.sh@10 -- # set +x 00:06:58.657 ************************************ 00:06:58.657 START TEST accel_crc32c 00:06:58.657 ************************************ 00:06:58.657 01:28:11 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:58.657 01:28:11 -- accel/accel.sh@16 -- # local accel_opc 00:06:58.657 01:28:11 -- accel/accel.sh@17 -- # local accel_module 00:06:58.657 01:28:11 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:58.657 01:28:11 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:58.657 01:28:11 -- accel/accel.sh@12 -- # build_accel_config 00:06:58.657 01:28:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:58.657 01:28:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.657 01:28:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.657 01:28:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:58.657 01:28:11 -- 
accel/accel.sh@37 -- # [[ -n '' ]] 00:06:58.657 01:28:11 -- accel/accel.sh@41 -- # local IFS=, 00:06:58.657 01:28:11 -- accel/accel.sh@42 -- # jq -r . 00:06:58.657 [2024-07-23 01:28:11.686196] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:58.657 [2024-07-23 01:28:11.686267] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3665072 ] 00:06:58.657 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.657 [2024-07-23 01:28:11.751845] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.915 [2024-07-23 01:28:11.844064] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.288 01:28:13 -- accel/accel.sh@18 -- # out=' 00:07:00.288 SPDK Configuration: 00:07:00.288 Core mask: 0x1 00:07:00.288 00:07:00.288 Accel Perf Configuration: 00:07:00.288 Workload Type: crc32c 00:07:00.288 CRC-32C seed: 32 00:07:00.288 Transfer size: 4096 bytes 00:07:00.288 Vector count 1 00:07:00.288 Module: software 00:07:00.288 Queue depth: 32 00:07:00.288 Allocate depth: 32 00:07:00.288 # threads/core: 1 00:07:00.288 Run time: 1 seconds 00:07:00.288 Verify: Yes 00:07:00.288 00:07:00.288 Running for 1 seconds... 
00:07:00.288 00:07:00.288 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:00.288 ------------------------------------------------------------------------------------ 00:07:00.288 0,0 406048/s 1586 MiB/s 0 0 00:07:00.288 ==================================================================================== 00:07:00.288 Total 406048/s 1586 MiB/s 0 0' 00:07:00.288 01:28:13 -- accel/accel.sh@20 -- # IFS=: 00:07:00.288 01:28:13 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:00.288 01:28:13 -- accel/accel.sh@20 -- # read -r var val 00:07:00.288 01:28:13 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:07:00.288 01:28:13 -- accel/accel.sh@12 -- # build_accel_config 00:07:00.288 01:28:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:00.288 01:28:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:00.288 01:28:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:00.288 01:28:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:00.288 01:28:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:00.288 01:28:13 -- accel/accel.sh@41 -- # local IFS=, 00:07:00.288 01:28:13 -- accel/accel.sh@42 -- # jq -r . 00:07:00.288 [2024-07-23 01:28:13.093827] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:07:00.288 [2024-07-23 01:28:13.093911] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3665300 ] 00:07:00.288 EAL: No free 2048 kB hugepages reported on node 1 00:07:00.288 [2024-07-23 01:28:13.157209] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.288 [2024-07-23 01:28:13.247373] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.288 01:28:13 -- accel/accel.sh@21 -- # val= 00:07:00.288 01:28:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.288 01:28:13 -- accel/accel.sh@20 -- # IFS=: 00:07:00.288 01:28:13 -- accel/accel.sh@20 -- # read -r var val 00:07:00.288 01:28:13 -- accel/accel.sh@21 -- # val= 00:07:00.288 01:28:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.288 01:28:13 -- accel/accel.sh@20 -- # IFS=: 00:07:00.288 01:28:13 -- accel/accel.sh@20 -- # read -r var val 00:07:00.288 01:28:13 -- accel/accel.sh@21 -- # val=0x1 00:07:00.288 01:28:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.288 01:28:13 -- accel/accel.sh@20 -- # IFS=: 00:07:00.288 01:28:13 -- accel/accel.sh@20 -- # read -r var val 00:07:00.288 01:28:13 -- accel/accel.sh@21 -- # val= 00:07:00.288 01:28:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.288 01:28:13 -- accel/accel.sh@20 -- # IFS=: 00:07:00.288 01:28:13 -- accel/accel.sh@20 -- # read -r var val 00:07:00.288 01:28:13 -- accel/accel.sh@21 -- # val= 00:07:00.288 01:28:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.288 01:28:13 -- accel/accel.sh@20 -- # IFS=: 00:07:00.288 01:28:13 -- accel/accel.sh@20 -- # read -r var val 00:07:00.288 01:28:13 -- accel/accel.sh@21 -- # val=crc32c 00:07:00.289 01:28:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.289 01:28:13 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:07:00.289 01:28:13 -- accel/accel.sh@20 -- # IFS=: 00:07:00.289 01:28:13 -- 
accel/accel.sh@20 -- # read -r var val 00:07:00.289 01:28:13 -- accel/accel.sh@21 -- # val=32 00:07:00.289 01:28:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.289 01:28:13 -- accel/accel.sh@20 -- # IFS=: 00:07:00.289 01:28:13 -- accel/accel.sh@20 -- # read -r var val 00:07:00.289 01:28:13 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:00.289 01:28:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.289 01:28:13 -- accel/accel.sh@20 -- # IFS=: 00:07:00.289 01:28:13 -- accel/accel.sh@20 -- # read -r var val 00:07:00.289 01:28:13 -- accel/accel.sh@21 -- # val= 00:07:00.289 01:28:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.289 01:28:13 -- accel/accel.sh@20 -- # IFS=: 00:07:00.289 01:28:13 -- accel/accel.sh@20 -- # read -r var val 00:07:00.289 01:28:13 -- accel/accel.sh@21 -- # val=software 00:07:00.289 01:28:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.289 01:28:13 -- accel/accel.sh@23 -- # accel_module=software 00:07:00.289 01:28:13 -- accel/accel.sh@20 -- # IFS=: 00:07:00.289 01:28:13 -- accel/accel.sh@20 -- # read -r var val 00:07:00.289 01:28:13 -- accel/accel.sh@21 -- # val=32 00:07:00.289 01:28:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.289 01:28:13 -- accel/accel.sh@20 -- # IFS=: 00:07:00.289 01:28:13 -- accel/accel.sh@20 -- # read -r var val 00:07:00.289 01:28:13 -- accel/accel.sh@21 -- # val=32 00:07:00.289 01:28:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.289 01:28:13 -- accel/accel.sh@20 -- # IFS=: 00:07:00.289 01:28:13 -- accel/accel.sh@20 -- # read -r var val 00:07:00.289 01:28:13 -- accel/accel.sh@21 -- # val=1 00:07:00.289 01:28:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.289 01:28:13 -- accel/accel.sh@20 -- # IFS=: 00:07:00.289 01:28:13 -- accel/accel.sh@20 -- # read -r var val 00:07:00.289 01:28:13 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:00.289 01:28:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.289 01:28:13 -- accel/accel.sh@20 -- # IFS=: 00:07:00.289 01:28:13 -- accel/accel.sh@20 
-- # read -r var val 00:07:00.289 01:28:13 -- accel/accel.sh@21 -- # val=Yes 00:07:00.289 01:28:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.289 01:28:13 -- accel/accel.sh@20 -- # IFS=: 00:07:00.289 01:28:13 -- accel/accel.sh@20 -- # read -r var val 00:07:00.289 01:28:13 -- accel/accel.sh@21 -- # val= 00:07:00.289 01:28:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.289 01:28:13 -- accel/accel.sh@20 -- # IFS=: 00:07:00.289 01:28:13 -- accel/accel.sh@20 -- # read -r var val 00:07:00.289 01:28:13 -- accel/accel.sh@21 -- # val= 00:07:00.289 01:28:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.289 01:28:13 -- accel/accel.sh@20 -- # IFS=: 00:07:00.289 01:28:13 -- accel/accel.sh@20 -- # read -r var val 00:07:01.662 01:28:14 -- accel/accel.sh@21 -- # val= 00:07:01.662 01:28:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.662 01:28:14 -- accel/accel.sh@20 -- # IFS=: 00:07:01.662 01:28:14 -- accel/accel.sh@20 -- # read -r var val 00:07:01.662 01:28:14 -- accel/accel.sh@21 -- # val= 00:07:01.662 01:28:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.662 01:28:14 -- accel/accel.sh@20 -- # IFS=: 00:07:01.662 01:28:14 -- accel/accel.sh@20 -- # read -r var val 00:07:01.662 01:28:14 -- accel/accel.sh@21 -- # val= 00:07:01.662 01:28:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.662 01:28:14 -- accel/accel.sh@20 -- # IFS=: 00:07:01.662 01:28:14 -- accel/accel.sh@20 -- # read -r var val 00:07:01.662 01:28:14 -- accel/accel.sh@21 -- # val= 00:07:01.662 01:28:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.662 01:28:14 -- accel/accel.sh@20 -- # IFS=: 00:07:01.662 01:28:14 -- accel/accel.sh@20 -- # read -r var val 00:07:01.662 01:28:14 -- accel/accel.sh@21 -- # val= 00:07:01.662 01:28:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.662 01:28:14 -- accel/accel.sh@20 -- # IFS=: 00:07:01.662 01:28:14 -- accel/accel.sh@20 -- # read -r var val 00:07:01.662 01:28:14 -- accel/accel.sh@21 -- # val= 00:07:01.662 01:28:14 -- accel/accel.sh@22 -- # 
case "$var" in 00:07:01.662 01:28:14 -- accel/accel.sh@20 -- # IFS=: 00:07:01.662 01:28:14 -- accel/accel.sh@20 -- # read -r var val 00:07:01.662 01:28:14 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:01.662 01:28:14 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:07:01.662 01:28:14 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:01.662 00:07:01.662 real 0m2.797s 00:07:01.662 user 0m2.499s 00:07:01.662 sys 0m0.290s 00:07:01.662 01:28:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:01.662 01:28:14 -- common/autotest_common.sh@10 -- # set +x 00:07:01.662 ************************************ 00:07:01.662 END TEST accel_crc32c 00:07:01.662 ************************************ 00:07:01.662 01:28:14 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:01.662 01:28:14 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:07:01.662 01:28:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:01.662 01:28:14 -- common/autotest_common.sh@10 -- # set +x 00:07:01.662 ************************************ 00:07:01.662 START TEST accel_crc32c_C2 00:07:01.662 ************************************ 00:07:01.662 01:28:14 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:01.662 01:28:14 -- accel/accel.sh@16 -- # local accel_opc 00:07:01.662 01:28:14 -- accel/accel.sh@17 -- # local accel_module 00:07:01.662 01:28:14 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:01.662 01:28:14 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:01.662 01:28:14 -- accel/accel.sh@12 -- # build_accel_config 00:07:01.662 01:28:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:01.662 01:28:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:01.662 01:28:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:01.662 01:28:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:01.662 01:28:14 -- 
accel/accel.sh@37 -- # [[ -n '' ]] 00:07:01.662 01:28:14 -- accel/accel.sh@41 -- # local IFS=, 00:07:01.662 01:28:14 -- accel/accel.sh@42 -- # jq -r . 00:07:01.662 [2024-07-23 01:28:14.504560] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:01.662 [2024-07-23 01:28:14.504651] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3665466 ] 00:07:01.662 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.662 [2024-07-23 01:28:14.565553] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.662 [2024-07-23 01:28:14.654600] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.035 01:28:15 -- accel/accel.sh@18 -- # out=' 00:07:03.035 SPDK Configuration: 00:07:03.035 Core mask: 0x1 00:07:03.035 00:07:03.035 Accel Perf Configuration: 00:07:03.035 Workload Type: crc32c 00:07:03.035 CRC-32C seed: 0 00:07:03.035 Transfer size: 4096 bytes 00:07:03.035 Vector count 2 00:07:03.035 Module: software 00:07:03.035 Queue depth: 32 00:07:03.035 Allocate depth: 32 00:07:03.035 # threads/core: 1 00:07:03.036 Run time: 1 seconds 00:07:03.036 Verify: Yes 00:07:03.036 00:07:03.036 Running for 1 seconds... 
00:07:03.036 00:07:03.036 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:03.036 ------------------------------------------------------------------------------------ 00:07:03.036 0,0 315488/s 2464 MiB/s 0 0 00:07:03.036 ==================================================================================== 00:07:03.036 Total 315488/s 2464 MiB/s 0 0' 00:07:03.036 01:28:15 -- accel/accel.sh@20 -- # IFS=: 00:07:03.036 01:28:15 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:03.036 01:28:15 -- accel/accel.sh@20 -- # read -r var val 00:07:03.036 01:28:15 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:03.036 01:28:15 -- accel/accel.sh@12 -- # build_accel_config 00:07:03.036 01:28:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:03.036 01:28:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.036 01:28:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.036 01:28:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:03.036 01:28:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:03.036 01:28:15 -- accel/accel.sh@41 -- # local IFS=, 00:07:03.036 01:28:15 -- accel/accel.sh@42 -- # jq -r . 00:07:03.036 [2024-07-23 01:28:15.897758] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
00:07:03.036 [2024-07-23 01:28:15.897841] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3665609 ] 00:07:03.036 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.036 [2024-07-23 01:28:15.960676] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.036 [2024-07-23 01:28:16.051362] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.036 01:28:16 -- accel/accel.sh@21 -- # val= 00:07:03.036 01:28:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.036 01:28:16 -- accel/accel.sh@20 -- # IFS=: 00:07:03.036 01:28:16 -- accel/accel.sh@20 -- # read -r var val 00:07:03.036 01:28:16 -- accel/accel.sh@21 -- # val= 00:07:03.036 01:28:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.036 01:28:16 -- accel/accel.sh@20 -- # IFS=: 00:07:03.036 01:28:16 -- accel/accel.sh@20 -- # read -r var val 00:07:03.036 01:28:16 -- accel/accel.sh@21 -- # val=0x1 00:07:03.036 01:28:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.036 01:28:16 -- accel/accel.sh@20 -- # IFS=: 00:07:03.036 01:28:16 -- accel/accel.sh@20 -- # read -r var val 00:07:03.036 01:28:16 -- accel/accel.sh@21 -- # val= 00:07:03.036 01:28:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.036 01:28:16 -- accel/accel.sh@20 -- # IFS=: 00:07:03.036 01:28:16 -- accel/accel.sh@20 -- # read -r var val 00:07:03.036 01:28:16 -- accel/accel.sh@21 -- # val= 00:07:03.036 01:28:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.036 01:28:16 -- accel/accel.sh@20 -- # IFS=: 00:07:03.036 01:28:16 -- accel/accel.sh@20 -- # read -r var val 00:07:03.036 01:28:16 -- accel/accel.sh@21 -- # val=crc32c 00:07:03.036 01:28:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.036 01:28:16 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:07:03.036 01:28:16 -- accel/accel.sh@20 -- # IFS=: 00:07:03.036 01:28:16 -- 
accel/accel.sh@20 -- # read -r var val 00:07:03.036 01:28:16 -- accel/accel.sh@21 -- # val=0 00:07:03.036 01:28:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.036 01:28:16 -- accel/accel.sh@20 -- # IFS=: 00:07:03.036 01:28:16 -- accel/accel.sh@20 -- # read -r var val 00:07:03.036 01:28:16 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:03.036 01:28:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.036 01:28:16 -- accel/accel.sh@20 -- # IFS=: 00:07:03.036 01:28:16 -- accel/accel.sh@20 -- # read -r var val 00:07:03.036 01:28:16 -- accel/accel.sh@21 -- # val= 00:07:03.036 01:28:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.036 01:28:16 -- accel/accel.sh@20 -- # IFS=: 00:07:03.036 01:28:16 -- accel/accel.sh@20 -- # read -r var val 00:07:03.036 01:28:16 -- accel/accel.sh@21 -- # val=software 00:07:03.036 01:28:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.036 01:28:16 -- accel/accel.sh@23 -- # accel_module=software 00:07:03.036 01:28:16 -- accel/accel.sh@20 -- # IFS=: 00:07:03.036 01:28:16 -- accel/accel.sh@20 -- # read -r var val 00:07:03.036 01:28:16 -- accel/accel.sh@21 -- # val=32 00:07:03.036 01:28:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.036 01:28:16 -- accel/accel.sh@20 -- # IFS=: 00:07:03.036 01:28:16 -- accel/accel.sh@20 -- # read -r var val 00:07:03.036 01:28:16 -- accel/accel.sh@21 -- # val=32 00:07:03.036 01:28:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.036 01:28:16 -- accel/accel.sh@20 -- # IFS=: 00:07:03.036 01:28:16 -- accel/accel.sh@20 -- # read -r var val 00:07:03.036 01:28:16 -- accel/accel.sh@21 -- # val=1 00:07:03.036 01:28:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.036 01:28:16 -- accel/accel.sh@20 -- # IFS=: 00:07:03.036 01:28:16 -- accel/accel.sh@20 -- # read -r var val 00:07:03.036 01:28:16 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:03.036 01:28:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.036 01:28:16 -- accel/accel.sh@20 -- # IFS=: 00:07:03.036 01:28:16 -- accel/accel.sh@20 -- 
# read -r var val 00:07:03.036 01:28:16 -- accel/accel.sh@21 -- # val=Yes 00:07:03.036 01:28:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.036 01:28:16 -- accel/accel.sh@20 -- # IFS=: 00:07:03.036 01:28:16 -- accel/accel.sh@20 -- # read -r var val 00:07:03.036 01:28:16 -- accel/accel.sh@21 -- # val= 00:07:03.036 01:28:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.036 01:28:16 -- accel/accel.sh@20 -- # IFS=: 00:07:03.036 01:28:16 -- accel/accel.sh@20 -- # read -r var val 00:07:03.036 01:28:16 -- accel/accel.sh@21 -- # val= 00:07:03.036 01:28:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.036 01:28:16 -- accel/accel.sh@20 -- # IFS=: 00:07:03.036 01:28:16 -- accel/accel.sh@20 -- # read -r var val 00:07:04.409 01:28:17 -- accel/accel.sh@21 -- # val= 00:07:04.409 01:28:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.409 01:28:17 -- accel/accel.sh@20 -- # IFS=: 00:07:04.409 01:28:17 -- accel/accel.sh@20 -- # read -r var val 00:07:04.409 01:28:17 -- accel/accel.sh@21 -- # val= 00:07:04.409 01:28:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.409 01:28:17 -- accel/accel.sh@20 -- # IFS=: 00:07:04.409 01:28:17 -- accel/accel.sh@20 -- # read -r var val 00:07:04.409 01:28:17 -- accel/accel.sh@21 -- # val= 00:07:04.409 01:28:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.409 01:28:17 -- accel/accel.sh@20 -- # IFS=: 00:07:04.409 01:28:17 -- accel/accel.sh@20 -- # read -r var val 00:07:04.409 01:28:17 -- accel/accel.sh@21 -- # val= 00:07:04.409 01:28:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.409 01:28:17 -- accel/accel.sh@20 -- # IFS=: 00:07:04.409 01:28:17 -- accel/accel.sh@20 -- # read -r var val 00:07:04.409 01:28:17 -- accel/accel.sh@21 -- # val= 00:07:04.409 01:28:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.409 01:28:17 -- accel/accel.sh@20 -- # IFS=: 00:07:04.409 01:28:17 -- accel/accel.sh@20 -- # read -r var val 00:07:04.409 01:28:17 -- accel/accel.sh@21 -- # val= 00:07:04.409 01:28:17 -- accel/accel.sh@22 -- # case 
"$var" in 00:07:04.409 01:28:17 -- accel/accel.sh@20 -- # IFS=: 00:07:04.409 01:28:17 -- accel/accel.sh@20 -- # read -r var val 00:07:04.409 01:28:17 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:04.409 01:28:17 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:07:04.409 01:28:17 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:04.409 00:07:04.409 real 0m2.795s 00:07:04.409 user 0m2.496s 00:07:04.409 sys 0m0.291s 00:07:04.409 01:28:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:04.409 01:28:17 -- common/autotest_common.sh@10 -- # set +x 00:07:04.409 ************************************ 00:07:04.409 END TEST accel_crc32c_C2 00:07:04.409 ************************************ 00:07:04.409 01:28:17 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:04.409 01:28:17 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:04.409 01:28:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:04.409 01:28:17 -- common/autotest_common.sh@10 -- # set +x 00:07:04.409 ************************************ 00:07:04.409 START TEST accel_copy 00:07:04.409 ************************************ 00:07:04.409 01:28:17 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy -y 00:07:04.409 01:28:17 -- accel/accel.sh@16 -- # local accel_opc 00:07:04.409 01:28:17 -- accel/accel.sh@17 -- # local accel_module 00:07:04.409 01:28:17 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:07:04.409 01:28:17 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:04.409 01:28:17 -- accel/accel.sh@12 -- # build_accel_config 00:07:04.409 01:28:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:04.409 01:28:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:04.409 01:28:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:04.409 01:28:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:04.409 01:28:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 
00:07:04.409 01:28:17 -- accel/accel.sh@41 -- # local IFS=,
00:07:04.409 01:28:17 -- accel/accel.sh@42 -- # jq -r .
00:07:04.409 [2024-07-23 01:28:17.329351] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
00:07:04.409 [2024-07-23 01:28:17.329437] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3665773 ]
00:07:04.409 EAL: No free 2048 kB hugepages reported on node 1
00:07:04.409 [2024-07-23 01:28:17.390991] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:04.409 [2024-07-23 01:28:17.481976] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:05.783 01:28:18 -- accel/accel.sh@18 -- # out='
00:07:05.783 SPDK Configuration:
00:07:05.783 Core mask: 0x1
00:07:05.783
00:07:05.783 Accel Perf Configuration:
00:07:05.783 Workload Type: copy
00:07:05.783 Transfer size: 4096 bytes
00:07:05.783 Vector count 1
00:07:05.783 Module: software
00:07:05.783 Queue depth: 32
00:07:05.783 Allocate depth: 32
00:07:05.783 # threads/core: 1
00:07:05.783 Run time: 1 seconds
00:07:05.783 Verify: Yes
00:07:05.783
00:07:05.783 Running for 1 seconds...
00:07:05.783
00:07:05.783 Core,Thread Transfers Bandwidth Failed Miscompares
00:07:05.783 ------------------------------------------------------------------------------------
00:07:05.783 0,0 276128/s 1078 MiB/s 0 0
00:07:05.783 ====================================================================================
00:07:05.783 Total 276128/s 1078 MiB/s 0 0'
00:07:05.783 01:28:18 -- accel/accel.sh@20 -- # IFS=:
00:07:05.783 01:28:18 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y
00:07:05.783 01:28:18 -- accel/accel.sh@20 -- # read -r var val
00:07:05.783 01:28:18 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y
00:07:05.783 01:28:18 -- accel/accel.sh@12 -- # build_accel_config
00:07:05.783 01:28:18 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:07:05.783 01:28:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:05.783 01:28:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:05.783 01:28:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:07:05.783 01:28:18 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:07:05.783 01:28:18 -- accel/accel.sh@41 -- # local IFS=,
00:07:05.783 01:28:18 -- accel/accel.sh@42 -- # jq -r .
00:07:05.783 [2024-07-23 01:28:18.732472] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
00:07:05.783 [2024-07-23 01:28:18.732552] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3666025 ] 00:07:05.783 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.783 [2024-07-23 01:28:18.793295] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.041 [2024-07-23 01:28:18.884115] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.041 01:28:18 -- accel/accel.sh@21 -- # val= 00:07:06.041 01:28:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.041 01:28:18 -- accel/accel.sh@20 -- # IFS=: 00:07:06.041 01:28:18 -- accel/accel.sh@20 -- # read -r var val 00:07:06.041 01:28:18 -- accel/accel.sh@21 -- # val= 00:07:06.041 01:28:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.041 01:28:18 -- accel/accel.sh@20 -- # IFS=: 00:07:06.041 01:28:18 -- accel/accel.sh@20 -- # read -r var val 00:07:06.041 01:28:18 -- accel/accel.sh@21 -- # val=0x1 00:07:06.041 01:28:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.041 01:28:18 -- accel/accel.sh@20 -- # IFS=: 00:07:06.041 01:28:18 -- accel/accel.sh@20 -- # read -r var val 00:07:06.041 01:28:18 -- accel/accel.sh@21 -- # val= 00:07:06.041 01:28:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.041 01:28:18 -- accel/accel.sh@20 -- # IFS=: 00:07:06.041 01:28:18 -- accel/accel.sh@20 -- # read -r var val 00:07:06.041 01:28:18 -- accel/accel.sh@21 -- # val= 00:07:06.041 01:28:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.041 01:28:18 -- accel/accel.sh@20 -- # IFS=: 00:07:06.041 01:28:18 -- accel/accel.sh@20 -- # read -r var val 00:07:06.041 01:28:18 -- accel/accel.sh@21 -- # val=copy 00:07:06.041 01:28:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.041 01:28:18 -- accel/accel.sh@24 -- # accel_opc=copy 00:07:06.041 01:28:18 -- accel/accel.sh@20 -- # IFS=: 00:07:06.041 01:28:18 -- 
accel/accel.sh@20 -- # read -r var val 00:07:06.041 01:28:18 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:06.041 01:28:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.041 01:28:18 -- accel/accel.sh@20 -- # IFS=: 00:07:06.041 01:28:18 -- accel/accel.sh@20 -- # read -r var val 00:07:06.041 01:28:18 -- accel/accel.sh@21 -- # val= 00:07:06.041 01:28:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.041 01:28:18 -- accel/accel.sh@20 -- # IFS=: 00:07:06.041 01:28:18 -- accel/accel.sh@20 -- # read -r var val 00:07:06.041 01:28:18 -- accel/accel.sh@21 -- # val=software 00:07:06.041 01:28:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.041 01:28:18 -- accel/accel.sh@23 -- # accel_module=software 00:07:06.041 01:28:18 -- accel/accel.sh@20 -- # IFS=: 00:07:06.041 01:28:18 -- accel/accel.sh@20 -- # read -r var val 00:07:06.041 01:28:18 -- accel/accel.sh@21 -- # val=32 00:07:06.041 01:28:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.041 01:28:18 -- accel/accel.sh@20 -- # IFS=: 00:07:06.041 01:28:18 -- accel/accel.sh@20 -- # read -r var val 00:07:06.041 01:28:18 -- accel/accel.sh@21 -- # val=32 00:07:06.041 01:28:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.041 01:28:18 -- accel/accel.sh@20 -- # IFS=: 00:07:06.041 01:28:18 -- accel/accel.sh@20 -- # read -r var val 00:07:06.041 01:28:18 -- accel/accel.sh@21 -- # val=1 00:07:06.041 01:28:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.041 01:28:18 -- accel/accel.sh@20 -- # IFS=: 00:07:06.041 01:28:18 -- accel/accel.sh@20 -- # read -r var val 00:07:06.041 01:28:18 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:06.041 01:28:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.041 01:28:18 -- accel/accel.sh@20 -- # IFS=: 00:07:06.041 01:28:18 -- accel/accel.sh@20 -- # read -r var val 00:07:06.041 01:28:18 -- accel/accel.sh@21 -- # val=Yes 00:07:06.041 01:28:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.041 01:28:18 -- accel/accel.sh@20 -- # IFS=: 00:07:06.041 01:28:18 -- accel/accel.sh@20 
-- # read -r var val 00:07:06.041 01:28:18 -- accel/accel.sh@21 -- # val= 00:07:06.041 01:28:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.041 01:28:18 -- accel/accel.sh@20 -- # IFS=: 00:07:06.041 01:28:18 -- accel/accel.sh@20 -- # read -r var val 00:07:06.041 01:28:18 -- accel/accel.sh@21 -- # val= 00:07:06.041 01:28:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.041 01:28:18 -- accel/accel.sh@20 -- # IFS=: 00:07:06.041 01:28:18 -- accel/accel.sh@20 -- # read -r var val 00:07:07.003 01:28:20 -- accel/accel.sh@21 -- # val= 00:07:07.003 01:28:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.003 01:28:20 -- accel/accel.sh@20 -- # IFS=: 00:07:07.003 01:28:20 -- accel/accel.sh@20 -- # read -r var val 00:07:07.003 01:28:20 -- accel/accel.sh@21 -- # val= 00:07:07.003 01:28:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.003 01:28:20 -- accel/accel.sh@20 -- # IFS=: 00:07:07.003 01:28:20 -- accel/accel.sh@20 -- # read -r var val 00:07:07.003 01:28:20 -- accel/accel.sh@21 -- # val= 00:07:07.003 01:28:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.003 01:28:20 -- accel/accel.sh@20 -- # IFS=: 00:07:07.003 01:28:20 -- accel/accel.sh@20 -- # read -r var val 00:07:07.003 01:28:20 -- accel/accel.sh@21 -- # val= 00:07:07.003 01:28:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.003 01:28:20 -- accel/accel.sh@20 -- # IFS=: 00:07:07.003 01:28:20 -- accel/accel.sh@20 -- # read -r var val 00:07:07.003 01:28:20 -- accel/accel.sh@21 -- # val= 00:07:07.003 01:28:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.003 01:28:20 -- accel/accel.sh@20 -- # IFS=: 00:07:07.003 01:28:20 -- accel/accel.sh@20 -- # read -r var val 00:07:07.003 01:28:20 -- accel/accel.sh@21 -- # val= 00:07:07.003 01:28:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.003 01:28:20 -- accel/accel.sh@20 -- # IFS=: 00:07:07.261 01:28:20 -- accel/accel.sh@20 -- # read -r var val 00:07:07.261 01:28:20 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:07.261 01:28:20 -- 
accel/accel.sh@28 -- # [[ -n copy ]] 00:07:07.261 01:28:20 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:07.261 00:07:07.261 real 0m2.792s 00:07:07.261 user 0m2.491s 00:07:07.261 sys 0m0.293s 00:07:07.261 01:28:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:07.261 01:28:20 -- common/autotest_common.sh@10 -- # set +x 00:07:07.261 ************************************ 00:07:07.261 END TEST accel_copy 00:07:07.261 ************************************ 00:07:07.261 01:28:20 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:07.261 01:28:20 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:07:07.261 01:28:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:07.262 01:28:20 -- common/autotest_common.sh@10 -- # set +x 00:07:07.262 ************************************ 00:07:07.262 START TEST accel_fill 00:07:07.262 ************************************ 00:07:07.262 01:28:20 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:07.262 01:28:20 -- accel/accel.sh@16 -- # local accel_opc 00:07:07.262 01:28:20 -- accel/accel.sh@17 -- # local accel_module 00:07:07.262 01:28:20 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:07.262 01:28:20 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:07.262 01:28:20 -- accel/accel.sh@12 -- # build_accel_config 00:07:07.262 01:28:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:07.262 01:28:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.262 01:28:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.262 01:28:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:07.262 01:28:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:07.262 01:28:20 -- accel/accel.sh@41 -- # local IFS=, 00:07:07.262 01:28:20 -- accel/accel.sh@42 -- # jq -r . 
00:07:07.262 [2024-07-23 01:28:20.145177] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
00:07:07.262 [2024-07-23 01:28:20.145253] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3666187 ]
00:07:07.262 EAL: No free 2048 kB hugepages reported on node 1
00:07:07.262 [2024-07-23 01:28:20.206667] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:07.262 [2024-07-23 01:28:20.296267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:08.636 01:28:21 -- accel/accel.sh@18 -- # out='
00:07:08.636 SPDK Configuration:
00:07:08.636 Core mask: 0x1
00:07:08.636
00:07:08.636 Accel Perf Configuration:
00:07:08.636 Workload Type: fill
00:07:08.636 Fill pattern: 0x80
00:07:08.636 Transfer size: 4096 bytes
00:07:08.636 Vector count 1
00:07:08.636 Module: software
00:07:08.636 Queue depth: 64
00:07:08.636 Allocate depth: 64
00:07:08.636 # threads/core: 1
00:07:08.636 Run time: 1 seconds
00:07:08.636 Verify: Yes
00:07:08.636
00:07:08.636 Running for 1 seconds...
00:07:08.636
00:07:08.636 Core,Thread Transfers Bandwidth Failed Miscompares
00:07:08.636 ------------------------------------------------------------------------------------
00:07:08.636 0,0 405376/s 1583 MiB/s 0 0
00:07:08.636 ====================================================================================
00:07:08.636 Total 405376/s 1583 MiB/s 0 0'
00:07:08.636 01:28:21 -- accel/accel.sh@20 -- # IFS=:
00:07:08.636 01:28:21 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y
00:07:08.636 01:28:21 -- accel/accel.sh@20 -- # read -r var val
00:07:08.636 01:28:21 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y
00:07:08.636 01:28:21 -- accel/accel.sh@12 -- # build_accel_config
00:07:08.636 01:28:21 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:07:08.636 01:28:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:08.636 01:28:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:08.636 01:28:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:07:08.636 01:28:21 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:07:08.636 01:28:21 -- accel/accel.sh@41 -- # local IFS=,
00:07:08.636 01:28:21 -- accel/accel.sh@42 -- # jq -r .
00:07:08.636 [2024-07-23 01:28:21.540132] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
00:07:08.636 [2024-07-23 01:28:21.540213] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3666335 ] 00:07:08.636 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.636 [2024-07-23 01:28:21.607658] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.636 [2024-07-23 01:28:21.698093] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.895 01:28:21 -- accel/accel.sh@21 -- # val= 00:07:08.895 01:28:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.895 01:28:21 -- accel/accel.sh@20 -- # IFS=: 00:07:08.895 01:28:21 -- accel/accel.sh@20 -- # read -r var val 00:07:08.895 01:28:21 -- accel/accel.sh@21 -- # val= 00:07:08.895 01:28:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.895 01:28:21 -- accel/accel.sh@20 -- # IFS=: 00:07:08.895 01:28:21 -- accel/accel.sh@20 -- # read -r var val 00:07:08.895 01:28:21 -- accel/accel.sh@21 -- # val=0x1 00:07:08.895 01:28:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.895 01:28:21 -- accel/accel.sh@20 -- # IFS=: 00:07:08.895 01:28:21 -- accel/accel.sh@20 -- # read -r var val 00:07:08.895 01:28:21 -- accel/accel.sh@21 -- # val= 00:07:08.895 01:28:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.895 01:28:21 -- accel/accel.sh@20 -- # IFS=: 00:07:08.895 01:28:21 -- accel/accel.sh@20 -- # read -r var val 00:07:08.895 01:28:21 -- accel/accel.sh@21 -- # val= 00:07:08.895 01:28:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.895 01:28:21 -- accel/accel.sh@20 -- # IFS=: 00:07:08.895 01:28:21 -- accel/accel.sh@20 -- # read -r var val 00:07:08.895 01:28:21 -- accel/accel.sh@21 -- # val=fill 00:07:08.895 01:28:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.895 01:28:21 -- accel/accel.sh@24 -- # accel_opc=fill 00:07:08.895 01:28:21 -- accel/accel.sh@20 -- # IFS=: 00:07:08.895 01:28:21 -- 
accel/accel.sh@20 -- # read -r var val 00:07:08.895 01:28:21 -- accel/accel.sh@21 -- # val=0x80 00:07:08.895 01:28:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.895 01:28:21 -- accel/accel.sh@20 -- # IFS=: 00:07:08.895 01:28:21 -- accel/accel.sh@20 -- # read -r var val 00:07:08.895 01:28:21 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:08.895 01:28:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.895 01:28:21 -- accel/accel.sh@20 -- # IFS=: 00:07:08.895 01:28:21 -- accel/accel.sh@20 -- # read -r var val 00:07:08.895 01:28:21 -- accel/accel.sh@21 -- # val= 00:07:08.895 01:28:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.895 01:28:21 -- accel/accel.sh@20 -- # IFS=: 00:07:08.895 01:28:21 -- accel/accel.sh@20 -- # read -r var val 00:07:08.895 01:28:21 -- accel/accel.sh@21 -- # val=software 00:07:08.895 01:28:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.895 01:28:21 -- accel/accel.sh@23 -- # accel_module=software 00:07:08.895 01:28:21 -- accel/accel.sh@20 -- # IFS=: 00:07:08.895 01:28:21 -- accel/accel.sh@20 -- # read -r var val 00:07:08.895 01:28:21 -- accel/accel.sh@21 -- # val=64 00:07:08.895 01:28:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.895 01:28:21 -- accel/accel.sh@20 -- # IFS=: 00:07:08.895 01:28:21 -- accel/accel.sh@20 -- # read -r var val 00:07:08.895 01:28:21 -- accel/accel.sh@21 -- # val=64 00:07:08.895 01:28:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.895 01:28:21 -- accel/accel.sh@20 -- # IFS=: 00:07:08.895 01:28:21 -- accel/accel.sh@20 -- # read -r var val 00:07:08.895 01:28:21 -- accel/accel.sh@21 -- # val=1 00:07:08.895 01:28:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.895 01:28:21 -- accel/accel.sh@20 -- # IFS=: 00:07:08.895 01:28:21 -- accel/accel.sh@20 -- # read -r var val 00:07:08.895 01:28:21 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:08.895 01:28:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.895 01:28:21 -- accel/accel.sh@20 -- # IFS=: 00:07:08.895 01:28:21 -- accel/accel.sh@20 
-- # read -r var val 00:07:08.895 01:28:21 -- accel/accel.sh@21 -- # val=Yes 00:07:08.895 01:28:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.895 01:28:21 -- accel/accel.sh@20 -- # IFS=: 00:07:08.895 01:28:21 -- accel/accel.sh@20 -- # read -r var val 00:07:08.895 01:28:21 -- accel/accel.sh@21 -- # val= 00:07:08.895 01:28:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.895 01:28:21 -- accel/accel.sh@20 -- # IFS=: 00:07:08.895 01:28:21 -- accel/accel.sh@20 -- # read -r var val 00:07:08.895 01:28:21 -- accel/accel.sh@21 -- # val= 00:07:08.895 01:28:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.895 01:28:21 -- accel/accel.sh@20 -- # IFS=: 00:07:08.895 01:28:21 -- accel/accel.sh@20 -- # read -r var val 00:07:10.270 01:28:22 -- accel/accel.sh@21 -- # val= 00:07:10.270 01:28:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.270 01:28:22 -- accel/accel.sh@20 -- # IFS=: 00:07:10.270 01:28:22 -- accel/accel.sh@20 -- # read -r var val 00:07:10.270 01:28:22 -- accel/accel.sh@21 -- # val= 00:07:10.270 01:28:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.270 01:28:22 -- accel/accel.sh@20 -- # IFS=: 00:07:10.270 01:28:22 -- accel/accel.sh@20 -- # read -r var val 00:07:10.270 01:28:22 -- accel/accel.sh@21 -- # val= 00:07:10.270 01:28:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.270 01:28:22 -- accel/accel.sh@20 -- # IFS=: 00:07:10.270 01:28:22 -- accel/accel.sh@20 -- # read -r var val 00:07:10.270 01:28:22 -- accel/accel.sh@21 -- # val= 00:07:10.270 01:28:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.270 01:28:22 -- accel/accel.sh@20 -- # IFS=: 00:07:10.270 01:28:22 -- accel/accel.sh@20 -- # read -r var val 00:07:10.270 01:28:22 -- accel/accel.sh@21 -- # val= 00:07:10.270 01:28:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.270 01:28:22 -- accel/accel.sh@20 -- # IFS=: 00:07:10.270 01:28:22 -- accel/accel.sh@20 -- # read -r var val 00:07:10.270 01:28:22 -- accel/accel.sh@21 -- # val= 00:07:10.270 01:28:22 -- accel/accel.sh@22 -- # 
case "$var" in 00:07:10.270 01:28:22 -- accel/accel.sh@20 -- # IFS=: 00:07:10.270 01:28:22 -- accel/accel.sh@20 -- # read -r var val 00:07:10.270 01:28:22 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:10.270 01:28:22 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:07:10.270 01:28:22 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:10.270 00:07:10.270 real 0m2.808s 00:07:10.270 user 0m2.513s 00:07:10.270 sys 0m0.287s 00:07:10.270 01:28:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:10.270 01:28:22 -- common/autotest_common.sh@10 -- # set +x 00:07:10.270 ************************************ 00:07:10.270 END TEST accel_fill 00:07:10.270 ************************************ 00:07:10.270 01:28:22 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:10.270 01:28:22 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:10.270 01:28:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:10.270 01:28:22 -- common/autotest_common.sh@10 -- # set +x 00:07:10.270 ************************************ 00:07:10.270 START TEST accel_copy_crc32c 00:07:10.270 ************************************ 00:07:10.270 01:28:22 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y 00:07:10.270 01:28:22 -- accel/accel.sh@16 -- # local accel_opc 00:07:10.270 01:28:22 -- accel/accel.sh@17 -- # local accel_module 00:07:10.270 01:28:22 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:10.270 01:28:22 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:10.270 01:28:22 -- accel/accel.sh@12 -- # build_accel_config 00:07:10.270 01:28:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:10.270 01:28:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:10.270 01:28:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:10.271 01:28:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:10.271 01:28:22 -- 
accel/accel.sh@37 -- # [[ -n '' ]] 00:07:10.271 01:28:22 -- accel/accel.sh@41 -- # local IFS=, 00:07:10.271 01:28:22 -- accel/accel.sh@42 -- # jq -r . 00:07:10.271 [2024-07-23 01:28:22.977599] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:10.271 [2024-07-23 01:28:22.977705] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3666491 ] 00:07:10.271 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.271 [2024-07-23 01:28:23.040325] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.271 [2024-07-23 01:28:23.129974] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.643 01:28:24 -- accel/accel.sh@18 -- # out=' 00:07:11.643 SPDK Configuration: 00:07:11.643 Core mask: 0x1 00:07:11.643 00:07:11.643 Accel Perf Configuration: 00:07:11.643 Workload Type: copy_crc32c 00:07:11.643 CRC-32C seed: 0 00:07:11.643 Vector size: 4096 bytes 00:07:11.643 Transfer size: 4096 bytes 00:07:11.643 Vector count 1 00:07:11.643 Module: software 00:07:11.643 Queue depth: 32 00:07:11.643 Allocate depth: 32 00:07:11.643 # threads/core: 1 00:07:11.643 Run time: 1 seconds 00:07:11.643 Verify: Yes 00:07:11.643 00:07:11.643 Running for 1 seconds... 
00:07:11.643
00:07:11.643 Core,Thread Transfers Bandwidth Failed Miscompares
00:07:11.643 ------------------------------------------------------------------------------------
00:07:11.643 0,0 217152/s 848 MiB/s 0 0
00:07:11.643 ====================================================================================
00:07:11.643 Total 217152/s 848 MiB/s 0 0'
00:07:11.643 01:28:24 -- accel/accel.sh@20 -- # IFS=:
00:07:11.643 01:28:24 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y
00:07:11.643 01:28:24 -- accel/accel.sh@20 -- # read -r var val
00:07:11.643 01:28:24 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y
00:07:11.643 01:28:24 -- accel/accel.sh@12 -- # build_accel_config
00:07:11.643 01:28:24 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:07:11.643 01:28:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:11.643 01:28:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:11.643 01:28:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:07:11.643 01:28:24 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:07:11.643 01:28:24 -- accel/accel.sh@41 -- # local IFS=,
00:07:11.643 01:28:24 -- accel/accel.sh@42 -- # jq -r .
00:07:11.643 [2024-07-23 01:28:24.372249] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
00:07:11.643 [2024-07-23 01:28:24.372318] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3666748 ] 00:07:11.643 EAL: No free 2048 kB hugepages reported on node 1 00:07:11.643 [2024-07-23 01:28:24.434398] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.643 [2024-07-23 01:28:24.524494] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.643 01:28:24 -- accel/accel.sh@21 -- # val= 00:07:11.643 01:28:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.643 01:28:24 -- accel/accel.sh@20 -- # IFS=: 00:07:11.643 01:28:24 -- accel/accel.sh@20 -- # read -r var val 00:07:11.643 01:28:24 -- accel/accel.sh@21 -- # val= 00:07:11.643 01:28:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.643 01:28:24 -- accel/accel.sh@20 -- # IFS=: 00:07:11.643 01:28:24 -- accel/accel.sh@20 -- # read -r var val 00:07:11.643 01:28:24 -- accel/accel.sh@21 -- # val=0x1 00:07:11.643 01:28:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.643 01:28:24 -- accel/accel.sh@20 -- # IFS=: 00:07:11.643 01:28:24 -- accel/accel.sh@20 -- # read -r var val 00:07:11.643 01:28:24 -- accel/accel.sh@21 -- # val= 00:07:11.643 01:28:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.643 01:28:24 -- accel/accel.sh@20 -- # IFS=: 00:07:11.643 01:28:24 -- accel/accel.sh@20 -- # read -r var val 00:07:11.643 01:28:24 -- accel/accel.sh@21 -- # val= 00:07:11.643 01:28:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.643 01:28:24 -- accel/accel.sh@20 -- # IFS=: 00:07:11.643 01:28:24 -- accel/accel.sh@20 -- # read -r var val 00:07:11.643 01:28:24 -- accel/accel.sh@21 -- # val=copy_crc32c 00:07:11.643 01:28:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.643 01:28:24 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:07:11.643 01:28:24 -- accel/accel.sh@20 -- # IFS=: 00:07:11.643 01:28:24 -- 
accel/accel.sh@20 -- # read -r var val 00:07:11.643 01:28:24 -- accel/accel.sh@21 -- # val=0 00:07:11.643 01:28:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.643 01:28:24 -- accel/accel.sh@20 -- # IFS=: 00:07:11.643 01:28:24 -- accel/accel.sh@20 -- # read -r var val 00:07:11.643 01:28:24 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:11.643 01:28:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.643 01:28:24 -- accel/accel.sh@20 -- # IFS=: 00:07:11.643 01:28:24 -- accel/accel.sh@20 -- # read -r var val 00:07:11.643 01:28:24 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:11.643 01:28:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.643 01:28:24 -- accel/accel.sh@20 -- # IFS=: 00:07:11.643 01:28:24 -- accel/accel.sh@20 -- # read -r var val 00:07:11.644 01:28:24 -- accel/accel.sh@21 -- # val= 00:07:11.644 01:28:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.644 01:28:24 -- accel/accel.sh@20 -- # IFS=: 00:07:11.644 01:28:24 -- accel/accel.sh@20 -- # read -r var val 00:07:11.644 01:28:24 -- accel/accel.sh@21 -- # val=software 00:07:11.644 01:28:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.644 01:28:24 -- accel/accel.sh@23 -- # accel_module=software 00:07:11.644 01:28:24 -- accel/accel.sh@20 -- # IFS=: 00:07:11.644 01:28:24 -- accel/accel.sh@20 -- # read -r var val 00:07:11.644 01:28:24 -- accel/accel.sh@21 -- # val=32 00:07:11.644 01:28:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.644 01:28:24 -- accel/accel.sh@20 -- # IFS=: 00:07:11.644 01:28:24 -- accel/accel.sh@20 -- # read -r var val 00:07:11.644 01:28:24 -- accel/accel.sh@21 -- # val=32 00:07:11.644 01:28:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.644 01:28:24 -- accel/accel.sh@20 -- # IFS=: 00:07:11.644 01:28:24 -- accel/accel.sh@20 -- # read -r var val 00:07:11.644 01:28:24 -- accel/accel.sh@21 -- # val=1 00:07:11.644 01:28:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.644 01:28:24 -- accel/accel.sh@20 -- # IFS=: 00:07:11.644 01:28:24 -- accel/accel.sh@20 
-- # read -r var val 00:07:11.644 01:28:24 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:11.644 01:28:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.644 01:28:24 -- accel/accel.sh@20 -- # IFS=: 00:07:11.644 01:28:24 -- accel/accel.sh@20 -- # read -r var val 00:07:11.644 01:28:24 -- accel/accel.sh@21 -- # val=Yes 00:07:11.644 01:28:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.644 01:28:24 -- accel/accel.sh@20 -- # IFS=: 00:07:11.644 01:28:24 -- accel/accel.sh@20 -- # read -r var val 00:07:11.644 01:28:24 -- accel/accel.sh@21 -- # val= 00:07:11.644 01:28:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.644 01:28:24 -- accel/accel.sh@20 -- # IFS=: 00:07:11.644 01:28:24 -- accel/accel.sh@20 -- # read -r var val 00:07:11.644 01:28:24 -- accel/accel.sh@21 -- # val= 00:07:11.644 01:28:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.644 01:28:24 -- accel/accel.sh@20 -- # IFS=: 00:07:11.644 01:28:24 -- accel/accel.sh@20 -- # read -r var val 00:07:13.018 01:28:25 -- accel/accel.sh@21 -- # val= 00:07:13.018 01:28:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.018 01:28:25 -- accel/accel.sh@20 -- # IFS=: 00:07:13.018 01:28:25 -- accel/accel.sh@20 -- # read -r var val 00:07:13.018 01:28:25 -- accel/accel.sh@21 -- # val= 00:07:13.018 01:28:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.018 01:28:25 -- accel/accel.sh@20 -- # IFS=: 00:07:13.018 01:28:25 -- accel/accel.sh@20 -- # read -r var val 00:07:13.018 01:28:25 -- accel/accel.sh@21 -- # val= 00:07:13.018 01:28:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.018 01:28:25 -- accel/accel.sh@20 -- # IFS=: 00:07:13.018 01:28:25 -- accel/accel.sh@20 -- # read -r var val 00:07:13.018 01:28:25 -- accel/accel.sh@21 -- # val= 00:07:13.018 01:28:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.018 01:28:25 -- accel/accel.sh@20 -- # IFS=: 00:07:13.018 01:28:25 -- accel/accel.sh@20 -- # read -r var val 00:07:13.018 01:28:25 -- accel/accel.sh@21 -- # val= 00:07:13.018 01:28:25 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:13.018 01:28:25 -- accel/accel.sh@20 -- # IFS=: 00:07:13.018 01:28:25 -- accel/accel.sh@20 -- # read -r var val 00:07:13.018 01:28:25 -- accel/accel.sh@21 -- # val= 00:07:13.018 01:28:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.018 01:28:25 -- accel/accel.sh@20 -- # IFS=: 00:07:13.018 01:28:25 -- accel/accel.sh@20 -- # read -r var val 00:07:13.018 01:28:25 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:13.018 01:28:25 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:07:13.018 01:28:25 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:13.018 00:07:13.018 real 0m2.788s 00:07:13.018 user 0m2.499s 00:07:13.018 sys 0m0.282s 00:07:13.018 01:28:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:13.018 01:28:25 -- common/autotest_common.sh@10 -- # set +x 00:07:13.018 ************************************ 00:07:13.018 END TEST accel_copy_crc32c 00:07:13.018 ************************************ 00:07:13.018 01:28:25 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:13.018 01:28:25 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:07:13.018 01:28:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:13.018 01:28:25 -- common/autotest_common.sh@10 -- # set +x 00:07:13.018 ************************************ 00:07:13.018 START TEST accel_copy_crc32c_C2 00:07:13.018 ************************************ 00:07:13.018 01:28:25 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:13.018 01:28:25 -- accel/accel.sh@16 -- # local accel_opc 00:07:13.018 01:28:25 -- accel/accel.sh@17 -- # local accel_module 00:07:13.018 01:28:25 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:13.018 01:28:25 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:13.018 01:28:25 -- accel/accel.sh@12 -- # 
build_accel_config 00:07:13.018 01:28:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:13.018 01:28:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:13.018 01:28:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:13.018 01:28:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:13.018 01:28:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:13.018 01:28:25 -- accel/accel.sh@41 -- # local IFS=, 00:07:13.018 01:28:25 -- accel/accel.sh@42 -- # jq -r . 00:07:13.018 [2024-07-23 01:28:25.790070] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:13.018 [2024-07-23 01:28:25.790156] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3666914 ] 00:07:13.018 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.018 [2024-07-23 01:28:25.851229] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.018 [2024-07-23 01:28:25.942168] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.391 01:28:27 -- accel/accel.sh@18 -- # out=' 00:07:14.391 SPDK Configuration: 00:07:14.391 Core mask: 0x1 00:07:14.391 00:07:14.391 Accel Perf Configuration: 00:07:14.391 Workload Type: copy_crc32c 00:07:14.391 CRC-32C seed: 0 00:07:14.391 Vector size: 4096 bytes 00:07:14.391 Transfer size: 8192 bytes 00:07:14.391 Vector count 2 00:07:14.391 Module: software 00:07:14.391 Queue depth: 32 00:07:14.391 Allocate depth: 32 00:07:14.391 # threads/core: 1 00:07:14.391 Run time: 1 seconds 00:07:14.391 Verify: Yes 00:07:14.391 00:07:14.391 Running for 1 seconds... 
00:07:14.391 00:07:14.391 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:14.391 ------------------------------------------------------------------------------------ 00:07:14.391 0,0 155360/s 1213 MiB/s 0 0 00:07:14.391 ==================================================================================== 00:07:14.391 Total 155360/s 606 MiB/s 0 0' 00:07:14.391 01:28:27 -- accel/accel.sh@20 -- # IFS=: 00:07:14.391 01:28:27 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:14.391 01:28:27 -- accel/accel.sh@20 -- # read -r var val 00:07:14.391 01:28:27 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:14.391 01:28:27 -- accel/accel.sh@12 -- # build_accel_config 00:07:14.391 01:28:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:14.391 01:28:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:14.391 01:28:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:14.391 01:28:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:14.391 01:28:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:14.391 01:28:27 -- accel/accel.sh@41 -- # local IFS=, 00:07:14.391 01:28:27 -- accel/accel.sh@42 -- # jq -r . 00:07:14.391 [2024-07-23 01:28:27.192753] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:07:14.391 [2024-07-23 01:28:27.192827] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3667052 ] 00:07:14.391 EAL: No free 2048 kB hugepages reported on node 1 00:07:14.391 [2024-07-23 01:28:27.257318] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.391 [2024-07-23 01:28:27.346544] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.392 01:28:27 -- accel/accel.sh@21 -- # val= 00:07:14.392 01:28:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.392 01:28:27 -- accel/accel.sh@20 -- # IFS=: 00:07:14.392 01:28:27 -- accel/accel.sh@20 -- # read -r var val 00:07:14.392 01:28:27 -- accel/accel.sh@21 -- # val= 00:07:14.392 01:28:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.392 01:28:27 -- accel/accel.sh@20 -- # IFS=: 00:07:14.392 01:28:27 -- accel/accel.sh@20 -- # read -r var val 00:07:14.392 01:28:27 -- accel/accel.sh@21 -- # val=0x1 00:07:14.392 01:28:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.392 01:28:27 -- accel/accel.sh@20 -- # IFS=: 00:07:14.392 01:28:27 -- accel/accel.sh@20 -- # read -r var val 00:07:14.392 01:28:27 -- accel/accel.sh@21 -- # val= 00:07:14.392 01:28:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.392 01:28:27 -- accel/accel.sh@20 -- # IFS=: 00:07:14.392 01:28:27 -- accel/accel.sh@20 -- # read -r var val 00:07:14.392 01:28:27 -- accel/accel.sh@21 -- # val= 00:07:14.392 01:28:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.392 01:28:27 -- accel/accel.sh@20 -- # IFS=: 00:07:14.392 01:28:27 -- accel/accel.sh@20 -- # read -r var val 00:07:14.392 01:28:27 -- accel/accel.sh@21 -- # val=copy_crc32c 00:07:14.392 01:28:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.392 01:28:27 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:07:14.392 01:28:27 -- accel/accel.sh@20 -- # IFS=: 00:07:14.392 01:28:27 -- 
accel/accel.sh@20 -- # read -r var val 00:07:14.392 01:28:27 -- accel/accel.sh@21 -- # val=0 00:07:14.392 01:28:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.392 01:28:27 -- accel/accel.sh@20 -- # IFS=: 00:07:14.392 01:28:27 -- accel/accel.sh@20 -- # read -r var val 00:07:14.392 01:28:27 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:14.392 01:28:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.392 01:28:27 -- accel/accel.sh@20 -- # IFS=: 00:07:14.392 01:28:27 -- accel/accel.sh@20 -- # read -r var val 00:07:14.392 01:28:27 -- accel/accel.sh@21 -- # val='8192 bytes' 00:07:14.392 01:28:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.392 01:28:27 -- accel/accel.sh@20 -- # IFS=: 00:07:14.392 01:28:27 -- accel/accel.sh@20 -- # read -r var val 00:07:14.392 01:28:27 -- accel/accel.sh@21 -- # val= 00:07:14.392 01:28:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.392 01:28:27 -- accel/accel.sh@20 -- # IFS=: 00:07:14.392 01:28:27 -- accel/accel.sh@20 -- # read -r var val 00:07:14.392 01:28:27 -- accel/accel.sh@21 -- # val=software 00:07:14.392 01:28:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.392 01:28:27 -- accel/accel.sh@23 -- # accel_module=software 00:07:14.392 01:28:27 -- accel/accel.sh@20 -- # IFS=: 00:07:14.392 01:28:27 -- accel/accel.sh@20 -- # read -r var val 00:07:14.392 01:28:27 -- accel/accel.sh@21 -- # val=32 00:07:14.392 01:28:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.392 01:28:27 -- accel/accel.sh@20 -- # IFS=: 00:07:14.392 01:28:27 -- accel/accel.sh@20 -- # read -r var val 00:07:14.392 01:28:27 -- accel/accel.sh@21 -- # val=32 00:07:14.392 01:28:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.392 01:28:27 -- accel/accel.sh@20 -- # IFS=: 00:07:14.392 01:28:27 -- accel/accel.sh@20 -- # read -r var val 00:07:14.392 01:28:27 -- accel/accel.sh@21 -- # val=1 00:07:14.392 01:28:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.392 01:28:27 -- accel/accel.sh@20 -- # IFS=: 00:07:14.392 01:28:27 -- accel/accel.sh@20 
-- # read -r var val 00:07:14.392 01:28:27 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:14.392 01:28:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.392 01:28:27 -- accel/accel.sh@20 -- # IFS=: 00:07:14.392 01:28:27 -- accel/accel.sh@20 -- # read -r var val 00:07:14.392 01:28:27 -- accel/accel.sh@21 -- # val=Yes 00:07:14.392 01:28:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.392 01:28:27 -- accel/accel.sh@20 -- # IFS=: 00:07:14.392 01:28:27 -- accel/accel.sh@20 -- # read -r var val 00:07:14.392 01:28:27 -- accel/accel.sh@21 -- # val= 00:07:14.392 01:28:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.392 01:28:27 -- accel/accel.sh@20 -- # IFS=: 00:07:14.392 01:28:27 -- accel/accel.sh@20 -- # read -r var val 00:07:14.392 01:28:27 -- accel/accel.sh@21 -- # val= 00:07:14.392 01:28:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.392 01:28:27 -- accel/accel.sh@20 -- # IFS=: 00:07:14.392 01:28:27 -- accel/accel.sh@20 -- # read -r var val 00:07:15.765 01:28:28 -- accel/accel.sh@21 -- # val= 00:07:15.765 01:28:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.765 01:28:28 -- accel/accel.sh@20 -- # IFS=: 00:07:15.765 01:28:28 -- accel/accel.sh@20 -- # read -r var val 00:07:15.765 01:28:28 -- accel/accel.sh@21 -- # val= 00:07:15.765 01:28:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.765 01:28:28 -- accel/accel.sh@20 -- # IFS=: 00:07:15.765 01:28:28 -- accel/accel.sh@20 -- # read -r var val 00:07:15.765 01:28:28 -- accel/accel.sh@21 -- # val= 00:07:15.765 01:28:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.765 01:28:28 -- accel/accel.sh@20 -- # IFS=: 00:07:15.765 01:28:28 -- accel/accel.sh@20 -- # read -r var val 00:07:15.765 01:28:28 -- accel/accel.sh@21 -- # val= 00:07:15.765 01:28:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.765 01:28:28 -- accel/accel.sh@20 -- # IFS=: 00:07:15.765 01:28:28 -- accel/accel.sh@20 -- # read -r var val 00:07:15.765 01:28:28 -- accel/accel.sh@21 -- # val= 00:07:15.765 01:28:28 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:15.765 01:28:28 -- accel/accel.sh@20 -- # IFS=: 00:07:15.765 01:28:28 -- accel/accel.sh@20 -- # read -r var val 00:07:15.765 01:28:28 -- accel/accel.sh@21 -- # val= 00:07:15.765 01:28:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.765 01:28:28 -- accel/accel.sh@20 -- # IFS=: 00:07:15.765 01:28:28 -- accel/accel.sh@20 -- # read -r var val 00:07:15.765 01:28:28 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:15.765 01:28:28 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:07:15.765 01:28:28 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:15.765 00:07:15.765 real 0m2.799s 00:07:15.765 user 0m2.502s 00:07:15.765 sys 0m0.289s 00:07:15.765 01:28:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:15.765 01:28:28 -- common/autotest_common.sh@10 -- # set +x 00:07:15.765 ************************************ 00:07:15.765 END TEST accel_copy_crc32c_C2 00:07:15.765 ************************************ 00:07:15.765 01:28:28 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:15.765 01:28:28 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:15.765 01:28:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:15.765 01:28:28 -- common/autotest_common.sh@10 -- # set +x 00:07:15.765 ************************************ 00:07:15.765 START TEST accel_dualcast 00:07:15.765 ************************************ 00:07:15.765 01:28:28 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dualcast -y 00:07:15.765 01:28:28 -- accel/accel.sh@16 -- # local accel_opc 00:07:15.765 01:28:28 -- accel/accel.sh@17 -- # local accel_module 00:07:15.765 01:28:28 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:07:15.765 01:28:28 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:15.765 01:28:28 -- accel/accel.sh@12 -- # build_accel_config 00:07:15.765 01:28:28 -- 
accel/accel.sh@32 -- # accel_json_cfg=() 00:07:15.765 01:28:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:15.765 01:28:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:15.765 01:28:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:15.765 01:28:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:15.765 01:28:28 -- accel/accel.sh@41 -- # local IFS=, 00:07:15.765 01:28:28 -- accel/accel.sh@42 -- # jq -r . 00:07:15.765 [2024-07-23 01:28:28.619243] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:15.765 [2024-07-23 01:28:28.619323] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3667218 ] 00:07:15.765 EAL: No free 2048 kB hugepages reported on node 1 00:07:15.765 [2024-07-23 01:28:28.681705] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.765 [2024-07-23 01:28:28.770729] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.138 01:28:29 -- accel/accel.sh@18 -- # out=' 00:07:17.138 SPDK Configuration: 00:07:17.138 Core mask: 0x1 00:07:17.138 00:07:17.138 Accel Perf Configuration: 00:07:17.138 Workload Type: dualcast 00:07:17.138 Transfer size: 4096 bytes 00:07:17.138 Vector count 1 00:07:17.138 Module: software 00:07:17.138 Queue depth: 32 00:07:17.138 Allocate depth: 32 00:07:17.138 # threads/core: 1 00:07:17.138 Run time: 1 seconds 00:07:17.138 Verify: Yes 00:07:17.138 00:07:17.138 Running for 1 seconds... 
00:07:17.138 00:07:17.138 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:17.138 ------------------------------------------------------------------------------------ 00:07:17.138 0,0 299232/s 1168 MiB/s 0 0 00:07:17.138 ==================================================================================== 00:07:17.138 Total 299232/s 1168 MiB/s 0 0' 00:07:17.138 01:28:29 -- accel/accel.sh@20 -- # IFS=: 00:07:17.138 01:28:29 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:17.138 01:28:29 -- accel/accel.sh@20 -- # read -r var val 00:07:17.138 01:28:29 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:17.138 01:28:29 -- accel/accel.sh@12 -- # build_accel_config 00:07:17.138 01:28:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:17.138 01:28:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:17.138 01:28:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:17.138 01:28:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:17.138 01:28:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:17.138 01:28:29 -- accel/accel.sh@41 -- # local IFS=, 00:07:17.138 01:28:29 -- accel/accel.sh@42 -- # jq -r . 00:07:17.138 [2024-07-23 01:28:30.008041] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:07:17.138 [2024-07-23 01:28:30.008110] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3667475 ] 00:07:17.138 EAL: No free 2048 kB hugepages reported on node 1 00:07:17.138 [2024-07-23 01:28:30.073359] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.138 [2024-07-23 01:28:30.163571] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.138 01:28:30 -- accel/accel.sh@21 -- # val= 00:07:17.138 01:28:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.138 01:28:30 -- accel/accel.sh@20 -- # IFS=: 00:07:17.138 01:28:30 -- accel/accel.sh@20 -- # read -r var val 00:07:17.138 01:28:30 -- accel/accel.sh@21 -- # val= 00:07:17.138 01:28:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.138 01:28:30 -- accel/accel.sh@20 -- # IFS=: 00:07:17.138 01:28:30 -- accel/accel.sh@20 -- # read -r var val 00:07:17.138 01:28:30 -- accel/accel.sh@21 -- # val=0x1 00:07:17.138 01:28:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.138 01:28:30 -- accel/accel.sh@20 -- # IFS=: 00:07:17.138 01:28:30 -- accel/accel.sh@20 -- # read -r var val 00:07:17.138 01:28:30 -- accel/accel.sh@21 -- # val= 00:07:17.138 01:28:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.138 01:28:30 -- accel/accel.sh@20 -- # IFS=: 00:07:17.138 01:28:30 -- accel/accel.sh@20 -- # read -r var val 00:07:17.138 01:28:30 -- accel/accel.sh@21 -- # val= 00:07:17.138 01:28:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.138 01:28:30 -- accel/accel.sh@20 -- # IFS=: 00:07:17.138 01:28:30 -- accel/accel.sh@20 -- # read -r var val 00:07:17.138 01:28:30 -- accel/accel.sh@21 -- # val=dualcast 00:07:17.138 01:28:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.138 01:28:30 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:07:17.138 01:28:30 -- accel/accel.sh@20 -- # IFS=: 00:07:17.138 01:28:30 -- 
accel/accel.sh@20 -- # read -r var val 00:07:17.138 01:28:30 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:17.138 01:28:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.138 01:28:30 -- accel/accel.sh@20 -- # IFS=: 00:07:17.138 01:28:30 -- accel/accel.sh@20 -- # read -r var val 00:07:17.138 01:28:30 -- accel/accel.sh@21 -- # val= 00:07:17.138 01:28:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.138 01:28:30 -- accel/accel.sh@20 -- # IFS=: 00:07:17.138 01:28:30 -- accel/accel.sh@20 -- # read -r var val 00:07:17.138 01:28:30 -- accel/accel.sh@21 -- # val=software 00:07:17.138 01:28:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.138 01:28:30 -- accel/accel.sh@23 -- # accel_module=software 00:07:17.138 01:28:30 -- accel/accel.sh@20 -- # IFS=: 00:07:17.138 01:28:30 -- accel/accel.sh@20 -- # read -r var val 00:07:17.138 01:28:30 -- accel/accel.sh@21 -- # val=32 00:07:17.138 01:28:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.138 01:28:30 -- accel/accel.sh@20 -- # IFS=: 00:07:17.138 01:28:30 -- accel/accel.sh@20 -- # read -r var val 00:07:17.138 01:28:30 -- accel/accel.sh@21 -- # val=32 00:07:17.138 01:28:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.138 01:28:30 -- accel/accel.sh@20 -- # IFS=: 00:07:17.138 01:28:30 -- accel/accel.sh@20 -- # read -r var val 00:07:17.138 01:28:30 -- accel/accel.sh@21 -- # val=1 00:07:17.138 01:28:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.138 01:28:30 -- accel/accel.sh@20 -- # IFS=: 00:07:17.138 01:28:30 -- accel/accel.sh@20 -- # read -r var val 00:07:17.138 01:28:30 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:17.138 01:28:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.138 01:28:30 -- accel/accel.sh@20 -- # IFS=: 00:07:17.138 01:28:30 -- accel/accel.sh@20 -- # read -r var val 00:07:17.138 01:28:30 -- accel/accel.sh@21 -- # val=Yes 00:07:17.138 01:28:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.139 01:28:30 -- accel/accel.sh@20 -- # IFS=: 00:07:17.139 01:28:30 -- accel/accel.sh@20 
-- # read -r var val 00:07:17.139 01:28:30 -- accel/accel.sh@21 -- # val= 00:07:17.139 01:28:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.139 01:28:30 -- accel/accel.sh@20 -- # IFS=: 00:07:17.139 01:28:30 -- accel/accel.sh@20 -- # read -r var val 00:07:17.139 01:28:30 -- accel/accel.sh@21 -- # val= 00:07:17.139 01:28:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.139 01:28:30 -- accel/accel.sh@20 -- # IFS=: 00:07:17.139 01:28:30 -- accel/accel.sh@20 -- # read -r var val 00:07:18.510 01:28:31 -- accel/accel.sh@21 -- # val= 00:07:18.510 01:28:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.510 01:28:31 -- accel/accel.sh@20 -- # IFS=: 00:07:18.510 01:28:31 -- accel/accel.sh@20 -- # read -r var val 00:07:18.510 01:28:31 -- accel/accel.sh@21 -- # val= 00:07:18.510 01:28:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.510 01:28:31 -- accel/accel.sh@20 -- # IFS=: 00:07:18.510 01:28:31 -- accel/accel.sh@20 -- # read -r var val 00:07:18.510 01:28:31 -- accel/accel.sh@21 -- # val= 00:07:18.510 01:28:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.510 01:28:31 -- accel/accel.sh@20 -- # IFS=: 00:07:18.510 01:28:31 -- accel/accel.sh@20 -- # read -r var val 00:07:18.510 01:28:31 -- accel/accel.sh@21 -- # val= 00:07:18.510 01:28:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.510 01:28:31 -- accel/accel.sh@20 -- # IFS=: 00:07:18.510 01:28:31 -- accel/accel.sh@20 -- # read -r var val 00:07:18.510 01:28:31 -- accel/accel.sh@21 -- # val= 00:07:18.510 01:28:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.510 01:28:31 -- accel/accel.sh@20 -- # IFS=: 00:07:18.510 01:28:31 -- accel/accel.sh@20 -- # read -r var val 00:07:18.510 01:28:31 -- accel/accel.sh@21 -- # val= 00:07:18.510 01:28:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.510 01:28:31 -- accel/accel.sh@20 -- # IFS=: 00:07:18.510 01:28:31 -- accel/accel.sh@20 -- # read -r var val 00:07:18.510 01:28:31 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:18.510 01:28:31 -- 
accel/accel.sh@28 -- # [[ -n dualcast ]] 00:07:18.510 01:28:31 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:18.510 00:07:18.510 real 0m2.784s 00:07:18.510 user 0m2.493s 00:07:18.510 sys 0m0.283s 00:07:18.510 01:28:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:18.510 01:28:31 -- common/autotest_common.sh@10 -- # set +x 00:07:18.510 ************************************ 00:07:18.510 END TEST accel_dualcast 00:07:18.510 ************************************ 00:07:18.510 01:28:31 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:18.510 01:28:31 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:18.510 01:28:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:18.510 01:28:31 -- common/autotest_common.sh@10 -- # set +x 00:07:18.510 ************************************ 00:07:18.510 START TEST accel_compare 00:07:18.510 ************************************ 00:07:18.510 01:28:31 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compare -y 00:07:18.510 01:28:31 -- accel/accel.sh@16 -- # local accel_opc 00:07:18.510 01:28:31 -- accel/accel.sh@17 -- # local accel_module 00:07:18.510 01:28:31 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:07:18.510 01:28:31 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:18.510 01:28:31 -- accel/accel.sh@12 -- # build_accel_config 00:07:18.510 01:28:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:18.510 01:28:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:18.510 01:28:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:18.510 01:28:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:18.510 01:28:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:18.510 01:28:31 -- accel/accel.sh@41 -- # local IFS=, 00:07:18.510 01:28:31 -- accel/accel.sh@42 -- # jq -r . 
00:07:18.510 [2024-07-23 01:28:31.428423] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:18.510 [2024-07-23 01:28:31.428498] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3667634 ] 00:07:18.510 EAL: No free 2048 kB hugepages reported on node 1 00:07:18.510 [2024-07-23 01:28:31.489712] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.510 [2024-07-23 01:28:31.580877] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.882 01:28:32 -- accel/accel.sh@18 -- # out=' 00:07:19.882 SPDK Configuration: 00:07:19.882 Core mask: 0x1 00:07:19.882 00:07:19.882 Accel Perf Configuration: 00:07:19.882 Workload Type: compare 00:07:19.882 Transfer size: 4096 bytes 00:07:19.882 Vector count 1 00:07:19.882 Module: software 00:07:19.882 Queue depth: 32 00:07:19.882 Allocate depth: 32 00:07:19.882 # threads/core: 1 00:07:19.882 Run time: 1 seconds 00:07:19.882 Verify: Yes 00:07:19.882 00:07:19.882 Running for 1 seconds... 
00:07:19.882 00:07:19.882 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:19.882 ------------------------------------------------------------------------------------ 00:07:19.882 0,0 395552/s 1545 MiB/s 0 0 00:07:19.882 ==================================================================================== 00:07:19.882 Total 395552/s 1545 MiB/s 0 0' 00:07:19.882 01:28:32 -- accel/accel.sh@20 -- # IFS=: 00:07:19.882 01:28:32 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:19.882 01:28:32 -- accel/accel.sh@20 -- # read -r var val 00:07:19.882 01:28:32 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:19.882 01:28:32 -- accel/accel.sh@12 -- # build_accel_config 00:07:19.882 01:28:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:19.882 01:28:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.882 01:28:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.882 01:28:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:19.882 01:28:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:19.882 01:28:32 -- accel/accel.sh@41 -- # local IFS=, 00:07:19.882 01:28:32 -- accel/accel.sh@42 -- # jq -r . 00:07:19.882 [2024-07-23 01:28:32.830679] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:07:19.882 [2024-07-23 01:28:32.830752] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3667779 ] 00:07:19.882 EAL: No free 2048 kB hugepages reported on node 1 00:07:19.882 [2024-07-23 01:28:32.895844] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.140 [2024-07-23 01:28:32.986344] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.140 01:28:33 -- accel/accel.sh@21 -- # val= 00:07:20.140 01:28:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.140 01:28:33 -- accel/accel.sh@20 -- # IFS=: 00:07:20.140 01:28:33 -- accel/accel.sh@20 -- # read -r var val 00:07:20.140 01:28:33 -- accel/accel.sh@21 -- # val= 00:07:20.140 01:28:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.140 01:28:33 -- accel/accel.sh@20 -- # IFS=: 00:07:20.140 01:28:33 -- accel/accel.sh@20 -- # read -r var val 00:07:20.140 01:28:33 -- accel/accel.sh@21 -- # val=0x1 00:07:20.140 01:28:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.140 01:28:33 -- accel/accel.sh@20 -- # IFS=: 00:07:20.140 01:28:33 -- accel/accel.sh@20 -- # read -r var val 00:07:20.140 01:28:33 -- accel/accel.sh@21 -- # val= 00:07:20.140 01:28:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.140 01:28:33 -- accel/accel.sh@20 -- # IFS=: 00:07:20.140 01:28:33 -- accel/accel.sh@20 -- # read -r var val 00:07:20.140 01:28:33 -- accel/accel.sh@21 -- # val= 00:07:20.140 01:28:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.140 01:28:33 -- accel/accel.sh@20 -- # IFS=: 00:07:20.140 01:28:33 -- accel/accel.sh@20 -- # read -r var val 00:07:20.140 01:28:33 -- accel/accel.sh@21 -- # val=compare 00:07:20.140 01:28:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.140 01:28:33 -- accel/accel.sh@24 -- # accel_opc=compare 00:07:20.140 01:28:33 -- accel/accel.sh@20 -- # IFS=: 00:07:20.140 01:28:33 -- 
accel/accel.sh@20 -- # read -r var val 00:07:20.140 01:28:33 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:20.140 01:28:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.140 01:28:33 -- accel/accel.sh@20 -- # IFS=: 00:07:20.140 01:28:33 -- accel/accel.sh@20 -- # read -r var val 00:07:20.140 01:28:33 -- accel/accel.sh@21 -- # val= 00:07:20.140 01:28:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.140 01:28:33 -- accel/accel.sh@20 -- # IFS=: 00:07:20.140 01:28:33 -- accel/accel.sh@20 -- # read -r var val 00:07:20.140 01:28:33 -- accel/accel.sh@21 -- # val=software 00:07:20.140 01:28:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.140 01:28:33 -- accel/accel.sh@23 -- # accel_module=software 00:07:20.140 01:28:33 -- accel/accel.sh@20 -- # IFS=: 00:07:20.140 01:28:33 -- accel/accel.sh@20 -- # read -r var val 00:07:20.140 01:28:33 -- accel/accel.sh@21 -- # val=32 00:07:20.140 01:28:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.140 01:28:33 -- accel/accel.sh@20 -- # IFS=: 00:07:20.140 01:28:33 -- accel/accel.sh@20 -- # read -r var val 00:07:20.140 01:28:33 -- accel/accel.sh@21 -- # val=32 00:07:20.140 01:28:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.140 01:28:33 -- accel/accel.sh@20 -- # IFS=: 00:07:20.140 01:28:33 -- accel/accel.sh@20 -- # read -r var val 00:07:20.140 01:28:33 -- accel/accel.sh@21 -- # val=1 00:07:20.140 01:28:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.140 01:28:33 -- accel/accel.sh@20 -- # IFS=: 00:07:20.140 01:28:33 -- accel/accel.sh@20 -- # read -r var val 00:07:20.140 01:28:33 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:20.140 01:28:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.140 01:28:33 -- accel/accel.sh@20 -- # IFS=: 00:07:20.140 01:28:33 -- accel/accel.sh@20 -- # read -r var val 00:07:20.140 01:28:33 -- accel/accel.sh@21 -- # val=Yes 00:07:20.140 01:28:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.140 01:28:33 -- accel/accel.sh@20 -- # IFS=: 00:07:20.140 01:28:33 -- accel/accel.sh@20 
-- # read -r var val 00:07:20.140 01:28:33 -- accel/accel.sh@21 -- # val= 00:07:20.140 01:28:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.140 01:28:33 -- accel/accel.sh@20 -- # IFS=: 00:07:20.140 01:28:33 -- accel/accel.sh@20 -- # read -r var val 00:07:20.140 01:28:33 -- accel/accel.sh@21 -- # val= 00:07:20.140 01:28:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.140 01:28:33 -- accel/accel.sh@20 -- # IFS=: 00:07:20.140 01:28:33 -- accel/accel.sh@20 -- # read -r var val 00:07:21.512 01:28:34 -- accel/accel.sh@21 -- # val= 00:07:21.512 01:28:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.512 01:28:34 -- accel/accel.sh@20 -- # IFS=: 00:07:21.512 01:28:34 -- accel/accel.sh@20 -- # read -r var val 00:07:21.512 01:28:34 -- accel/accel.sh@21 -- # val= 00:07:21.512 01:28:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.512 01:28:34 -- accel/accel.sh@20 -- # IFS=: 00:07:21.512 01:28:34 -- accel/accel.sh@20 -- # read -r var val 00:07:21.512 01:28:34 -- accel/accel.sh@21 -- # val= 00:07:21.512 01:28:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.512 01:28:34 -- accel/accel.sh@20 -- # IFS=: 00:07:21.512 01:28:34 -- accel/accel.sh@20 -- # read -r var val 00:07:21.512 01:28:34 -- accel/accel.sh@21 -- # val= 00:07:21.512 01:28:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.512 01:28:34 -- accel/accel.sh@20 -- # IFS=: 00:07:21.512 01:28:34 -- accel/accel.sh@20 -- # read -r var val 00:07:21.512 01:28:34 -- accel/accel.sh@21 -- # val= 00:07:21.512 01:28:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.512 01:28:34 -- accel/accel.sh@20 -- # IFS=: 00:07:21.512 01:28:34 -- accel/accel.sh@20 -- # read -r var val 00:07:21.512 01:28:34 -- accel/accel.sh@21 -- # val= 00:07:21.512 01:28:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.512 01:28:34 -- accel/accel.sh@20 -- # IFS=: 00:07:21.512 01:28:34 -- accel/accel.sh@20 -- # read -r var val 00:07:21.512 01:28:34 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:21.512 01:28:34 -- 
accel/accel.sh@28 -- # [[ -n compare ]] 00:07:21.512 01:28:34 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:21.512 00:07:21.512 real 0m2.806s 00:07:21.512 user 0m2.507s 00:07:21.512 sys 0m0.291s 00:07:21.512 01:28:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:21.512 01:28:34 -- common/autotest_common.sh@10 -- # set +x 00:07:21.512 ************************************ 00:07:21.512 END TEST accel_compare 00:07:21.512 ************************************ 00:07:21.512 01:28:34 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:21.512 01:28:34 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:21.512 01:28:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:21.512 01:28:34 -- common/autotest_common.sh@10 -- # set +x 00:07:21.512 ************************************ 00:07:21.512 START TEST accel_xor 00:07:21.512 ************************************ 00:07:21.512 01:28:34 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y 00:07:21.512 01:28:34 -- accel/accel.sh@16 -- # local accel_opc 00:07:21.512 01:28:34 -- accel/accel.sh@17 -- # local accel_module 00:07:21.512 01:28:34 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:07:21.512 01:28:34 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:21.512 01:28:34 -- accel/accel.sh@12 -- # build_accel_config 00:07:21.512 01:28:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:21.512 01:28:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:21.512 01:28:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:21.512 01:28:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:21.512 01:28:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:21.512 01:28:34 -- accel/accel.sh@41 -- # local IFS=, 00:07:21.512 01:28:34 -- accel/accel.sh@42 -- # jq -r . 
00:07:21.512 [2024-07-23 01:28:34.257792] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:21.512 [2024-07-23 01:28:34.257866] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3667938 ] 00:07:21.512 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.512 [2024-07-23 01:28:34.319061] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.512 [2024-07-23 01:28:34.413764] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.885 01:28:35 -- accel/accel.sh@18 -- # out=' 00:07:22.885 SPDK Configuration: 00:07:22.885 Core mask: 0x1 00:07:22.885 00:07:22.885 Accel Perf Configuration: 00:07:22.885 Workload Type: xor 00:07:22.885 Source buffers: 2 00:07:22.885 Transfer size: 4096 bytes 00:07:22.885 Vector count 1 00:07:22.885 Module: software 00:07:22.885 Queue depth: 32 00:07:22.885 Allocate depth: 32 00:07:22.885 # threads/core: 1 00:07:22.885 Run time: 1 seconds 00:07:22.885 Verify: Yes 00:07:22.885 00:07:22.885 Running for 1 seconds... 
00:07:22.885 00:07:22.885 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:22.885 ------------------------------------------------------------------------------------ 00:07:22.885 0,0 192576/s 752 MiB/s 0 0 00:07:22.885 ==================================================================================== 00:07:22.885 Total 192576/s 752 MiB/s 0 0' 00:07:22.885 01:28:35 -- accel/accel.sh@20 -- # IFS=: 00:07:22.885 01:28:35 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:22.885 01:28:35 -- accel/accel.sh@20 -- # read -r var val 00:07:22.885 01:28:35 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:22.885 01:28:35 -- accel/accel.sh@12 -- # build_accel_config 00:07:22.885 01:28:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:22.885 01:28:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:22.885 01:28:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:22.885 01:28:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:22.885 01:28:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:22.885 01:28:35 -- accel/accel.sh@41 -- # local IFS=, 00:07:22.885 01:28:35 -- accel/accel.sh@42 -- # jq -r . 00:07:22.885 [2024-07-23 01:28:35.651981] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:07:22.885 [2024-07-23 01:28:35.652051] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3668196 ] 00:07:22.885 EAL: No free 2048 kB hugepages reported on node 1 00:07:22.885 [2024-07-23 01:28:35.714495] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.885 [2024-07-23 01:28:35.804652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.885 01:28:35 -- accel/accel.sh@21 -- # val= 00:07:22.885 01:28:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.885 01:28:35 -- accel/accel.sh@20 -- # IFS=: 00:07:22.885 01:28:35 -- accel/accel.sh@20 -- # read -r var val 00:07:22.885 01:28:35 -- accel/accel.sh@21 -- # val= 00:07:22.885 01:28:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.885 01:28:35 -- accel/accel.sh@20 -- # IFS=: 00:07:22.885 01:28:35 -- accel/accel.sh@20 -- # read -r var val 00:07:22.885 01:28:35 -- accel/accel.sh@21 -- # val=0x1 00:07:22.885 01:28:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.885 01:28:35 -- accel/accel.sh@20 -- # IFS=: 00:07:22.885 01:28:35 -- accel/accel.sh@20 -- # read -r var val 00:07:22.885 01:28:35 -- accel/accel.sh@21 -- # val= 00:07:22.885 01:28:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.885 01:28:35 -- accel/accel.sh@20 -- # IFS=: 00:07:22.885 01:28:35 -- accel/accel.sh@20 -- # read -r var val 00:07:22.885 01:28:35 -- accel/accel.sh@21 -- # val= 00:07:22.885 01:28:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.885 01:28:35 -- accel/accel.sh@20 -- # IFS=: 00:07:22.885 01:28:35 -- accel/accel.sh@20 -- # read -r var val 00:07:22.885 01:28:35 -- accel/accel.sh@21 -- # val=xor 00:07:22.885 01:28:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.885 01:28:35 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:22.885 01:28:35 -- accel/accel.sh@20 -- # IFS=: 00:07:22.885 01:28:35 -- 
accel/accel.sh@20 -- # read -r var val 00:07:22.885 01:28:35 -- accel/accel.sh@21 -- # val=2 00:07:22.885 01:28:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.885 01:28:35 -- accel/accel.sh@20 -- # IFS=: 00:07:22.885 01:28:35 -- accel/accel.sh@20 -- # read -r var val 00:07:22.885 01:28:35 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:22.885 01:28:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.885 01:28:35 -- accel/accel.sh@20 -- # IFS=: 00:07:22.885 01:28:35 -- accel/accel.sh@20 -- # read -r var val 00:07:22.885 01:28:35 -- accel/accel.sh@21 -- # val= 00:07:22.885 01:28:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.885 01:28:35 -- accel/accel.sh@20 -- # IFS=: 00:07:22.885 01:28:35 -- accel/accel.sh@20 -- # read -r var val 00:07:22.885 01:28:35 -- accel/accel.sh@21 -- # val=software 00:07:22.885 01:28:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.885 01:28:35 -- accel/accel.sh@23 -- # accel_module=software 00:07:22.885 01:28:35 -- accel/accel.sh@20 -- # IFS=: 00:07:22.885 01:28:35 -- accel/accel.sh@20 -- # read -r var val 00:07:22.885 01:28:35 -- accel/accel.sh@21 -- # val=32 00:07:22.885 01:28:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.885 01:28:35 -- accel/accel.sh@20 -- # IFS=: 00:07:22.885 01:28:35 -- accel/accel.sh@20 -- # read -r var val 00:07:22.885 01:28:35 -- accel/accel.sh@21 -- # val=32 00:07:22.885 01:28:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.885 01:28:35 -- accel/accel.sh@20 -- # IFS=: 00:07:22.885 01:28:35 -- accel/accel.sh@20 -- # read -r var val 00:07:22.885 01:28:35 -- accel/accel.sh@21 -- # val=1 00:07:22.885 01:28:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.885 01:28:35 -- accel/accel.sh@20 -- # IFS=: 00:07:22.885 01:28:35 -- accel/accel.sh@20 -- # read -r var val 00:07:22.885 01:28:35 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:22.885 01:28:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.885 01:28:35 -- accel/accel.sh@20 -- # IFS=: 00:07:22.885 01:28:35 -- accel/accel.sh@20 -- 
# read -r var val 00:07:22.885 01:28:35 -- accel/accel.sh@21 -- # val=Yes 00:07:22.885 01:28:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.885 01:28:35 -- accel/accel.sh@20 -- # IFS=: 00:07:22.885 01:28:35 -- accel/accel.sh@20 -- # read -r var val 00:07:22.885 01:28:35 -- accel/accel.sh@21 -- # val= 00:07:22.885 01:28:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.885 01:28:35 -- accel/accel.sh@20 -- # IFS=: 00:07:22.885 01:28:35 -- accel/accel.sh@20 -- # read -r var val 00:07:22.885 01:28:35 -- accel/accel.sh@21 -- # val= 00:07:22.885 01:28:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.885 01:28:35 -- accel/accel.sh@20 -- # IFS=: 00:07:22.885 01:28:35 -- accel/accel.sh@20 -- # read -r var val 00:07:24.258 01:28:37 -- accel/accel.sh@21 -- # val= 00:07:24.258 01:28:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.258 01:28:37 -- accel/accel.sh@20 -- # IFS=: 00:07:24.258 01:28:37 -- accel/accel.sh@20 -- # read -r var val 00:07:24.258 01:28:37 -- accel/accel.sh@21 -- # val= 00:07:24.258 01:28:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.258 01:28:37 -- accel/accel.sh@20 -- # IFS=: 00:07:24.258 01:28:37 -- accel/accel.sh@20 -- # read -r var val 00:07:24.258 01:28:37 -- accel/accel.sh@21 -- # val= 00:07:24.258 01:28:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.258 01:28:37 -- accel/accel.sh@20 -- # IFS=: 00:07:24.258 01:28:37 -- accel/accel.sh@20 -- # read -r var val 00:07:24.258 01:28:37 -- accel/accel.sh@21 -- # val= 00:07:24.258 01:28:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.258 01:28:37 -- accel/accel.sh@20 -- # IFS=: 00:07:24.258 01:28:37 -- accel/accel.sh@20 -- # read -r var val 00:07:24.258 01:28:37 -- accel/accel.sh@21 -- # val= 00:07:24.258 01:28:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.258 01:28:37 -- accel/accel.sh@20 -- # IFS=: 00:07:24.258 01:28:37 -- accel/accel.sh@20 -- # read -r var val 00:07:24.258 01:28:37 -- accel/accel.sh@21 -- # val= 00:07:24.258 01:28:37 -- accel/accel.sh@22 -- # case 
"$var" in 00:07:24.258 01:28:37 -- accel/accel.sh@20 -- # IFS=: 00:07:24.258 01:28:37 -- accel/accel.sh@20 -- # read -r var val 00:07:24.258 01:28:37 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:24.259 01:28:37 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:24.259 01:28:37 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:24.259 00:07:24.259 real 0m2.794s 00:07:24.259 user 0m2.510s 00:07:24.259 sys 0m0.276s 00:07:24.259 01:28:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:24.259 01:28:37 -- common/autotest_common.sh@10 -- # set +x 00:07:24.259 ************************************ 00:07:24.259 END TEST accel_xor 00:07:24.259 ************************************ 00:07:24.259 01:28:37 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:24.259 01:28:37 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:07:24.259 01:28:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:24.259 01:28:37 -- common/autotest_common.sh@10 -- # set +x 00:07:24.259 ************************************ 00:07:24.259 START TEST accel_xor 00:07:24.259 ************************************ 00:07:24.259 01:28:37 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y -x 3 00:07:24.259 01:28:37 -- accel/accel.sh@16 -- # local accel_opc 00:07:24.259 01:28:37 -- accel/accel.sh@17 -- # local accel_module 00:07:24.259 01:28:37 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:07:24.259 01:28:37 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:24.259 01:28:37 -- accel/accel.sh@12 -- # build_accel_config 00:07:24.259 01:28:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:24.259 01:28:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:24.259 01:28:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:24.259 01:28:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:24.259 01:28:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 
00:07:24.259 01:28:37 -- accel/accel.sh@41 -- # local IFS=, 00:07:24.259 01:28:37 -- accel/accel.sh@42 -- # jq -r . 00:07:24.259 [2024-07-23 01:28:37.079248] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:24.259 [2024-07-23 01:28:37.079325] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3668361 ] 00:07:24.259 EAL: No free 2048 kB hugepages reported on node 1 00:07:24.259 [2024-07-23 01:28:37.140964] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.259 [2024-07-23 01:28:37.232005] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.663 01:28:38 -- accel/accel.sh@18 -- # out=' 00:07:25.663 SPDK Configuration: 00:07:25.663 Core mask: 0x1 00:07:25.663 00:07:25.663 Accel Perf Configuration: 00:07:25.663 Workload Type: xor 00:07:25.663 Source buffers: 3 00:07:25.663 Transfer size: 4096 bytes 00:07:25.663 Vector count 1 00:07:25.663 Module: software 00:07:25.663 Queue depth: 32 00:07:25.663 Allocate depth: 32 00:07:25.663 # threads/core: 1 00:07:25.663 Run time: 1 seconds 00:07:25.663 Verify: Yes 00:07:25.663 00:07:25.663 Running for 1 seconds... 
00:07:25.663 00:07:25.663 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:25.663 ------------------------------------------------------------------------------------ 00:07:25.663 0,0 184608/s 721 MiB/s 0 0 00:07:25.663 ==================================================================================== 00:07:25.663 Total 184608/s 721 MiB/s 0 0' 00:07:25.663 01:28:38 -- accel/accel.sh@20 -- # IFS=: 00:07:25.663 01:28:38 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:25.663 01:28:38 -- accel/accel.sh@20 -- # read -r var val 00:07:25.663 01:28:38 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:25.663 01:28:38 -- accel/accel.sh@12 -- # build_accel_config 00:07:25.663 01:28:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:25.663 01:28:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:25.663 01:28:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:25.663 01:28:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:25.663 01:28:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:25.663 01:28:38 -- accel/accel.sh@41 -- # local IFS=, 00:07:25.663 01:28:38 -- accel/accel.sh@42 -- # jq -r . 00:07:25.663 [2024-07-23 01:28:38.483119] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:07:25.663 [2024-07-23 01:28:38.483201] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3668503 ] 00:07:25.663 EAL: No free 2048 kB hugepages reported on node 1 00:07:25.663 [2024-07-23 01:28:38.547408] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.663 [2024-07-23 01:28:38.637595] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.663 01:28:38 -- accel/accel.sh@21 -- # val= 00:07:25.663 01:28:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.663 01:28:38 -- accel/accel.sh@20 -- # IFS=: 00:07:25.663 01:28:38 -- accel/accel.sh@20 -- # read -r var val 00:07:25.663 01:28:38 -- accel/accel.sh@21 -- # val= 00:07:25.663 01:28:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.663 01:28:38 -- accel/accel.sh@20 -- # IFS=: 00:07:25.663 01:28:38 -- accel/accel.sh@20 -- # read -r var val 00:07:25.663 01:28:38 -- accel/accel.sh@21 -- # val=0x1 00:07:25.663 01:28:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.663 01:28:38 -- accel/accel.sh@20 -- # IFS=: 00:07:25.663 01:28:38 -- accel/accel.sh@20 -- # read -r var val 00:07:25.663 01:28:38 -- accel/accel.sh@21 -- # val= 00:07:25.663 01:28:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.663 01:28:38 -- accel/accel.sh@20 -- # IFS=: 00:07:25.664 01:28:38 -- accel/accel.sh@20 -- # read -r var val 00:07:25.664 01:28:38 -- accel/accel.sh@21 -- # val= 00:07:25.664 01:28:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.664 01:28:38 -- accel/accel.sh@20 -- # IFS=: 00:07:25.664 01:28:38 -- accel/accel.sh@20 -- # read -r var val 00:07:25.664 01:28:38 -- accel/accel.sh@21 -- # val=xor 00:07:25.664 01:28:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.664 01:28:38 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:25.664 01:28:38 -- accel/accel.sh@20 -- # IFS=: 00:07:25.664 01:28:38 -- 
accel/accel.sh@20 -- # read -r var val 00:07:25.664 01:28:38 -- accel/accel.sh@21 -- # val=3 00:07:25.664 01:28:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.664 01:28:38 -- accel/accel.sh@20 -- # IFS=: 00:07:25.664 01:28:38 -- accel/accel.sh@20 -- # read -r var val 00:07:25.664 01:28:38 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:25.664 01:28:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.664 01:28:38 -- accel/accel.sh@20 -- # IFS=: 00:07:25.664 01:28:38 -- accel/accel.sh@20 -- # read -r var val 00:07:25.664 01:28:38 -- accel/accel.sh@21 -- # val= 00:07:25.664 01:28:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.664 01:28:38 -- accel/accel.sh@20 -- # IFS=: 00:07:25.664 01:28:38 -- accel/accel.sh@20 -- # read -r var val 00:07:25.664 01:28:38 -- accel/accel.sh@21 -- # val=software 00:07:25.664 01:28:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.664 01:28:38 -- accel/accel.sh@23 -- # accel_module=software 00:07:25.664 01:28:38 -- accel/accel.sh@20 -- # IFS=: 00:07:25.664 01:28:38 -- accel/accel.sh@20 -- # read -r var val 00:07:25.664 01:28:38 -- accel/accel.sh@21 -- # val=32 00:07:25.664 01:28:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.664 01:28:38 -- accel/accel.sh@20 -- # IFS=: 00:07:25.664 01:28:38 -- accel/accel.sh@20 -- # read -r var val 00:07:25.664 01:28:38 -- accel/accel.sh@21 -- # val=32 00:07:25.664 01:28:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.664 01:28:38 -- accel/accel.sh@20 -- # IFS=: 00:07:25.664 01:28:38 -- accel/accel.sh@20 -- # read -r var val 00:07:25.664 01:28:38 -- accel/accel.sh@21 -- # val=1 00:07:25.664 01:28:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.664 01:28:38 -- accel/accel.sh@20 -- # IFS=: 00:07:25.664 01:28:38 -- accel/accel.sh@20 -- # read -r var val 00:07:25.664 01:28:38 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:25.664 01:28:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.664 01:28:38 -- accel/accel.sh@20 -- # IFS=: 00:07:25.664 01:28:38 -- accel/accel.sh@20 -- 
# read -r var val 00:07:25.664 01:28:38 -- accel/accel.sh@21 -- # val=Yes 00:07:25.664 01:28:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.664 01:28:38 -- accel/accel.sh@20 -- # IFS=: 00:07:25.664 01:28:38 -- accel/accel.sh@20 -- # read -r var val 00:07:25.664 01:28:38 -- accel/accel.sh@21 -- # val= 00:07:25.664 01:28:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.664 01:28:38 -- accel/accel.sh@20 -- # IFS=: 00:07:25.664 01:28:38 -- accel/accel.sh@20 -- # read -r var val 00:07:25.664 01:28:38 -- accel/accel.sh@21 -- # val= 00:07:25.664 01:28:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.664 01:28:38 -- accel/accel.sh@20 -- # IFS=: 00:07:25.664 01:28:38 -- accel/accel.sh@20 -- # read -r var val 00:07:27.038 01:28:39 -- accel/accel.sh@21 -- # val= 00:07:27.038 01:28:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.038 01:28:39 -- accel/accel.sh@20 -- # IFS=: 00:07:27.038 01:28:39 -- accel/accel.sh@20 -- # read -r var val 00:07:27.039 01:28:39 -- accel/accel.sh@21 -- # val= 00:07:27.039 01:28:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.039 01:28:39 -- accel/accel.sh@20 -- # IFS=: 00:07:27.039 01:28:39 -- accel/accel.sh@20 -- # read -r var val 00:07:27.039 01:28:39 -- accel/accel.sh@21 -- # val= 00:07:27.039 01:28:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.039 01:28:39 -- accel/accel.sh@20 -- # IFS=: 00:07:27.039 01:28:39 -- accel/accel.sh@20 -- # read -r var val 00:07:27.039 01:28:39 -- accel/accel.sh@21 -- # val= 00:07:27.039 01:28:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.039 01:28:39 -- accel/accel.sh@20 -- # IFS=: 00:07:27.039 01:28:39 -- accel/accel.sh@20 -- # read -r var val 00:07:27.039 01:28:39 -- accel/accel.sh@21 -- # val= 00:07:27.039 01:28:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.039 01:28:39 -- accel/accel.sh@20 -- # IFS=: 00:07:27.039 01:28:39 -- accel/accel.sh@20 -- # read -r var val 00:07:27.039 01:28:39 -- accel/accel.sh@21 -- # val= 00:07:27.039 01:28:39 -- accel/accel.sh@22 -- # case 
"$var" in 00:07:27.039 01:28:39 -- accel/accel.sh@20 -- # IFS=: 00:07:27.039 01:28:39 -- accel/accel.sh@20 -- # read -r var val 00:07:27.039 01:28:39 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:27.039 01:28:39 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:27.039 01:28:39 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:27.039 00:07:27.039 real 0m2.810s 00:07:27.039 user 0m2.514s 00:07:27.039 sys 0m0.289s 00:07:27.039 01:28:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:27.039 01:28:39 -- common/autotest_common.sh@10 -- # set +x 00:07:27.039 ************************************ 00:07:27.039 END TEST accel_xor 00:07:27.039 ************************************ 00:07:27.039 01:28:39 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:27.039 01:28:39 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:07:27.039 01:28:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:27.039 01:28:39 -- common/autotest_common.sh@10 -- # set +x 00:07:27.039 ************************************ 00:07:27.039 START TEST accel_dif_verify 00:07:27.039 ************************************ 00:07:27.039 01:28:39 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_verify 00:07:27.039 01:28:39 -- accel/accel.sh@16 -- # local accel_opc 00:07:27.039 01:28:39 -- accel/accel.sh@17 -- # local accel_module 00:07:27.039 01:28:39 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:07:27.039 01:28:39 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:27.039 01:28:39 -- accel/accel.sh@12 -- # build_accel_config 00:07:27.039 01:28:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:27.039 01:28:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:27.039 01:28:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:27.039 01:28:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:27.039 01:28:39 -- accel/accel.sh@37 -- # [[ -n 
'' ]] 00:07:27.039 01:28:39 -- accel/accel.sh@41 -- # local IFS=, 00:07:27.039 01:28:39 -- accel/accel.sh@42 -- # jq -r . 00:07:27.039 [2024-07-23 01:28:39.913809] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:27.039 [2024-07-23 01:28:39.913881] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3668666 ] 00:07:27.039 EAL: No free 2048 kB hugepages reported on node 1 00:07:27.039 [2024-07-23 01:28:39.976255] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.039 [2024-07-23 01:28:40.076601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.412 01:28:41 -- accel/accel.sh@18 -- # out=' 00:07:28.412 SPDK Configuration: 00:07:28.412 Core mask: 0x1 00:07:28.412 00:07:28.412 Accel Perf Configuration: 00:07:28.412 Workload Type: dif_verify 00:07:28.412 Vector size: 4096 bytes 00:07:28.412 Transfer size: 4096 bytes 00:07:28.412 Block size: 512 bytes 00:07:28.412 Metadata size: 8 bytes 00:07:28.412 Vector count 1 00:07:28.412 Module: software 00:07:28.412 Queue depth: 32 00:07:28.412 Allocate depth: 32 00:07:28.412 # threads/core: 1 00:07:28.412 Run time: 1 seconds 00:07:28.412 Verify: No 00:07:28.412 00:07:28.412 Running for 1 seconds... 
00:07:28.412 00:07:28.412 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:28.412 ------------------------------------------------------------------------------------ 00:07:28.412 0,0 81248/s 317 MiB/s 0 0 00:07:28.412 ==================================================================================== 00:07:28.412 Total 81248/s 317 MiB/s 0 0' 00:07:28.412 01:28:41 -- accel/accel.sh@20 -- # IFS=: 00:07:28.412 01:28:41 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:28.413 01:28:41 -- accel/accel.sh@20 -- # read -r var val 00:07:28.413 01:28:41 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:28.413 01:28:41 -- accel/accel.sh@12 -- # build_accel_config 00:07:28.413 01:28:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:28.413 01:28:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:28.413 01:28:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:28.413 01:28:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:28.413 01:28:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:28.413 01:28:41 -- accel/accel.sh@41 -- # local IFS=, 00:07:28.413 01:28:41 -- accel/accel.sh@42 -- # jq -r . 00:07:28.413 [2024-07-23 01:28:41.330980] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
00:07:28.413 [2024-07-23 01:28:41.331062] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3668926 ] 00:07:28.413 EAL: No free 2048 kB hugepages reported on node 1 00:07:28.413 [2024-07-23 01:28:41.396015] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.413 [2024-07-23 01:28:41.486529] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.672 01:28:41 -- accel/accel.sh@21 -- # val= 00:07:28.672 01:28:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.672 01:28:41 -- accel/accel.sh@20 -- # IFS=: 00:07:28.672 01:28:41 -- accel/accel.sh@20 -- # read -r var val 00:07:28.672 01:28:41 -- accel/accel.sh@21 -- # val= 00:07:28.672 01:28:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.672 01:28:41 -- accel/accel.sh@20 -- # IFS=: 00:07:28.672 01:28:41 -- accel/accel.sh@20 -- # read -r var val 00:07:28.672 01:28:41 -- accel/accel.sh@21 -- # val=0x1 00:07:28.672 01:28:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.672 01:28:41 -- accel/accel.sh@20 -- # IFS=: 00:07:28.672 01:28:41 -- accel/accel.sh@20 -- # read -r var val 00:07:28.672 01:28:41 -- accel/accel.sh@21 -- # val= 00:07:28.672 01:28:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.672 01:28:41 -- accel/accel.sh@20 -- # IFS=: 00:07:28.672 01:28:41 -- accel/accel.sh@20 -- # read -r var val 00:07:28.672 01:28:41 -- accel/accel.sh@21 -- # val= 00:07:28.672 01:28:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.672 01:28:41 -- accel/accel.sh@20 -- # IFS=: 00:07:28.672 01:28:41 -- accel/accel.sh@20 -- # read -r var val 00:07:28.672 01:28:41 -- accel/accel.sh@21 -- # val=dif_verify 00:07:28.672 01:28:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.672 01:28:41 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:07:28.672 01:28:41 -- accel/accel.sh@20 -- # IFS=: 00:07:28.672 01:28:41 -- 
accel/accel.sh@20 -- # read -r var val 00:07:28.672 01:28:41 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:28.672 01:28:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.672 01:28:41 -- accel/accel.sh@20 -- # IFS=: 00:07:28.672 01:28:41 -- accel/accel.sh@20 -- # read -r var val 00:07:28.672 01:28:41 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:28.672 01:28:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.672 01:28:41 -- accel/accel.sh@20 -- # IFS=: 00:07:28.672 01:28:41 -- accel/accel.sh@20 -- # read -r var val 00:07:28.672 01:28:41 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:28.672 01:28:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.672 01:28:41 -- accel/accel.sh@20 -- # IFS=: 00:07:28.672 01:28:41 -- accel/accel.sh@20 -- # read -r var val 00:07:28.672 01:28:41 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:28.672 01:28:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.672 01:28:41 -- accel/accel.sh@20 -- # IFS=: 00:07:28.672 01:28:41 -- accel/accel.sh@20 -- # read -r var val 00:07:28.672 01:28:41 -- accel/accel.sh@21 -- # val= 00:07:28.672 01:28:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.672 01:28:41 -- accel/accel.sh@20 -- # IFS=: 00:07:28.672 01:28:41 -- accel/accel.sh@20 -- # read -r var val 00:07:28.672 01:28:41 -- accel/accel.sh@21 -- # val=software 00:07:28.672 01:28:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.672 01:28:41 -- accel/accel.sh@23 -- # accel_module=software 00:07:28.672 01:28:41 -- accel/accel.sh@20 -- # IFS=: 00:07:28.672 01:28:41 -- accel/accel.sh@20 -- # read -r var val 00:07:28.672 01:28:41 -- accel/accel.sh@21 -- # val=32 00:07:28.672 01:28:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.672 01:28:41 -- accel/accel.sh@20 -- # IFS=: 00:07:28.672 01:28:41 -- accel/accel.sh@20 -- # read -r var val 00:07:28.672 01:28:41 -- accel/accel.sh@21 -- # val=32 00:07:28.672 01:28:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.672 01:28:41 -- accel/accel.sh@20 -- # IFS=: 00:07:28.672 01:28:41 -- 
accel/accel.sh@20 -- # read -r var val 00:07:28.672 01:28:41 -- accel/accel.sh@21 -- # val=1 00:07:28.672 01:28:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.672 01:28:41 -- accel/accel.sh@20 -- # IFS=: 00:07:28.672 01:28:41 -- accel/accel.sh@20 -- # read -r var val 00:07:28.672 01:28:41 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:28.672 01:28:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.672 01:28:41 -- accel/accel.sh@20 -- # IFS=: 00:07:28.672 01:28:41 -- accel/accel.sh@20 -- # read -r var val 00:07:28.672 01:28:41 -- accel/accel.sh@21 -- # val=No 00:07:28.672 01:28:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.672 01:28:41 -- accel/accel.sh@20 -- # IFS=: 00:07:28.672 01:28:41 -- accel/accel.sh@20 -- # read -r var val 00:07:28.672 01:28:41 -- accel/accel.sh@21 -- # val= 00:07:28.672 01:28:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.672 01:28:41 -- accel/accel.sh@20 -- # IFS=: 00:07:28.672 01:28:41 -- accel/accel.sh@20 -- # read -r var val 00:07:28.672 01:28:41 -- accel/accel.sh@21 -- # val= 00:07:28.672 01:28:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.672 01:28:41 -- accel/accel.sh@20 -- # IFS=: 00:07:28.672 01:28:41 -- accel/accel.sh@20 -- # read -r var val 00:07:30.057 01:28:42 -- accel/accel.sh@21 -- # val= 00:07:30.057 01:28:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.057 01:28:42 -- accel/accel.sh@20 -- # IFS=: 00:07:30.057 01:28:42 -- accel/accel.sh@20 -- # read -r var val 00:07:30.057 01:28:42 -- accel/accel.sh@21 -- # val= 00:07:30.057 01:28:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.057 01:28:42 -- accel/accel.sh@20 -- # IFS=: 00:07:30.057 01:28:42 -- accel/accel.sh@20 -- # read -r var val 00:07:30.057 01:28:42 -- accel/accel.sh@21 -- # val= 00:07:30.057 01:28:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.057 01:28:42 -- accel/accel.sh@20 -- # IFS=: 00:07:30.057 01:28:42 -- accel/accel.sh@20 -- # read -r var val 00:07:30.057 01:28:42 -- accel/accel.sh@21 -- # val= 00:07:30.057 01:28:42 
-- accel/accel.sh@22 -- # case "$var" in 00:07:30.057 01:28:42 -- accel/accel.sh@20 -- # IFS=: 00:07:30.057 01:28:42 -- accel/accel.sh@20 -- # read -r var val 00:07:30.057 01:28:42 -- accel/accel.sh@21 -- # val= 00:07:30.057 01:28:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.057 01:28:42 -- accel/accel.sh@20 -- # IFS=: 00:07:30.057 01:28:42 -- accel/accel.sh@20 -- # read -r var val 00:07:30.057 01:28:42 -- accel/accel.sh@21 -- # val= 00:07:30.057 01:28:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.057 01:28:42 -- accel/accel.sh@20 -- # IFS=: 00:07:30.057 01:28:42 -- accel/accel.sh@20 -- # read -r var val 00:07:30.057 01:28:42 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:30.057 01:28:42 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:07:30.057 01:28:42 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:30.057 00:07:30.057 real 0m2.821s 00:07:30.057 user 0m2.524s 00:07:30.057 sys 0m0.291s 00:07:30.057 01:28:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:30.057 01:28:42 -- common/autotest_common.sh@10 -- # set +x 00:07:30.057 ************************************ 00:07:30.057 END TEST accel_dif_verify 00:07:30.057 ************************************ 00:07:30.057 01:28:42 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:30.057 01:28:42 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:07:30.057 01:28:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:30.057 01:28:42 -- common/autotest_common.sh@10 -- # set +x 00:07:30.057 ************************************ 00:07:30.057 START TEST accel_dif_generate 00:07:30.057 ************************************ 00:07:30.057 01:28:42 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate 00:07:30.057 01:28:42 -- accel/accel.sh@16 -- # local accel_opc 00:07:30.057 01:28:42 -- accel/accel.sh@17 -- # local accel_module 00:07:30.057 01:28:42 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 
00:07:30.057 01:28:42 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:30.057 01:28:42 -- accel/accel.sh@12 -- # build_accel_config 00:07:30.057 01:28:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:30.057 01:28:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:30.057 01:28:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:30.057 01:28:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:30.057 01:28:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:30.057 01:28:42 -- accel/accel.sh@41 -- # local IFS=, 00:07:30.057 01:28:42 -- accel/accel.sh@42 -- # jq -r . 00:07:30.057 [2024-07-23 01:28:42.759755] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:30.057 [2024-07-23 01:28:42.759837] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3669083 ] 00:07:30.057 EAL: No free 2048 kB hugepages reported on node 1 00:07:30.057 [2024-07-23 01:28:42.821688] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.057 [2024-07-23 01:28:42.912447] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.431 01:28:44 -- accel/accel.sh@18 -- # out=' 00:07:31.431 SPDK Configuration: 00:07:31.431 Core mask: 0x1 00:07:31.431 00:07:31.431 Accel Perf Configuration: 00:07:31.431 Workload Type: dif_generate 00:07:31.431 Vector size: 4096 bytes 00:07:31.432 Transfer size: 4096 bytes 00:07:31.432 Block size: 512 bytes 00:07:31.432 Metadata size: 8 bytes 00:07:31.432 Vector count 1 00:07:31.432 Module: software 00:07:31.432 Queue depth: 32 00:07:31.432 Allocate depth: 32 00:07:31.432 # threads/core: 1 00:07:31.432 Run time: 1 seconds 00:07:31.432 Verify: No 00:07:31.432 00:07:31.432 Running for 1 seconds... 
00:07:31.432 00:07:31.432 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:31.432 ------------------------------------------------------------------------------------ 00:07:31.432 0,0 96288/s 376 MiB/s 0 0 00:07:31.432 ==================================================================================== 00:07:31.432 Total 96288/s 376 MiB/s 0 0' 00:07:31.432 01:28:44 -- accel/accel.sh@20 -- # IFS=: 00:07:31.432 01:28:44 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:31.432 01:28:44 -- accel/accel.sh@20 -- # read -r var val 00:07:31.432 01:28:44 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:31.432 01:28:44 -- accel/accel.sh@12 -- # build_accel_config 00:07:31.432 01:28:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:31.432 01:28:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:31.432 01:28:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:31.432 01:28:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:31.432 01:28:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:31.432 01:28:44 -- accel/accel.sh@41 -- # local IFS=, 00:07:31.432 01:28:44 -- accel/accel.sh@42 -- # jq -r . 00:07:31.432 [2024-07-23 01:28:44.163854] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
00:07:31.432 [2024-07-23 01:28:44.163953] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3669230 ] 00:07:31.432 EAL: No free 2048 kB hugepages reported on node 1 00:07:31.432 [2024-07-23 01:28:44.227990] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.432 [2024-07-23 01:28:44.318505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.432 01:28:44 -- accel/accel.sh@21 -- # val= 00:07:31.432 01:28:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.432 01:28:44 -- accel/accel.sh@20 -- # IFS=: 00:07:31.432 01:28:44 -- accel/accel.sh@20 -- # read -r var val 00:07:31.432 01:28:44 -- accel/accel.sh@21 -- # val= 00:07:31.432 01:28:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.432 01:28:44 -- accel/accel.sh@20 -- # IFS=: 00:07:31.432 01:28:44 -- accel/accel.sh@20 -- # read -r var val 00:07:31.432 01:28:44 -- accel/accel.sh@21 -- # val=0x1 00:07:31.432 01:28:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.432 01:28:44 -- accel/accel.sh@20 -- # IFS=: 00:07:31.432 01:28:44 -- accel/accel.sh@20 -- # read -r var val 00:07:31.432 01:28:44 -- accel/accel.sh@21 -- # val= 00:07:31.432 01:28:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.432 01:28:44 -- accel/accel.sh@20 -- # IFS=: 00:07:31.432 01:28:44 -- accel/accel.sh@20 -- # read -r var val 00:07:31.432 01:28:44 -- accel/accel.sh@21 -- # val= 00:07:31.432 01:28:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.432 01:28:44 -- accel/accel.sh@20 -- # IFS=: 00:07:31.432 01:28:44 -- accel/accel.sh@20 -- # read -r var val 00:07:31.432 01:28:44 -- accel/accel.sh@21 -- # val=dif_generate 00:07:31.432 01:28:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.432 01:28:44 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:07:31.432 01:28:44 -- accel/accel.sh@20 -- # IFS=: 00:07:31.432 01:28:44 
-- accel/accel.sh@20 -- # read -r var val 00:07:31.432 01:28:44 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:31.432 01:28:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.432 01:28:44 -- accel/accel.sh@20 -- # IFS=: 00:07:31.432 01:28:44 -- accel/accel.sh@20 -- # read -r var val 00:07:31.432 01:28:44 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:31.432 01:28:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.432 01:28:44 -- accel/accel.sh@20 -- # IFS=: 00:07:31.432 01:28:44 -- accel/accel.sh@20 -- # read -r var val 00:07:31.432 01:28:44 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:31.432 01:28:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.432 01:28:44 -- accel/accel.sh@20 -- # IFS=: 00:07:31.432 01:28:44 -- accel/accel.sh@20 -- # read -r var val 00:07:31.432 01:28:44 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:31.432 01:28:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.432 01:28:44 -- accel/accel.sh@20 -- # IFS=: 00:07:31.432 01:28:44 -- accel/accel.sh@20 -- # read -r var val 00:07:31.432 01:28:44 -- accel/accel.sh@21 -- # val= 00:07:31.432 01:28:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.432 01:28:44 -- accel/accel.sh@20 -- # IFS=: 00:07:31.432 01:28:44 -- accel/accel.sh@20 -- # read -r var val 00:07:31.432 01:28:44 -- accel/accel.sh@21 -- # val=software 00:07:31.432 01:28:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.432 01:28:44 -- accel/accel.sh@23 -- # accel_module=software 00:07:31.432 01:28:44 -- accel/accel.sh@20 -- # IFS=: 00:07:31.432 01:28:44 -- accel/accel.sh@20 -- # read -r var val 00:07:31.432 01:28:44 -- accel/accel.sh@21 -- # val=32 00:07:31.432 01:28:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.432 01:28:44 -- accel/accel.sh@20 -- # IFS=: 00:07:31.432 01:28:44 -- accel/accel.sh@20 -- # read -r var val 00:07:31.432 01:28:44 -- accel/accel.sh@21 -- # val=32 00:07:31.432 01:28:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.432 01:28:44 -- accel/accel.sh@20 -- # IFS=: 00:07:31.432 01:28:44 
-- accel/accel.sh@20 -- # read -r var val 00:07:31.432 01:28:44 -- accel/accel.sh@21 -- # val=1 00:07:31.432 01:28:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.432 01:28:44 -- accel/accel.sh@20 -- # IFS=: 00:07:31.432 01:28:44 -- accel/accel.sh@20 -- # read -r var val 00:07:31.432 01:28:44 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:31.432 01:28:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.432 01:28:44 -- accel/accel.sh@20 -- # IFS=: 00:07:31.432 01:28:44 -- accel/accel.sh@20 -- # read -r var val 00:07:31.432 01:28:44 -- accel/accel.sh@21 -- # val=No 00:07:31.432 01:28:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.432 01:28:44 -- accel/accel.sh@20 -- # IFS=: 00:07:31.432 01:28:44 -- accel/accel.sh@20 -- # read -r var val 00:07:31.432 01:28:44 -- accel/accel.sh@21 -- # val= 00:07:31.432 01:28:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.432 01:28:44 -- accel/accel.sh@20 -- # IFS=: 00:07:31.432 01:28:44 -- accel/accel.sh@20 -- # read -r var val 00:07:31.432 01:28:44 -- accel/accel.sh@21 -- # val= 00:07:31.432 01:28:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.432 01:28:44 -- accel/accel.sh@20 -- # IFS=: 00:07:31.432 01:28:44 -- accel/accel.sh@20 -- # read -r var val 00:07:32.853 01:28:45 -- accel/accel.sh@21 -- # val= 00:07:32.853 01:28:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.853 01:28:45 -- accel/accel.sh@20 -- # IFS=: 00:07:32.853 01:28:45 -- accel/accel.sh@20 -- # read -r var val 00:07:32.853 01:28:45 -- accel/accel.sh@21 -- # val= 00:07:32.853 01:28:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.853 01:28:45 -- accel/accel.sh@20 -- # IFS=: 00:07:32.853 01:28:45 -- accel/accel.sh@20 -- # read -r var val 00:07:32.853 01:28:45 -- accel/accel.sh@21 -- # val= 00:07:32.853 01:28:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.853 01:28:45 -- accel/accel.sh@20 -- # IFS=: 00:07:32.853 01:28:45 -- accel/accel.sh@20 -- # read -r var val 00:07:32.853 01:28:45 -- accel/accel.sh@21 -- # val= 00:07:32.853 
01:28:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.853 01:28:45 -- accel/accel.sh@20 -- # IFS=: 00:07:32.853 01:28:45 -- accel/accel.sh@20 -- # read -r var val 00:07:32.853 01:28:45 -- accel/accel.sh@21 -- # val= 00:07:32.853 01:28:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.853 01:28:45 -- accel/accel.sh@20 -- # IFS=: 00:07:32.853 01:28:45 -- accel/accel.sh@20 -- # read -r var val 00:07:32.853 01:28:45 -- accel/accel.sh@21 -- # val= 00:07:32.853 01:28:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.853 01:28:45 -- accel/accel.sh@20 -- # IFS=: 00:07:32.853 01:28:45 -- accel/accel.sh@20 -- # read -r var val 00:07:32.853 01:28:45 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:32.853 01:28:45 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:07:32.853 01:28:45 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:32.853 00:07:32.853 real 0m2.816s 00:07:32.853 user 0m2.525s 00:07:32.853 sys 0m0.286s 00:07:32.853 01:28:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:32.853 01:28:45 -- common/autotest_common.sh@10 -- # set +x 00:07:32.853 ************************************ 00:07:32.853 END TEST accel_dif_generate 00:07:32.853 ************************************ 00:07:32.853 01:28:45 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:32.853 01:28:45 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:07:32.853 01:28:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:32.853 01:28:45 -- common/autotest_common.sh@10 -- # set +x 00:07:32.853 ************************************ 00:07:32.853 START TEST accel_dif_generate_copy 00:07:32.853 ************************************ 00:07:32.853 01:28:45 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate_copy 00:07:32.853 01:28:45 -- accel/accel.sh@16 -- # local accel_opc 00:07:32.853 01:28:45 -- accel/accel.sh@17 -- # local accel_module 00:07:32.853 01:28:45 -- accel/accel.sh@18 -- # 
accel_perf -t 1 -w dif_generate_copy 00:07:32.853 01:28:45 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:32.853 01:28:45 -- accel/accel.sh@12 -- # build_accel_config 00:07:32.853 01:28:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:32.853 01:28:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:32.853 01:28:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:32.853 01:28:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:32.853 01:28:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:32.853 01:28:45 -- accel/accel.sh@41 -- # local IFS=, 00:07:32.853 01:28:45 -- accel/accel.sh@42 -- # jq -r . 00:07:32.853 [2024-07-23 01:28:45.603575] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:32.853 [2024-07-23 01:28:45.603662] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3669388 ] 00:07:32.853 EAL: No free 2048 kB hugepages reported on node 1 00:07:32.853 [2024-07-23 01:28:45.667817] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.853 [2024-07-23 01:28:45.759269] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.229 01:28:46 -- accel/accel.sh@18 -- # out=' 00:07:34.229 SPDK Configuration: 00:07:34.229 Core mask: 0x1 00:07:34.229 00:07:34.229 Accel Perf Configuration: 00:07:34.229 Workload Type: dif_generate_copy 00:07:34.229 Vector size: 4096 bytes 00:07:34.229 Transfer size: 4096 bytes 00:07:34.229 Vector count 1 00:07:34.229 Module: software 00:07:34.229 Queue depth: 32 00:07:34.229 Allocate depth: 32 00:07:34.229 # threads/core: 1 00:07:34.229 Run time: 1 seconds 00:07:34.229 Verify: No 00:07:34.229 00:07:34.229 Running for 1 seconds... 
00:07:34.229 00:07:34.229 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:34.229 ------------------------------------------------------------------------------------ 00:07:34.229 0,0 75776/s 300 MiB/s 0 0 00:07:34.229 ==================================================================================== 00:07:34.229 Total 75776/s 296 MiB/s 0 0' 00:07:34.229 01:28:46 -- accel/accel.sh@20 -- # IFS=: 00:07:34.229 01:28:46 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:34.229 01:28:46 -- accel/accel.sh@20 -- # read -r var val 00:07:34.229 01:28:46 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:34.229 01:28:46 -- accel/accel.sh@12 -- # build_accel_config 00:07:34.229 01:28:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:34.229 01:28:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:34.229 01:28:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:34.229 01:28:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:34.229 01:28:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:34.229 01:28:46 -- accel/accel.sh@41 -- # local IFS=, 00:07:34.229 01:28:46 -- accel/accel.sh@42 -- # jq -r . 00:07:34.229 [2024-07-23 01:28:47.010297] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:07:34.229 [2024-07-23 01:28:47.010380] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3669655 ] 00:07:34.229 EAL: No free 2048 kB hugepages reported on node 1 00:07:34.229 [2024-07-23 01:28:47.075034] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.229 [2024-07-23 01:28:47.165575] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.229 01:28:47 -- accel/accel.sh@21 -- # val= 00:07:34.229 01:28:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.229 01:28:47 -- accel/accel.sh@20 -- # IFS=: 00:07:34.229 01:28:47 -- accel/accel.sh@20 -- # read -r var val 00:07:34.229 01:28:47 -- accel/accel.sh@21 -- # val= 00:07:34.229 01:28:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.229 01:28:47 -- accel/accel.sh@20 -- # IFS=: 00:07:34.229 01:28:47 -- accel/accel.sh@20 -- # read -r var val 00:07:34.229 01:28:47 -- accel/accel.sh@21 -- # val=0x1 00:07:34.229 01:28:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.229 01:28:47 -- accel/accel.sh@20 -- # IFS=: 00:07:34.229 01:28:47 -- accel/accel.sh@20 -- # read -r var val 00:07:34.229 01:28:47 -- accel/accel.sh@21 -- # val= 00:07:34.229 01:28:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.229 01:28:47 -- accel/accel.sh@20 -- # IFS=: 00:07:34.229 01:28:47 -- accel/accel.sh@20 -- # read -r var val 00:07:34.229 01:28:47 -- accel/accel.sh@21 -- # val= 00:07:34.229 01:28:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.229 01:28:47 -- accel/accel.sh@20 -- # IFS=: 00:07:34.229 01:28:47 -- accel/accel.sh@20 -- # read -r var val 00:07:34.229 01:28:47 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:07:34.229 01:28:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.229 01:28:47 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:07:34.229 01:28:47 -- accel/accel.sh@20 -- # IFS=: 00:07:34.229 
01:28:47 -- accel/accel.sh@20 -- # read -r var val 00:07:34.229 01:28:47 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:34.229 01:28:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.230 01:28:47 -- accel/accel.sh@20 -- # IFS=: 00:07:34.230 01:28:47 -- accel/accel.sh@20 -- # read -r var val 00:07:34.230 01:28:47 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:34.230 01:28:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.230 01:28:47 -- accel/accel.sh@20 -- # IFS=: 00:07:34.230 01:28:47 -- accel/accel.sh@20 -- # read -r var val 00:07:34.230 01:28:47 -- accel/accel.sh@21 -- # val= 00:07:34.230 01:28:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.230 01:28:47 -- accel/accel.sh@20 -- # IFS=: 00:07:34.230 01:28:47 -- accel/accel.sh@20 -- # read -r var val 00:07:34.230 01:28:47 -- accel/accel.sh@21 -- # val=software 00:07:34.230 01:28:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.230 01:28:47 -- accel/accel.sh@23 -- # accel_module=software 00:07:34.230 01:28:47 -- accel/accel.sh@20 -- # IFS=: 00:07:34.230 01:28:47 -- accel/accel.sh@20 -- # read -r var val 00:07:34.230 01:28:47 -- accel/accel.sh@21 -- # val=32 00:07:34.230 01:28:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.230 01:28:47 -- accel/accel.sh@20 -- # IFS=: 00:07:34.230 01:28:47 -- accel/accel.sh@20 -- # read -r var val 00:07:34.230 01:28:47 -- accel/accel.sh@21 -- # val=32 00:07:34.230 01:28:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.230 01:28:47 -- accel/accel.sh@20 -- # IFS=: 00:07:34.230 01:28:47 -- accel/accel.sh@20 -- # read -r var val 00:07:34.230 01:28:47 -- accel/accel.sh@21 -- # val=1 00:07:34.230 01:28:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.230 01:28:47 -- accel/accel.sh@20 -- # IFS=: 00:07:34.230 01:28:47 -- accel/accel.sh@20 -- # read -r var val 00:07:34.230 01:28:47 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:34.230 01:28:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.230 01:28:47 -- accel/accel.sh@20 -- # IFS=: 00:07:34.230 01:28:47 
-- accel/accel.sh@20 -- # read -r var val 00:07:34.230 01:28:47 -- accel/accel.sh@21 -- # val=No 00:07:34.230 01:28:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.230 01:28:47 -- accel/accel.sh@20 -- # IFS=: 00:07:34.230 01:28:47 -- accel/accel.sh@20 -- # read -r var val 00:07:34.230 01:28:47 -- accel/accel.sh@21 -- # val= 00:07:34.230 01:28:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.230 01:28:47 -- accel/accel.sh@20 -- # IFS=: 00:07:34.230 01:28:47 -- accel/accel.sh@20 -- # read -r var val 00:07:34.230 01:28:47 -- accel/accel.sh@21 -- # val= 00:07:34.230 01:28:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.230 01:28:47 -- accel/accel.sh@20 -- # IFS=: 00:07:34.230 01:28:47 -- accel/accel.sh@20 -- # read -r var val 00:07:35.603 01:28:48 -- accel/accel.sh@21 -- # val= 00:07:35.603 01:28:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.603 01:28:48 -- accel/accel.sh@20 -- # IFS=: 00:07:35.603 01:28:48 -- accel/accel.sh@20 -- # read -r var val 00:07:35.603 01:28:48 -- accel/accel.sh@21 -- # val= 00:07:35.603 01:28:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.603 01:28:48 -- accel/accel.sh@20 -- # IFS=: 00:07:35.603 01:28:48 -- accel/accel.sh@20 -- # read -r var val 00:07:35.603 01:28:48 -- accel/accel.sh@21 -- # val= 00:07:35.603 01:28:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.603 01:28:48 -- accel/accel.sh@20 -- # IFS=: 00:07:35.603 01:28:48 -- accel/accel.sh@20 -- # read -r var val 00:07:35.603 01:28:48 -- accel/accel.sh@21 -- # val= 00:07:35.603 01:28:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.603 01:28:48 -- accel/accel.sh@20 -- # IFS=: 00:07:35.603 01:28:48 -- accel/accel.sh@20 -- # read -r var val 00:07:35.603 01:28:48 -- accel/accel.sh@21 -- # val= 00:07:35.603 01:28:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.603 01:28:48 -- accel/accel.sh@20 -- # IFS=: 00:07:35.603 01:28:48 -- accel/accel.sh@20 -- # read -r var val 00:07:35.603 01:28:48 -- accel/accel.sh@21 -- # val= 00:07:35.603 01:28:48 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:35.603 01:28:48 -- accel/accel.sh@20 -- # IFS=: 00:07:35.603 01:28:48 -- accel/accel.sh@20 -- # read -r var val 00:07:35.603 01:28:48 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:35.603 01:28:48 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:07:35.603 01:28:48 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:35.603 00:07:35.603 real 0m2.812s 00:07:35.603 user 0m2.521s 00:07:35.603 sys 0m0.282s 00:07:35.603 01:28:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:35.603 01:28:48 -- common/autotest_common.sh@10 -- # set +x 00:07:35.603 ************************************ 00:07:35.603 END TEST accel_dif_generate_copy 00:07:35.603 ************************************ 00:07:35.603 01:28:48 -- accel/accel.sh@107 -- # [[ y == y ]] 00:07:35.603 01:28:48 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:35.603 01:28:48 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:07:35.603 01:28:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:35.603 01:28:48 -- common/autotest_common.sh@10 -- # set +x 00:07:35.603 ************************************ 00:07:35.603 START TEST accel_comp 00:07:35.603 ************************************ 00:07:35.603 01:28:48 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:35.603 01:28:48 -- accel/accel.sh@16 -- # local accel_opc 00:07:35.603 01:28:48 -- accel/accel.sh@17 -- # local accel_module 00:07:35.603 01:28:48 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:35.603 01:28:48 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 
00:07:35.603 01:28:48 -- accel/accel.sh@12 -- # build_accel_config 00:07:35.603 01:28:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:35.603 01:28:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:35.603 01:28:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:35.603 01:28:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:35.603 01:28:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:35.603 01:28:48 -- accel/accel.sh@41 -- # local IFS=, 00:07:35.603 01:28:48 -- accel/accel.sh@42 -- # jq -r . 00:07:35.603 [2024-07-23 01:28:48.440981] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:35.603 [2024-07-23 01:28:48.441056] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3669812 ] 00:07:35.603 EAL: No free 2048 kB hugepages reported on node 1 00:07:35.603 [2024-07-23 01:28:48.502829] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.603 [2024-07-23 01:28:48.594399] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.978 01:28:49 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:36.978 00:07:36.978 SPDK Configuration: 00:07:36.978 Core mask: 0x1 00:07:36.978 00:07:36.978 Accel Perf Configuration: 00:07:36.978 Workload Type: compress 00:07:36.978 Transfer size: 4096 bytes 00:07:36.978 Vector count 1 00:07:36.978 Module: software 00:07:36.978 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:36.978 Queue depth: 32 00:07:36.978 Allocate depth: 32 00:07:36.978 # threads/core: 1 00:07:36.978 Run time: 1 seconds 00:07:36.978 Verify: No 00:07:36.978 00:07:36.978 Running for 1 seconds... 
00:07:36.978 00:07:36.978 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:36.978 ------------------------------------------------------------------------------------ 00:07:36.978 0,0 32512/s 135 MiB/s 0 0 00:07:36.978 ==================================================================================== 00:07:36.978 Total 32512/s 127 MiB/s 0 0' 00:07:36.978 01:28:49 -- accel/accel.sh@20 -- # IFS=: 00:07:36.978 01:28:49 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:36.978 01:28:49 -- accel/accel.sh@20 -- # read -r var val 00:07:36.978 01:28:49 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:36.978 01:28:49 -- accel/accel.sh@12 -- # build_accel_config 00:07:36.978 01:28:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:36.978 01:28:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:36.978 01:28:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:36.978 01:28:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:36.978 01:28:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:36.978 01:28:49 -- accel/accel.sh@41 -- # local IFS=, 00:07:36.978 01:28:49 -- accel/accel.sh@42 -- # jq -r . 00:07:36.978 [2024-07-23 01:28:49.852712] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:07:36.978 [2024-07-23 01:28:49.852792] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3669953 ] 00:07:36.978 EAL: No free 2048 kB hugepages reported on node 1 00:07:36.978 [2024-07-23 01:28:49.917420] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.978 [2024-07-23 01:28:50.009642] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.978 01:28:50 -- accel/accel.sh@21 -- # val= 00:07:36.978 01:28:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.978 01:28:50 -- accel/accel.sh@20 -- # IFS=: 00:07:36.978 01:28:50 -- accel/accel.sh@20 -- # read -r var val 00:07:36.978 01:28:50 -- accel/accel.sh@21 -- # val= 00:07:36.978 01:28:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.978 01:28:50 -- accel/accel.sh@20 -- # IFS=: 00:07:36.978 01:28:50 -- accel/accel.sh@20 -- # read -r var val 00:07:37.236 01:28:50 -- accel/accel.sh@21 -- # val= 00:07:37.236 01:28:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.236 01:28:50 -- accel/accel.sh@20 -- # IFS=: 00:07:37.236 01:28:50 -- accel/accel.sh@20 -- # read -r var val 00:07:37.236 01:28:50 -- accel/accel.sh@21 -- # val=0x1 00:07:37.236 01:28:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.236 01:28:50 -- accel/accel.sh@20 -- # IFS=: 00:07:37.236 01:28:50 -- accel/accel.sh@20 -- # read -r var val 00:07:37.236 01:28:50 -- accel/accel.sh@21 -- # val= 00:07:37.236 01:28:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.236 01:28:50 -- accel/accel.sh@20 -- # IFS=: 00:07:37.236 01:28:50 -- accel/accel.sh@20 -- # read -r var val 00:07:37.236 01:28:50 -- accel/accel.sh@21 -- # val= 00:07:37.236 01:28:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.236 01:28:50 -- accel/accel.sh@20 -- # IFS=: 00:07:37.236 01:28:50 -- accel/accel.sh@20 -- # read -r var val 00:07:37.236 01:28:50 -- accel/accel.sh@21 
-- # val=compress 00:07:37.236 01:28:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.236 01:28:50 -- accel/accel.sh@24 -- # accel_opc=compress 00:07:37.236 01:28:50 -- accel/accel.sh@20 -- # IFS=: 00:07:37.236 01:28:50 -- accel/accel.sh@20 -- # read -r var val 00:07:37.236 01:28:50 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:37.236 01:28:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.236 01:28:50 -- accel/accel.sh@20 -- # IFS=: 00:07:37.236 01:28:50 -- accel/accel.sh@20 -- # read -r var val 00:07:37.236 01:28:50 -- accel/accel.sh@21 -- # val= 00:07:37.236 01:28:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.237 01:28:50 -- accel/accel.sh@20 -- # IFS=: 00:07:37.237 01:28:50 -- accel/accel.sh@20 -- # read -r var val 00:07:37.237 01:28:50 -- accel/accel.sh@21 -- # val=software 00:07:37.237 01:28:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.237 01:28:50 -- accel/accel.sh@23 -- # accel_module=software 00:07:37.237 01:28:50 -- accel/accel.sh@20 -- # IFS=: 00:07:37.237 01:28:50 -- accel/accel.sh@20 -- # read -r var val 00:07:37.237 01:28:50 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:37.237 01:28:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.237 01:28:50 -- accel/accel.sh@20 -- # IFS=: 00:07:37.237 01:28:50 -- accel/accel.sh@20 -- # read -r var val 00:07:37.237 01:28:50 -- accel/accel.sh@21 -- # val=32 00:07:37.237 01:28:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.237 01:28:50 -- accel/accel.sh@20 -- # IFS=: 00:07:37.237 01:28:50 -- accel/accel.sh@20 -- # read -r var val 00:07:37.237 01:28:50 -- accel/accel.sh@21 -- # val=32 00:07:37.237 01:28:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.237 01:28:50 -- accel/accel.sh@20 -- # IFS=: 00:07:37.237 01:28:50 -- accel/accel.sh@20 -- # read -r var val 00:07:37.237 01:28:50 -- accel/accel.sh@21 -- # val=1 00:07:37.237 01:28:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.237 01:28:50 -- accel/accel.sh@20 -- # IFS=: 
00:07:37.237 01:28:50 -- accel/accel.sh@20 -- # read -r var val 00:07:37.237 01:28:50 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:37.237 01:28:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.237 01:28:50 -- accel/accel.sh@20 -- # IFS=: 00:07:37.237 01:28:50 -- accel/accel.sh@20 -- # read -r var val 00:07:37.237 01:28:50 -- accel/accel.sh@21 -- # val=No 00:07:37.237 01:28:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.237 01:28:50 -- accel/accel.sh@20 -- # IFS=: 00:07:37.237 01:28:50 -- accel/accel.sh@20 -- # read -r var val 00:07:37.237 01:28:50 -- accel/accel.sh@21 -- # val= 00:07:37.237 01:28:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.237 01:28:50 -- accel/accel.sh@20 -- # IFS=: 00:07:37.237 01:28:50 -- accel/accel.sh@20 -- # read -r var val 00:07:37.237 01:28:50 -- accel/accel.sh@21 -- # val= 00:07:37.237 01:28:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.237 01:28:50 -- accel/accel.sh@20 -- # IFS=: 00:07:37.237 01:28:50 -- accel/accel.sh@20 -- # read -r var val 00:07:38.171 01:28:51 -- accel/accel.sh@21 -- # val= 00:07:38.171 01:28:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.171 01:28:51 -- accel/accel.sh@20 -- # IFS=: 00:07:38.171 01:28:51 -- accel/accel.sh@20 -- # read -r var val 00:07:38.171 01:28:51 -- accel/accel.sh@21 -- # val= 00:07:38.171 01:28:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.171 01:28:51 -- accel/accel.sh@20 -- # IFS=: 00:07:38.171 01:28:51 -- accel/accel.sh@20 -- # read -r var val 00:07:38.171 01:28:51 -- accel/accel.sh@21 -- # val= 00:07:38.171 01:28:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.171 01:28:51 -- accel/accel.sh@20 -- # IFS=: 00:07:38.171 01:28:51 -- accel/accel.sh@20 -- # read -r var val 00:07:38.171 01:28:51 -- accel/accel.sh@21 -- # val= 00:07:38.171 01:28:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.171 01:28:51 -- accel/accel.sh@20 -- # IFS=: 00:07:38.171 01:28:51 -- accel/accel.sh@20 -- # read -r var val 00:07:38.171 01:28:51 -- accel/accel.sh@21 -- # 
val= 00:07:38.171 01:28:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.171 01:28:51 -- accel/accel.sh@20 -- # IFS=: 00:07:38.171 01:28:51 -- accel/accel.sh@20 -- # read -r var val 00:07:38.171 01:28:51 -- accel/accel.sh@21 -- # val= 00:07:38.171 01:28:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.171 01:28:51 -- accel/accel.sh@20 -- # IFS=: 00:07:38.171 01:28:51 -- accel/accel.sh@20 -- # read -r var val 00:07:38.171 01:28:51 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:38.171 01:28:51 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:07:38.171 01:28:51 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:38.171 00:07:38.171 real 0m2.829s 00:07:38.171 user 0m2.527s 00:07:38.171 sys 0m0.296s 00:07:38.171 01:28:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:38.171 01:28:51 -- common/autotest_common.sh@10 -- # set +x 00:07:38.171 ************************************ 00:07:38.171 END TEST accel_comp 00:07:38.171 ************************************ 00:07:38.430 01:28:51 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:38.430 01:28:51 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:07:38.430 01:28:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:38.430 01:28:51 -- common/autotest_common.sh@10 -- # set +x 00:07:38.430 ************************************ 00:07:38.430 START TEST accel_decomp 00:07:38.430 ************************************ 00:07:38.430 01:28:51 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:38.430 01:28:51 -- accel/accel.sh@16 -- # local accel_opc 00:07:38.430 01:28:51 -- accel/accel.sh@17 -- # local accel_module 00:07:38.430 01:28:51 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:38.430 01:28:51 -- 
accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:38.430 01:28:51 -- accel/accel.sh@12 -- # build_accel_config 00:07:38.430 01:28:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:38.430 01:28:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:38.430 01:28:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:38.430 01:28:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:38.430 01:28:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:38.430 01:28:51 -- accel/accel.sh@41 -- # local IFS=, 00:07:38.430 01:28:51 -- accel/accel.sh@42 -- # jq -r . 00:07:38.430 [2024-07-23 01:28:51.290957] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:38.430 [2024-07-23 01:28:51.291036] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3670132 ] 00:07:38.430 EAL: No free 2048 kB hugepages reported on node 1 00:07:38.430 [2024-07-23 01:28:51.355307] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.430 [2024-07-23 01:28:51.446068] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.802 01:28:52 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:39.802 00:07:39.802 SPDK Configuration: 00:07:39.802 Core mask: 0x1 00:07:39.802 00:07:39.802 Accel Perf Configuration: 00:07:39.802 Workload Type: decompress 00:07:39.802 Transfer size: 4096 bytes 00:07:39.802 Vector count 1 00:07:39.802 Module: software 00:07:39.802 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:39.802 Queue depth: 32 00:07:39.802 Allocate depth: 32 00:07:39.802 # threads/core: 1 00:07:39.802 Run time: 1 seconds 00:07:39.802 Verify: Yes 00:07:39.802 00:07:39.802 Running for 1 seconds... 
00:07:39.802 00:07:39.803 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:39.803 ------------------------------------------------------------------------------------ 00:07:39.803 0,0 55744/s 102 MiB/s 0 0 00:07:39.803 ==================================================================================== 00:07:39.803 Total 55744/s 217 MiB/s 0 0' 00:07:39.803 01:28:52 -- accel/accel.sh@20 -- # IFS=: 00:07:39.803 01:28:52 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:39.803 01:28:52 -- accel/accel.sh@20 -- # read -r var val 00:07:39.803 01:28:52 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:39.803 01:28:52 -- accel/accel.sh@12 -- # build_accel_config 00:07:39.803 01:28:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:39.803 01:28:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:39.803 01:28:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:39.803 01:28:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:39.803 01:28:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:39.803 01:28:52 -- accel/accel.sh@41 -- # local IFS=, 00:07:39.803 01:28:52 -- accel/accel.sh@42 -- # jq -r . 00:07:39.803 [2024-07-23 01:28:52.695057] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:07:39.803 [2024-07-23 01:28:52.695140] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3670376 ] 00:07:39.803 EAL: No free 2048 kB hugepages reported on node 1 00:07:39.803 [2024-07-23 01:28:52.757335] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.803 [2024-07-23 01:28:52.847978] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.059 01:28:52 -- accel/accel.sh@21 -- # val= 00:07:40.059 01:28:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.059 01:28:52 -- accel/accel.sh@20 -- # IFS=: 00:07:40.059 01:28:52 -- accel/accel.sh@20 -- # read -r var val 00:07:40.059 01:28:52 -- accel/accel.sh@21 -- # val= 00:07:40.059 01:28:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.059 01:28:52 -- accel/accel.sh@20 -- # IFS=: 00:07:40.059 01:28:52 -- accel/accel.sh@20 -- # read -r var val 00:07:40.059 01:28:52 -- accel/accel.sh@21 -- # val= 00:07:40.059 01:28:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.059 01:28:52 -- accel/accel.sh@20 -- # IFS=: 00:07:40.059 01:28:52 -- accel/accel.sh@20 -- # read -r var val 00:07:40.059 01:28:52 -- accel/accel.sh@21 -- # val=0x1 00:07:40.059 01:28:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.059 01:28:52 -- accel/accel.sh@20 -- # IFS=: 00:07:40.059 01:28:52 -- accel/accel.sh@20 -- # read -r var val 00:07:40.059 01:28:52 -- accel/accel.sh@21 -- # val= 00:07:40.059 01:28:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.059 01:28:52 -- accel/accel.sh@20 -- # IFS=: 00:07:40.059 01:28:52 -- accel/accel.sh@20 -- # read -r var val 00:07:40.059 01:28:52 -- accel/accel.sh@21 -- # val= 00:07:40.059 01:28:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.059 01:28:52 -- accel/accel.sh@20 -- # IFS=: 00:07:40.059 01:28:52 -- accel/accel.sh@20 -- # read -r var val 00:07:40.059 01:28:52 -- accel/accel.sh@21 
-- # val=decompress 00:07:40.059 01:28:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.059 01:28:52 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:40.059 01:28:52 -- accel/accel.sh@20 -- # IFS=: 00:07:40.059 01:28:52 -- accel/accel.sh@20 -- # read -r var val 00:07:40.059 01:28:52 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:40.059 01:28:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.059 01:28:52 -- accel/accel.sh@20 -- # IFS=: 00:07:40.059 01:28:52 -- accel/accel.sh@20 -- # read -r var val 00:07:40.059 01:28:52 -- accel/accel.sh@21 -- # val= 00:07:40.059 01:28:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.059 01:28:52 -- accel/accel.sh@20 -- # IFS=: 00:07:40.059 01:28:52 -- accel/accel.sh@20 -- # read -r var val 00:07:40.059 01:28:52 -- accel/accel.sh@21 -- # val=software 00:07:40.059 01:28:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.059 01:28:52 -- accel/accel.sh@23 -- # accel_module=software 00:07:40.059 01:28:52 -- accel/accel.sh@20 -- # IFS=: 00:07:40.059 01:28:52 -- accel/accel.sh@20 -- # read -r var val 00:07:40.059 01:28:52 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:40.059 01:28:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.059 01:28:52 -- accel/accel.sh@20 -- # IFS=: 00:07:40.059 01:28:52 -- accel/accel.sh@20 -- # read -r var val 00:07:40.059 01:28:52 -- accel/accel.sh@21 -- # val=32 00:07:40.059 01:28:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.059 01:28:52 -- accel/accel.sh@20 -- # IFS=: 00:07:40.059 01:28:52 -- accel/accel.sh@20 -- # read -r var val 00:07:40.059 01:28:52 -- accel/accel.sh@21 -- # val=32 00:07:40.059 01:28:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.059 01:28:52 -- accel/accel.sh@20 -- # IFS=: 00:07:40.059 01:28:52 -- accel/accel.sh@20 -- # read -r var val 00:07:40.059 01:28:52 -- accel/accel.sh@21 -- # val=1 00:07:40.059 01:28:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.059 01:28:52 -- accel/accel.sh@20 -- # 
IFS=: 00:07:40.059 01:28:52 -- accel/accel.sh@20 -- # read -r var val 00:07:40.059 01:28:52 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:40.059 01:28:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.059 01:28:52 -- accel/accel.sh@20 -- # IFS=: 00:07:40.059 01:28:52 -- accel/accel.sh@20 -- # read -r var val 00:07:40.059 01:28:52 -- accel/accel.sh@21 -- # val=Yes 00:07:40.059 01:28:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.059 01:28:52 -- accel/accel.sh@20 -- # IFS=: 00:07:40.059 01:28:52 -- accel/accel.sh@20 -- # read -r var val 00:07:40.059 01:28:52 -- accel/accel.sh@21 -- # val= 00:07:40.059 01:28:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.059 01:28:52 -- accel/accel.sh@20 -- # IFS=: 00:07:40.059 01:28:52 -- accel/accel.sh@20 -- # read -r var val 00:07:40.059 01:28:52 -- accel/accel.sh@21 -- # val= 00:07:40.059 01:28:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.059 01:28:52 -- accel/accel.sh@20 -- # IFS=: 00:07:40.059 01:28:52 -- accel/accel.sh@20 -- # read -r var val 00:07:40.992 01:28:54 -- accel/accel.sh@21 -- # val= 00:07:40.992 01:28:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.992 01:28:54 -- accel/accel.sh@20 -- # IFS=: 00:07:40.992 01:28:54 -- accel/accel.sh@20 -- # read -r var val 00:07:40.992 01:28:54 -- accel/accel.sh@21 -- # val= 00:07:40.992 01:28:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.992 01:28:54 -- accel/accel.sh@20 -- # IFS=: 00:07:40.992 01:28:54 -- accel/accel.sh@20 -- # read -r var val 00:07:40.992 01:28:54 -- accel/accel.sh@21 -- # val= 00:07:40.992 01:28:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.992 01:28:54 -- accel/accel.sh@20 -- # IFS=: 00:07:40.992 01:28:54 -- accel/accel.sh@20 -- # read -r var val 00:07:40.992 01:28:54 -- accel/accel.sh@21 -- # val= 00:07:40.992 01:28:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.992 01:28:54 -- accel/accel.sh@20 -- # IFS=: 00:07:40.992 01:28:54 -- accel/accel.sh@20 -- # read -r var val 00:07:40.992 01:28:54 -- accel/accel.sh@21 
-- # val= 00:07:40.992 01:28:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.992 01:28:54 -- accel/accel.sh@20 -- # IFS=: 00:07:40.992 01:28:54 -- accel/accel.sh@20 -- # read -r var val 00:07:40.992 01:28:54 -- accel/accel.sh@21 -- # val= 00:07:40.992 01:28:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.992 01:28:54 -- accel/accel.sh@20 -- # IFS=: 00:07:40.992 01:28:54 -- accel/accel.sh@20 -- # read -r var val 00:07:40.992 01:28:54 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:40.992 01:28:54 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:40.992 01:28:54 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:40.992 00:07:40.992 real 0m2.808s 00:07:40.992 user 0m2.508s 00:07:40.992 sys 0m0.294s 00:07:40.992 01:28:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:40.992 01:28:54 -- common/autotest_common.sh@10 -- # set +x 00:07:40.992 ************************************ 00:07:40.992 END TEST accel_decomp 00:07:40.992 ************************************ 00:07:41.250 01:28:54 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:41.250 01:28:54 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:07:41.250 01:28:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:41.250 01:28:54 -- common/autotest_common.sh@10 -- # set +x 00:07:41.250 ************************************ 00:07:41.250 START TEST accel_decmop_full 00:07:41.250 ************************************ 00:07:41.250 01:28:54 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:41.250 01:28:54 -- accel/accel.sh@16 -- # local accel_opc 00:07:41.250 01:28:54 -- accel/accel.sh@17 -- # local accel_module 00:07:41.250 01:28:54 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 
-y -o 0 00:07:41.250 01:28:54 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:41.250 01:28:54 -- accel/accel.sh@12 -- # build_accel_config 00:07:41.250 01:28:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:41.250 01:28:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:41.250 01:28:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:41.250 01:28:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:41.250 01:28:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:41.250 01:28:54 -- accel/accel.sh@41 -- # local IFS=, 00:07:41.250 01:28:54 -- accel/accel.sh@42 -- # jq -r . 00:07:41.250 [2024-07-23 01:28:54.124572] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:41.250 [2024-07-23 01:28:54.124764] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3670539 ] 00:07:41.250 EAL: No free 2048 kB hugepages reported on node 1 00:07:41.250 [2024-07-23 01:28:54.187830] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.250 [2024-07-23 01:28:54.278885] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.621 01:28:55 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:42.621 00:07:42.621 SPDK Configuration: 00:07:42.621 Core mask: 0x1 00:07:42.621 00:07:42.621 Accel Perf Configuration: 00:07:42.621 Workload Type: decompress 00:07:42.621 Transfer size: 111250 bytes 00:07:42.621 Vector count 1 00:07:42.621 Module: software 00:07:42.621 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:42.621 Queue depth: 32 00:07:42.621 Allocate depth: 32 00:07:42.621 # threads/core: 1 00:07:42.621 Run time: 1 seconds 00:07:42.621 Verify: Yes 00:07:42.621 00:07:42.621 Running for 1 seconds... 00:07:42.621 00:07:42.621 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:42.621 ------------------------------------------------------------------------------------ 00:07:42.621 0,0 3808/s 157 MiB/s 0 0 00:07:42.621 ==================================================================================== 00:07:42.621 Total 3808/s 404 MiB/s 0 0' 00:07:42.621 01:28:55 -- accel/accel.sh@20 -- # IFS=: 00:07:42.621 01:28:55 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:42.621 01:28:55 -- accel/accel.sh@20 -- # read -r var val 00:07:42.621 01:28:55 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:42.621 01:28:55 -- accel/accel.sh@12 -- # build_accel_config 00:07:42.621 01:28:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:42.621 01:28:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:42.621 01:28:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:42.621 01:28:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:42.621 01:28:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:42.621 01:28:55 -- accel/accel.sh@41 -- # local IFS=, 00:07:42.621 01:28:55 -- accel/accel.sh@42 -- # jq -r . 
00:07:42.621 [2024-07-23 01:28:55.539427] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:42.621 [2024-07-23 01:28:55.539508] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3670680 ] 00:07:42.621 EAL: No free 2048 kB hugepages reported on node 1 00:07:42.621 [2024-07-23 01:28:55.604007] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.621 [2024-07-23 01:28:55.693992] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.879 01:28:55 -- accel/accel.sh@21 -- # val= 00:07:42.879 01:28:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.879 01:28:55 -- accel/accel.sh@20 -- # IFS=: 00:07:42.879 01:28:55 -- accel/accel.sh@20 -- # read -r var val 00:07:42.879 01:28:55 -- accel/accel.sh@21 -- # val= 00:07:42.879 01:28:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.879 01:28:55 -- accel/accel.sh@20 -- # IFS=: 00:07:42.879 01:28:55 -- accel/accel.sh@20 -- # read -r var val 00:07:42.879 01:28:55 -- accel/accel.sh@21 -- # val= 00:07:42.879 01:28:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.879 01:28:55 -- accel/accel.sh@20 -- # IFS=: 00:07:42.879 01:28:55 -- accel/accel.sh@20 -- # read -r var val 00:07:42.879 01:28:55 -- accel/accel.sh@21 -- # val=0x1 00:07:42.879 01:28:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.879 01:28:55 -- accel/accel.sh@20 -- # IFS=: 00:07:42.879 01:28:55 -- accel/accel.sh@20 -- # read -r var val 00:07:42.879 01:28:55 -- accel/accel.sh@21 -- # val= 00:07:42.879 01:28:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.879 01:28:55 -- accel/accel.sh@20 -- # IFS=: 00:07:42.879 01:28:55 -- accel/accel.sh@20 -- # read -r var val 00:07:42.879 01:28:55 -- accel/accel.sh@21 -- # val= 00:07:42.879 01:28:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.879 01:28:55 -- 
accel/accel.sh@20 -- # IFS=: 00:07:42.879 01:28:55 -- accel/accel.sh@20 -- # read -r var val 00:07:42.879 01:28:55 -- accel/accel.sh@21 -- # val=decompress 00:07:42.879 01:28:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.879 01:28:55 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:42.879 01:28:55 -- accel/accel.sh@20 -- # IFS=: 00:07:42.879 01:28:55 -- accel/accel.sh@20 -- # read -r var val 00:07:42.879 01:28:55 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:42.879 01:28:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.879 01:28:55 -- accel/accel.sh@20 -- # IFS=: 00:07:42.879 01:28:55 -- accel/accel.sh@20 -- # read -r var val 00:07:42.879 01:28:55 -- accel/accel.sh@21 -- # val= 00:07:42.879 01:28:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.879 01:28:55 -- accel/accel.sh@20 -- # IFS=: 00:07:42.879 01:28:55 -- accel/accel.sh@20 -- # read -r var val 00:07:42.879 01:28:55 -- accel/accel.sh@21 -- # val=software 00:07:42.879 01:28:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.879 01:28:55 -- accel/accel.sh@23 -- # accel_module=software 00:07:42.879 01:28:55 -- accel/accel.sh@20 -- # IFS=: 00:07:42.879 01:28:55 -- accel/accel.sh@20 -- # read -r var val 00:07:42.879 01:28:55 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:42.879 01:28:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.879 01:28:55 -- accel/accel.sh@20 -- # IFS=: 00:07:42.879 01:28:55 -- accel/accel.sh@20 -- # read -r var val 00:07:42.879 01:28:55 -- accel/accel.sh@21 -- # val=32 00:07:42.879 01:28:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.879 01:28:55 -- accel/accel.sh@20 -- # IFS=: 00:07:42.879 01:28:55 -- accel/accel.sh@20 -- # read -r var val 00:07:42.879 01:28:55 -- accel/accel.sh@21 -- # val=32 00:07:42.879 01:28:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.879 01:28:55 -- accel/accel.sh@20 -- # IFS=: 00:07:42.879 01:28:55 -- accel/accel.sh@20 -- # read -r var val 00:07:42.879 01:28:55 -- 
accel/accel.sh@21 -- # val=1 00:07:42.879 01:28:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.879 01:28:55 -- accel/accel.sh@20 -- # IFS=: 00:07:42.879 01:28:55 -- accel/accel.sh@20 -- # read -r var val 00:07:42.879 01:28:55 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:42.879 01:28:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.879 01:28:55 -- accel/accel.sh@20 -- # IFS=: 00:07:42.879 01:28:55 -- accel/accel.sh@20 -- # read -r var val 00:07:42.879 01:28:55 -- accel/accel.sh@21 -- # val=Yes 00:07:42.879 01:28:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.879 01:28:55 -- accel/accel.sh@20 -- # IFS=: 00:07:42.879 01:28:55 -- accel/accel.sh@20 -- # read -r var val 00:07:42.879 01:28:55 -- accel/accel.sh@21 -- # val= 00:07:42.879 01:28:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.879 01:28:55 -- accel/accel.sh@20 -- # IFS=: 00:07:42.879 01:28:55 -- accel/accel.sh@20 -- # read -r var val 00:07:42.879 01:28:55 -- accel/accel.sh@21 -- # val= 00:07:42.879 01:28:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.879 01:28:55 -- accel/accel.sh@20 -- # IFS=: 00:07:42.879 01:28:55 -- accel/accel.sh@20 -- # read -r var val 00:07:44.252 01:28:56 -- accel/accel.sh@21 -- # val= 00:07:44.252 01:28:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.252 01:28:56 -- accel/accel.sh@20 -- # IFS=: 00:07:44.252 01:28:56 -- accel/accel.sh@20 -- # read -r var val 00:07:44.252 01:28:56 -- accel/accel.sh@21 -- # val= 00:07:44.252 01:28:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.252 01:28:56 -- accel/accel.sh@20 -- # IFS=: 00:07:44.252 01:28:56 -- accel/accel.sh@20 -- # read -r var val 00:07:44.252 01:28:56 -- accel/accel.sh@21 -- # val= 00:07:44.252 01:28:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.252 01:28:56 -- accel/accel.sh@20 -- # IFS=: 00:07:44.252 01:28:56 -- accel/accel.sh@20 -- # read -r var val 00:07:44.252 01:28:56 -- accel/accel.sh@21 -- # val= 00:07:44.252 01:28:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.252 01:28:56 
-- accel/accel.sh@20 -- # IFS=: 00:07:44.252 01:28:56 -- accel/accel.sh@20 -- # read -r var val 00:07:44.252 01:28:56 -- accel/accel.sh@21 -- # val= 00:07:44.252 01:28:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.252 01:28:56 -- accel/accel.sh@20 -- # IFS=: 00:07:44.252 01:28:56 -- accel/accel.sh@20 -- # read -r var val 00:07:44.252 01:28:56 -- accel/accel.sh@21 -- # val= 00:07:44.252 01:28:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.252 01:28:56 -- accel/accel.sh@20 -- # IFS=: 00:07:44.252 01:28:56 -- accel/accel.sh@20 -- # read -r var val 00:07:44.252 01:28:56 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:44.252 01:28:56 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:44.252 01:28:56 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:44.252 00:07:44.252 real 0m2.835s 00:07:44.252 user 0m2.538s 00:07:44.252 sys 0m0.290s 00:07:44.252 01:28:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:44.252 01:28:56 -- common/autotest_common.sh@10 -- # set +x 00:07:44.252 ************************************ 00:07:44.252 END TEST accel_decmop_full 00:07:44.252 ************************************ 00:07:44.252 01:28:56 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:44.252 01:28:56 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:07:44.252 01:28:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:44.252 01:28:56 -- common/autotest_common.sh@10 -- # set +x 00:07:44.252 ************************************ 00:07:44.252 START TEST accel_decomp_mcore 00:07:44.252 ************************************ 00:07:44.252 01:28:56 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:44.253 01:28:56 -- accel/accel.sh@16 -- # local accel_opc 00:07:44.253 01:28:56 -- accel/accel.sh@17 -- # local 
accel_module 00:07:44.253 01:28:56 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:44.253 01:28:56 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:44.253 01:28:56 -- accel/accel.sh@12 -- # build_accel_config 00:07:44.253 01:28:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:44.253 01:28:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:44.253 01:28:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:44.253 01:28:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:44.253 01:28:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:44.253 01:28:56 -- accel/accel.sh@41 -- # local IFS=, 00:07:44.253 01:28:56 -- accel/accel.sh@42 -- # jq -r . 00:07:44.253 [2024-07-23 01:28:56.986053] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:44.253 [2024-07-23 01:28:56.986128] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3670908 ] 00:07:44.253 EAL: No free 2048 kB hugepages reported on node 1 00:07:44.253 [2024-07-23 01:28:57.047933] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:44.253 [2024-07-23 01:28:57.140572] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:44.253 [2024-07-23 01:28:57.140642] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:44.253 [2024-07-23 01:28:57.140692] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:44.253 [2024-07-23 01:28:57.140694] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.626 01:28:58 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:45.626 00:07:45.626 SPDK Configuration: 00:07:45.626 Core mask: 0xf 00:07:45.626 00:07:45.626 Accel Perf Configuration: 00:07:45.626 Workload Type: decompress 00:07:45.626 Transfer size: 4096 bytes 00:07:45.626 Vector count 1 00:07:45.626 Module: software 00:07:45.626 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:45.626 Queue depth: 32 00:07:45.626 Allocate depth: 32 00:07:45.626 # threads/core: 1 00:07:45.626 Run time: 1 seconds 00:07:45.626 Verify: Yes 00:07:45.626 00:07:45.626 Running for 1 seconds... 00:07:45.626 00:07:45.626 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:45.626 ------------------------------------------------------------------------------------ 00:07:45.626 0,0 56864/s 104 MiB/s 0 0 00:07:45.626 3,0 57184/s 105 MiB/s 0 0 00:07:45.626 2,0 57248/s 105 MiB/s 0 0 00:07:45.626 1,0 57152/s 105 MiB/s 0 0 00:07:45.626 ==================================================================================== 00:07:45.626 Total 228448/s 892 MiB/s 0 0' 00:07:45.626 01:28:58 -- accel/accel.sh@20 -- # IFS=: 00:07:45.626 01:28:58 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:45.626 01:28:58 -- accel/accel.sh@20 -- # read -r var val 00:07:45.626 01:28:58 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:45.626 01:28:58 -- accel/accel.sh@12 -- # build_accel_config 00:07:45.626 01:28:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:45.626 01:28:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:45.626 01:28:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:45.626 01:28:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:45.626 01:28:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:45.626 01:28:58 -- accel/accel.sh@41 -- # local IFS=, 00:07:45.626 01:28:58 -- 
accel/accel.sh@42 -- # jq -r . 00:07:45.626 [2024-07-23 01:28:58.388051] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:45.626 [2024-07-23 01:28:58.388136] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3671106 ] 00:07:45.626 EAL: No free 2048 kB hugepages reported on node 1 00:07:45.626 [2024-07-23 01:28:58.449391] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:45.626 [2024-07-23 01:28:58.542655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:45.626 [2024-07-23 01:28:58.542721] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:45.626 [2024-07-23 01:28:58.542823] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.626 [2024-07-23 01:28:58.542820] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:45.626 01:28:58 -- accel/accel.sh@21 -- # val= 00:07:45.626 01:28:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.626 01:28:58 -- accel/accel.sh@20 -- # IFS=: 00:07:45.626 01:28:58 -- accel/accel.sh@20 -- # read -r var val 00:07:45.626 01:28:58 -- accel/accel.sh@21 -- # val= 00:07:45.626 01:28:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.626 01:28:58 -- accel/accel.sh@20 -- # IFS=: 00:07:45.626 01:28:58 -- accel/accel.sh@20 -- # read -r var val 00:07:45.626 01:28:58 -- accel/accel.sh@21 -- # val= 00:07:45.626 01:28:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.626 01:28:58 -- accel/accel.sh@20 -- # IFS=: 00:07:45.626 01:28:58 -- accel/accel.sh@20 -- # read -r var val 00:07:45.626 01:28:58 -- accel/accel.sh@21 -- # val=0xf 00:07:45.626 01:28:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.626 01:28:58 -- accel/accel.sh@20 -- # IFS=: 00:07:45.626 01:28:58 -- accel/accel.sh@20 -- # read -r var val 00:07:45.626 01:28:58 -- 
accel/accel.sh@21 -- # val= 00:07:45.626 01:28:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.626 01:28:58 -- accel/accel.sh@20 -- # IFS=: 00:07:45.626 01:28:58 -- accel/accel.sh@20 -- # read -r var val 00:07:45.626 01:28:58 -- accel/accel.sh@21 -- # val= 00:07:45.626 01:28:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.626 01:28:58 -- accel/accel.sh@20 -- # IFS=: 00:07:45.626 01:28:58 -- accel/accel.sh@20 -- # read -r var val 00:07:45.626 01:28:58 -- accel/accel.sh@21 -- # val=decompress 00:07:45.626 01:28:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.626 01:28:58 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:45.626 01:28:58 -- accel/accel.sh@20 -- # IFS=: 00:07:45.626 01:28:58 -- accel/accel.sh@20 -- # read -r var val 00:07:45.626 01:28:58 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:45.626 01:28:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.626 01:28:58 -- accel/accel.sh@20 -- # IFS=: 00:07:45.626 01:28:58 -- accel/accel.sh@20 -- # read -r var val 00:07:45.626 01:28:58 -- accel/accel.sh@21 -- # val= 00:07:45.626 01:28:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.626 01:28:58 -- accel/accel.sh@20 -- # IFS=: 00:07:45.626 01:28:58 -- accel/accel.sh@20 -- # read -r var val 00:07:45.626 01:28:58 -- accel/accel.sh@21 -- # val=software 00:07:45.626 01:28:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.626 01:28:58 -- accel/accel.sh@23 -- # accel_module=software 00:07:45.626 01:28:58 -- accel/accel.sh@20 -- # IFS=: 00:07:45.626 01:28:58 -- accel/accel.sh@20 -- # read -r var val 00:07:45.626 01:28:58 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:45.626 01:28:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.626 01:28:58 -- accel/accel.sh@20 -- # IFS=: 00:07:45.626 01:28:58 -- accel/accel.sh@20 -- # read -r var val 00:07:45.626 01:28:58 -- accel/accel.sh@21 -- # val=32 00:07:45.626 01:28:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.626 01:28:58 -- 
accel/accel.sh@20 -- # IFS=: 00:07:45.626 01:28:58 -- accel/accel.sh@20 -- # read -r var val 00:07:45.626 01:28:58 -- accel/accel.sh@21 -- # val=32 00:07:45.626 01:28:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.626 01:28:58 -- accel/accel.sh@20 -- # IFS=: 00:07:45.626 01:28:58 -- accel/accel.sh@20 -- # read -r var val 00:07:45.626 01:28:58 -- accel/accel.sh@21 -- # val=1 00:07:45.626 01:28:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.626 01:28:58 -- accel/accel.sh@20 -- # IFS=: 00:07:45.626 01:28:58 -- accel/accel.sh@20 -- # read -r var val 00:07:45.626 01:28:58 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:45.626 01:28:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.626 01:28:58 -- accel/accel.sh@20 -- # IFS=: 00:07:45.626 01:28:58 -- accel/accel.sh@20 -- # read -r var val 00:07:45.626 01:28:58 -- accel/accel.sh@21 -- # val=Yes 00:07:45.626 01:28:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.626 01:28:58 -- accel/accel.sh@20 -- # IFS=: 00:07:45.626 01:28:58 -- accel/accel.sh@20 -- # read -r var val 00:07:45.626 01:28:58 -- accel/accel.sh@21 -- # val= 00:07:45.626 01:28:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.626 01:28:58 -- accel/accel.sh@20 -- # IFS=: 00:07:45.626 01:28:58 -- accel/accel.sh@20 -- # read -r var val 00:07:45.626 01:28:58 -- accel/accel.sh@21 -- # val= 00:07:45.626 01:28:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.626 01:28:58 -- accel/accel.sh@20 -- # IFS=: 00:07:45.626 01:28:58 -- accel/accel.sh@20 -- # read -r var val 00:07:47.000 01:28:59 -- accel/accel.sh@21 -- # val= 00:07:47.000 01:28:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.000 01:28:59 -- accel/accel.sh@20 -- # IFS=: 00:07:47.000 01:28:59 -- accel/accel.sh@20 -- # read -r var val 00:07:47.000 01:28:59 -- accel/accel.sh@21 -- # val= 00:07:47.000 01:28:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.000 01:28:59 -- accel/accel.sh@20 -- # IFS=: 00:07:47.000 01:28:59 -- accel/accel.sh@20 -- # read -r var val 00:07:47.000 
01:28:59 -- accel/accel.sh@21 -- # val= 00:07:47.000 01:28:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.000 01:28:59 -- accel/accel.sh@20 -- # IFS=: 00:07:47.000 01:28:59 -- accel/accel.sh@20 -- # read -r var val 00:07:47.000 01:28:59 -- accel/accel.sh@21 -- # val= 00:07:47.000 01:28:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.000 01:28:59 -- accel/accel.sh@20 -- # IFS=: 00:07:47.000 01:28:59 -- accel/accel.sh@20 -- # read -r var val 00:07:47.000 01:28:59 -- accel/accel.sh@21 -- # val= 00:07:47.000 01:28:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.000 01:28:59 -- accel/accel.sh@20 -- # IFS=: 00:07:47.000 01:28:59 -- accel/accel.sh@20 -- # read -r var val 00:07:47.000 01:28:59 -- accel/accel.sh@21 -- # val= 00:07:47.000 01:28:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.000 01:28:59 -- accel/accel.sh@20 -- # IFS=: 00:07:47.000 01:28:59 -- accel/accel.sh@20 -- # read -r var val 00:07:47.000 01:28:59 -- accel/accel.sh@21 -- # val= 00:07:47.000 01:28:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.000 01:28:59 -- accel/accel.sh@20 -- # IFS=: 00:07:47.000 01:28:59 -- accel/accel.sh@20 -- # read -r var val 00:07:47.000 01:28:59 -- accel/accel.sh@21 -- # val= 00:07:47.000 01:28:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.000 01:28:59 -- accel/accel.sh@20 -- # IFS=: 00:07:47.000 01:28:59 -- accel/accel.sh@20 -- # read -r var val 00:07:47.000 01:28:59 -- accel/accel.sh@21 -- # val= 00:07:47.000 01:28:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.000 01:28:59 -- accel/accel.sh@20 -- # IFS=: 00:07:47.000 01:28:59 -- accel/accel.sh@20 -- # read -r var val 00:07:47.000 01:28:59 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:47.000 01:28:59 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:47.000 01:28:59 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:47.000 00:07:47.000 real 0m2.815s 00:07:47.000 user 0m9.384s 00:07:47.000 sys 0m0.306s 00:07:47.000 01:28:59 -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:07:47.000 01:28:59 -- common/autotest_common.sh@10 -- # set +x 00:07:47.000 ************************************ 00:07:47.000 END TEST accel_decomp_mcore 00:07:47.000 ************************************ 00:07:47.000 01:28:59 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:47.000 01:28:59 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:07:47.000 01:28:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:47.000 01:28:59 -- common/autotest_common.sh@10 -- # set +x 00:07:47.000 ************************************ 00:07:47.000 START TEST accel_decomp_full_mcore 00:07:47.001 ************************************ 00:07:47.001 01:28:59 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:47.001 01:28:59 -- accel/accel.sh@16 -- # local accel_opc 00:07:47.001 01:28:59 -- accel/accel.sh@17 -- # local accel_module 00:07:47.001 01:28:59 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:47.001 01:28:59 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:47.001 01:28:59 -- accel/accel.sh@12 -- # build_accel_config 00:07:47.001 01:28:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:47.001 01:28:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:47.001 01:28:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:47.001 01:28:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:47.001 01:28:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:47.001 01:28:59 -- accel/accel.sh@41 -- # local IFS=, 00:07:47.001 01:28:59 -- accel/accel.sh@42 -- # jq -r . 
00:07:47.001 [2024-07-23 01:28:59.824857] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
00:07:47.001 [2024-07-23 01:28:59.824943] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3671266 ]
00:07:47.001 EAL: No free 2048 kB hugepages reported on node 1
00:07:47.001 [2024-07-23 01:28:59.888412] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
00:07:47.001 [2024-07-23 01:28:59.981740] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:07:47.001 [2024-07-23 01:28:59.981795] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:07:47.001 [2024-07-23 01:28:59.981914] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:07:47.001 [2024-07-23 01:28:59.981917] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:48.373 01:29:01 -- accel/accel.sh@18 -- # out='Preparing input file...
00:07:48.373
00:07:48.373 SPDK Configuration:
00:07:48.373 Core mask: 0xf
00:07:48.373
00:07:48.373 Accel Perf Configuration:
00:07:48.373 Workload Type: decompress
00:07:48.373 Transfer size: 111250 bytes
00:07:48.373 Vector count 1
00:07:48.373 Module: software
00:07:48.373 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:07:48.373 Queue depth: 32
00:07:48.373 Allocate depth: 32
00:07:48.373 # threads/core: 1
00:07:48.373 Run time: 1 seconds
00:07:48.373 Verify: Yes
00:07:48.373
00:07:48.373 Running for 1 seconds...
00:07:48.373
00:07:48.373 Core,Thread Transfers Bandwidth Failed Miscompares
00:07:48.373 ------------------------------------------------------------------------------------
00:07:48.373 0,0 4256/s 175 MiB/s 0 0
00:07:48.373 3,0 4256/s 175 MiB/s 0 0
00:07:48.373 2,0 4256/s 175 MiB/s 0 0
00:07:48.373 1,0 4256/s 175 MiB/s 0 0
00:07:48.373 ====================================================================================
00:07:48.373 Total 17024/s 1806 MiB/s 0 0'
00:07:48.373 01:29:01 -- accel/accel.sh@20 -- # IFS=:
00:07:48.373 01:29:01 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
00:07:48.373 01:29:01 -- accel/accel.sh@20 -- # read -r var val
00:07:48.373 01:29:01 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
00:07:48.373 01:29:01 -- accel/accel.sh@12 -- # build_accel_config
00:07:48.373 01:29:01 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:07:48.373 01:29:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:48.373 01:29:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:48.373 01:29:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:07:48.373 01:29:01 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:07:48.373 01:29:01 -- accel/accel.sh@41 -- # local IFS=,
00:07:48.373 01:29:01 -- accel/accel.sh@42 -- # jq -r .
00:07:48.373 [2024-07-23 01:29:01.253733] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
00:07:48.373 [2024-07-23 01:29:01.253807] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3671416 ] 00:07:48.373 EAL: No free 2048 kB hugepages reported on node 1 00:07:48.373 [2024-07-23 01:29:01.319134] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:48.373 [2024-07-23 01:29:01.412427] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:48.373 [2024-07-23 01:29:01.412483] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:48.373 [2024-07-23 01:29:01.412601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:48.373 [2024-07-23 01:29:01.412604] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.630 01:29:01 -- accel/accel.sh@21 -- # val= 00:07:48.630 01:29:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.630 01:29:01 -- accel/accel.sh@20 -- # IFS=: 00:07:48.630 01:29:01 -- accel/accel.sh@20 -- # read -r var val 00:07:48.630 01:29:01 -- accel/accel.sh@21 -- # val= 00:07:48.630 01:29:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.630 01:29:01 -- accel/accel.sh@20 -- # IFS=: 00:07:48.630 01:29:01 -- accel/accel.sh@20 -- # read -r var val 00:07:48.630 01:29:01 -- accel/accel.sh@21 -- # val= 00:07:48.630 01:29:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.630 01:29:01 -- accel/accel.sh@20 -- # IFS=: 00:07:48.630 01:29:01 -- accel/accel.sh@20 -- # read -r var val 00:07:48.630 01:29:01 -- accel/accel.sh@21 -- # val=0xf 00:07:48.630 01:29:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.630 01:29:01 -- accel/accel.sh@20 -- # IFS=: 00:07:48.630 01:29:01 -- accel/accel.sh@20 -- # read -r var val 00:07:48.630 01:29:01 -- accel/accel.sh@21 -- # val= 00:07:48.630 01:29:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.630 01:29:01 -- accel/accel.sh@20 -- # IFS=: 00:07:48.630 01:29:01 
-- accel/accel.sh@20 -- # read -r var val 00:07:48.630 01:29:01 -- accel/accel.sh@21 -- # val= 00:07:48.630 01:29:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.630 01:29:01 -- accel/accel.sh@20 -- # IFS=: 00:07:48.630 01:29:01 -- accel/accel.sh@20 -- # read -r var val 00:07:48.630 01:29:01 -- accel/accel.sh@21 -- # val=decompress 00:07:48.630 01:29:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.630 01:29:01 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:48.630 01:29:01 -- accel/accel.sh@20 -- # IFS=: 00:07:48.631 01:29:01 -- accel/accel.sh@20 -- # read -r var val 00:07:48.631 01:29:01 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:48.631 01:29:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.631 01:29:01 -- accel/accel.sh@20 -- # IFS=: 00:07:48.631 01:29:01 -- accel/accel.sh@20 -- # read -r var val 00:07:48.631 01:29:01 -- accel/accel.sh@21 -- # val= 00:07:48.631 01:29:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.631 01:29:01 -- accel/accel.sh@20 -- # IFS=: 00:07:48.631 01:29:01 -- accel/accel.sh@20 -- # read -r var val 00:07:48.631 01:29:01 -- accel/accel.sh@21 -- # val=software 00:07:48.631 01:29:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.631 01:29:01 -- accel/accel.sh@23 -- # accel_module=software 00:07:48.631 01:29:01 -- accel/accel.sh@20 -- # IFS=: 00:07:48.631 01:29:01 -- accel/accel.sh@20 -- # read -r var val 00:07:48.631 01:29:01 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:48.631 01:29:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.631 01:29:01 -- accel/accel.sh@20 -- # IFS=: 00:07:48.631 01:29:01 -- accel/accel.sh@20 -- # read -r var val 00:07:48.631 01:29:01 -- accel/accel.sh@21 -- # val=32 00:07:48.631 01:29:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.631 01:29:01 -- accel/accel.sh@20 -- # IFS=: 00:07:48.631 01:29:01 -- accel/accel.sh@20 -- # read -r var val 00:07:48.631 01:29:01 -- accel/accel.sh@21 -- # val=32 00:07:48.631 01:29:01 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:48.631 01:29:01 -- accel/accel.sh@20 -- # IFS=: 00:07:48.631 01:29:01 -- accel/accel.sh@20 -- # read -r var val 00:07:48.631 01:29:01 -- accel/accel.sh@21 -- # val=1 00:07:48.631 01:29:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.631 01:29:01 -- accel/accel.sh@20 -- # IFS=: 00:07:48.631 01:29:01 -- accel/accel.sh@20 -- # read -r var val 00:07:48.631 01:29:01 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:48.631 01:29:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.631 01:29:01 -- accel/accel.sh@20 -- # IFS=: 00:07:48.631 01:29:01 -- accel/accel.sh@20 -- # read -r var val 00:07:48.631 01:29:01 -- accel/accel.sh@21 -- # val=Yes 00:07:48.631 01:29:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.631 01:29:01 -- accel/accel.sh@20 -- # IFS=: 00:07:48.631 01:29:01 -- accel/accel.sh@20 -- # read -r var val 00:07:48.631 01:29:01 -- accel/accel.sh@21 -- # val= 00:07:48.631 01:29:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.631 01:29:01 -- accel/accel.sh@20 -- # IFS=: 00:07:48.631 01:29:01 -- accel/accel.sh@20 -- # read -r var val 00:07:48.631 01:29:01 -- accel/accel.sh@21 -- # val= 00:07:48.631 01:29:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.631 01:29:01 -- accel/accel.sh@20 -- # IFS=: 00:07:48.631 01:29:01 -- accel/accel.sh@20 -- # read -r var val 00:07:49.562 01:29:02 -- accel/accel.sh@21 -- # val= 00:07:49.562 01:29:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.562 01:29:02 -- accel/accel.sh@20 -- # IFS=: 00:07:49.562 01:29:02 -- accel/accel.sh@20 -- # read -r var val 00:07:49.562 01:29:02 -- accel/accel.sh@21 -- # val= 00:07:49.562 01:29:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.562 01:29:02 -- accel/accel.sh@20 -- # IFS=: 00:07:49.562 01:29:02 -- accel/accel.sh@20 -- # read -r var val 00:07:49.562 01:29:02 -- accel/accel.sh@21 -- # val= 00:07:49.562 01:29:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.562 01:29:02 -- accel/accel.sh@20 -- # IFS=: 00:07:49.562 
01:29:02 -- accel/accel.sh@20 -- # read -r var val 00:07:49.562 01:29:02 -- accel/accel.sh@21 -- # val= 00:07:49.562 01:29:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.562 01:29:02 -- accel/accel.sh@20 -- # IFS=: 00:07:49.562 01:29:02 -- accel/accel.sh@20 -- # read -r var val 00:07:49.562 01:29:02 -- accel/accel.sh@21 -- # val= 00:07:49.562 01:29:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.562 01:29:02 -- accel/accel.sh@20 -- # IFS=: 00:07:49.820 01:29:02 -- accel/accel.sh@20 -- # read -r var val 00:07:49.820 01:29:02 -- accel/accel.sh@21 -- # val= 00:07:49.820 01:29:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.820 01:29:02 -- accel/accel.sh@20 -- # IFS=: 00:07:49.820 01:29:02 -- accel/accel.sh@20 -- # read -r var val 00:07:49.820 01:29:02 -- accel/accel.sh@21 -- # val= 00:07:49.820 01:29:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.820 01:29:02 -- accel/accel.sh@20 -- # IFS=: 00:07:49.820 01:29:02 -- accel/accel.sh@20 -- # read -r var val 00:07:49.820 01:29:02 -- accel/accel.sh@21 -- # val= 00:07:49.820 01:29:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.820 01:29:02 -- accel/accel.sh@20 -- # IFS=: 00:07:49.820 01:29:02 -- accel/accel.sh@20 -- # read -r var val 00:07:49.820 01:29:02 -- accel/accel.sh@21 -- # val= 00:07:49.820 01:29:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.820 01:29:02 -- accel/accel.sh@20 -- # IFS=: 00:07:49.820 01:29:02 -- accel/accel.sh@20 -- # read -r var val 00:07:49.820 01:29:02 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:49.820 01:29:02 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:49.820 01:29:02 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:49.820 00:07:49.820 real 0m2.857s 00:07:49.820 user 0m9.530s 00:07:49.820 sys 0m0.307s 00:07:49.820 01:29:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:49.820 01:29:02 -- common/autotest_common.sh@10 -- # set +x 00:07:49.820 ************************************ 00:07:49.820 END TEST 
accel_decomp_full_mcore 00:07:49.820 ************************************ 00:07:49.820 01:29:02 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:49.820 01:29:02 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:07:49.820 01:29:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:49.820 01:29:02 -- common/autotest_common.sh@10 -- # set +x 00:07:49.820 ************************************ 00:07:49.820 START TEST accel_decomp_mthread 00:07:49.820 ************************************ 00:07:49.820 01:29:02 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:49.820 01:29:02 -- accel/accel.sh@16 -- # local accel_opc 00:07:49.820 01:29:02 -- accel/accel.sh@17 -- # local accel_module 00:07:49.820 01:29:02 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:49.820 01:29:02 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:49.820 01:29:02 -- accel/accel.sh@12 -- # build_accel_config 00:07:49.820 01:29:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:49.820 01:29:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:49.821 01:29:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:49.821 01:29:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:49.821 01:29:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:49.821 01:29:02 -- accel/accel.sh@41 -- # local IFS=, 00:07:49.821 01:29:02 -- accel/accel.sh@42 -- # jq -r . 00:07:49.821 [2024-07-23 01:29:02.704561] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:07:49.821 [2024-07-23 01:29:02.704654] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3671662 ]
00:07:49.821 EAL: No free 2048 kB hugepages reported on node 1
00:07:49.821 [2024-07-23 01:29:02.765226] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:49.821 [2024-07-23 01:29:02.855333] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:51.192 01:29:04 -- accel/accel.sh@18 -- # out='Preparing input file...
00:07:51.192
00:07:51.192 SPDK Configuration:
00:07:51.192 Core mask: 0x1
00:07:51.192
00:07:51.192 Accel Perf Configuration:
00:07:51.192 Workload Type: decompress
00:07:51.192 Transfer size: 4096 bytes
00:07:51.192 Vector count 1
00:07:51.192 Module: software
00:07:51.192 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:07:51.192 Queue depth: 32
00:07:51.192 Allocate depth: 32
00:07:51.192 # threads/core: 2
00:07:51.192 Run time: 1 seconds
00:07:51.192 Verify: Yes
00:07:51.192
00:07:51.192 Running for 1 seconds...
00:07:51.192
00:07:51.192 Core,Thread Transfers Bandwidth Failed Miscompares
00:07:51.192 ------------------------------------------------------------------------------------
00:07:51.192 0,1 28160/s 51 MiB/s 0 0
00:07:51.192 0,0 28064/s 51 MiB/s 0 0
00:07:51.192 ====================================================================================
00:07:51.192 Total 56224/s 219 MiB/s 0 0'
00:07:51.192 01:29:04 -- accel/accel.sh@20 -- # IFS=:
00:07:51.192 01:29:04 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2
00:07:51.192 01:29:04 -- accel/accel.sh@20 -- # read -r var val
00:07:51.192 01:29:04 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2
00:07:51.192 01:29:04 -- accel/accel.sh@12 -- # build_accel_config
00:07:51.192 01:29:04 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:07:51.192 01:29:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:51.192 01:29:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:51.192 01:29:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:07:51.192 01:29:04 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:07:51.192 01:29:04 -- accel/accel.sh@41 -- # local IFS=,
00:07:51.193 01:29:04 -- accel/accel.sh@42 -- # jq -r .
00:07:51.193 [2024-07-23 01:29:04.097823] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
00:07:51.193 [2024-07-23 01:29:04.097897] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3671844 ] 00:07:51.193 EAL: No free 2048 kB hugepages reported on node 1 00:07:51.193 [2024-07-23 01:29:04.159733] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.193 [2024-07-23 01:29:04.249807] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.450 01:29:04 -- accel/accel.sh@21 -- # val= 00:07:51.450 01:29:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.450 01:29:04 -- accel/accel.sh@20 -- # IFS=: 00:07:51.450 01:29:04 -- accel/accel.sh@20 -- # read -r var val 00:07:51.450 01:29:04 -- accel/accel.sh@21 -- # val= 00:07:51.450 01:29:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.450 01:29:04 -- accel/accel.sh@20 -- # IFS=: 00:07:51.450 01:29:04 -- accel/accel.sh@20 -- # read -r var val 00:07:51.450 01:29:04 -- accel/accel.sh@21 -- # val= 00:07:51.450 01:29:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.450 01:29:04 -- accel/accel.sh@20 -- # IFS=: 00:07:51.450 01:29:04 -- accel/accel.sh@20 -- # read -r var val 00:07:51.450 01:29:04 -- accel/accel.sh@21 -- # val=0x1 00:07:51.450 01:29:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.450 01:29:04 -- accel/accel.sh@20 -- # IFS=: 00:07:51.450 01:29:04 -- accel/accel.sh@20 -- # read -r var val 00:07:51.450 01:29:04 -- accel/accel.sh@21 -- # val= 00:07:51.450 01:29:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.451 01:29:04 -- accel/accel.sh@20 -- # IFS=: 00:07:51.451 01:29:04 -- accel/accel.sh@20 -- # read -r var val 00:07:51.451 01:29:04 -- accel/accel.sh@21 -- # val= 00:07:51.451 01:29:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.451 01:29:04 -- accel/accel.sh@20 -- # IFS=: 00:07:51.451 01:29:04 -- accel/accel.sh@20 -- # read -r var val 00:07:51.451 01:29:04 -- accel/accel.sh@21 
-- # val=decompress 00:07:51.451 01:29:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.451 01:29:04 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:51.451 01:29:04 -- accel/accel.sh@20 -- # IFS=: 00:07:51.451 01:29:04 -- accel/accel.sh@20 -- # read -r var val 00:07:51.451 01:29:04 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:51.451 01:29:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.451 01:29:04 -- accel/accel.sh@20 -- # IFS=: 00:07:51.451 01:29:04 -- accel/accel.sh@20 -- # read -r var val 00:07:51.451 01:29:04 -- accel/accel.sh@21 -- # val= 00:07:51.451 01:29:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.451 01:29:04 -- accel/accel.sh@20 -- # IFS=: 00:07:51.451 01:29:04 -- accel/accel.sh@20 -- # read -r var val 00:07:51.451 01:29:04 -- accel/accel.sh@21 -- # val=software 00:07:51.451 01:29:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.451 01:29:04 -- accel/accel.sh@23 -- # accel_module=software 00:07:51.451 01:29:04 -- accel/accel.sh@20 -- # IFS=: 00:07:51.451 01:29:04 -- accel/accel.sh@20 -- # read -r var val 00:07:51.451 01:29:04 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:51.451 01:29:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.451 01:29:04 -- accel/accel.sh@20 -- # IFS=: 00:07:51.451 01:29:04 -- accel/accel.sh@20 -- # read -r var val 00:07:51.451 01:29:04 -- accel/accel.sh@21 -- # val=32 00:07:51.451 01:29:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.451 01:29:04 -- accel/accel.sh@20 -- # IFS=: 00:07:51.451 01:29:04 -- accel/accel.sh@20 -- # read -r var val 00:07:51.451 01:29:04 -- accel/accel.sh@21 -- # val=32 00:07:51.451 01:29:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.451 01:29:04 -- accel/accel.sh@20 -- # IFS=: 00:07:51.451 01:29:04 -- accel/accel.sh@20 -- # read -r var val 00:07:51.451 01:29:04 -- accel/accel.sh@21 -- # val=2 00:07:51.451 01:29:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.451 01:29:04 -- accel/accel.sh@20 -- # 
IFS=: 00:07:51.451 01:29:04 -- accel/accel.sh@20 -- # read -r var val 00:07:51.451 01:29:04 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:51.451 01:29:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.451 01:29:04 -- accel/accel.sh@20 -- # IFS=: 00:07:51.451 01:29:04 -- accel/accel.sh@20 -- # read -r var val 00:07:51.451 01:29:04 -- accel/accel.sh@21 -- # val=Yes 00:07:51.451 01:29:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.451 01:29:04 -- accel/accel.sh@20 -- # IFS=: 00:07:51.451 01:29:04 -- accel/accel.sh@20 -- # read -r var val 00:07:51.451 01:29:04 -- accel/accel.sh@21 -- # val= 00:07:51.451 01:29:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.451 01:29:04 -- accel/accel.sh@20 -- # IFS=: 00:07:51.451 01:29:04 -- accel/accel.sh@20 -- # read -r var val 00:07:51.451 01:29:04 -- accel/accel.sh@21 -- # val= 00:07:51.451 01:29:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.451 01:29:04 -- accel/accel.sh@20 -- # IFS=: 00:07:51.451 01:29:04 -- accel/accel.sh@20 -- # read -r var val 00:07:52.823 01:29:05 -- accel/accel.sh@21 -- # val= 00:07:52.823 01:29:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.823 01:29:05 -- accel/accel.sh@20 -- # IFS=: 00:07:52.823 01:29:05 -- accel/accel.sh@20 -- # read -r var val 00:07:52.823 01:29:05 -- accel/accel.sh@21 -- # val= 00:07:52.823 01:29:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.823 01:29:05 -- accel/accel.sh@20 -- # IFS=: 00:07:52.823 01:29:05 -- accel/accel.sh@20 -- # read -r var val 00:07:52.823 01:29:05 -- accel/accel.sh@21 -- # val= 00:07:52.824 01:29:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.824 01:29:05 -- accel/accel.sh@20 -- # IFS=: 00:07:52.824 01:29:05 -- accel/accel.sh@20 -- # read -r var val 00:07:52.824 01:29:05 -- accel/accel.sh@21 -- # val= 00:07:52.824 01:29:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.824 01:29:05 -- accel/accel.sh@20 -- # IFS=: 00:07:52.824 01:29:05 -- accel/accel.sh@20 -- # read -r var val 00:07:52.824 01:29:05 -- accel/accel.sh@21 
-- # val= 00:07:52.824 01:29:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.824 01:29:05 -- accel/accel.sh@20 -- # IFS=: 00:07:52.824 01:29:05 -- accel/accel.sh@20 -- # read -r var val 00:07:52.824 01:29:05 -- accel/accel.sh@21 -- # val= 00:07:52.824 01:29:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.824 01:29:05 -- accel/accel.sh@20 -- # IFS=: 00:07:52.824 01:29:05 -- accel/accel.sh@20 -- # read -r var val 00:07:52.824 01:29:05 -- accel/accel.sh@21 -- # val= 00:07:52.824 01:29:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.824 01:29:05 -- accel/accel.sh@20 -- # IFS=: 00:07:52.824 01:29:05 -- accel/accel.sh@20 -- # read -r var val 00:07:52.824 01:29:05 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:52.824 01:29:05 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:52.824 01:29:05 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:52.824 00:07:52.824 real 0m2.803s 00:07:52.824 user 0m2.513s 00:07:52.824 sys 0m0.284s 00:07:52.824 01:29:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:52.824 01:29:05 -- common/autotest_common.sh@10 -- # set +x 00:07:52.824 ************************************ 00:07:52.824 END TEST accel_decomp_mthread 00:07:52.824 ************************************ 00:07:52.824 01:29:05 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:52.824 01:29:05 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:07:52.824 01:29:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:52.824 01:29:05 -- common/autotest_common.sh@10 -- # set +x 00:07:52.824 ************************************ 00:07:52.824 START TEST accel_deomp_full_mthread 00:07:52.824 ************************************ 00:07:52.824 01:29:05 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 
00:07:52.824 01:29:05 -- accel/accel.sh@16 -- # local accel_opc 00:07:52.824 01:29:05 -- accel/accel.sh@17 -- # local accel_module 00:07:52.824 01:29:05 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:52.824 01:29:05 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:52.824 01:29:05 -- accel/accel.sh@12 -- # build_accel_config 00:07:52.824 01:29:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:52.824 01:29:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:52.824 01:29:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:52.824 01:29:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:52.824 01:29:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:52.824 01:29:05 -- accel/accel.sh@41 -- # local IFS=, 00:07:52.824 01:29:05 -- accel/accel.sh@42 -- # jq -r . 00:07:52.824 [2024-07-23 01:29:05.539826] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:52.824 [2024-07-23 01:29:05.539908] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3672001 ] 00:07:52.824 EAL: No free 2048 kB hugepages reported on node 1 00:07:52.824 [2024-07-23 01:29:05.598006] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.824 [2024-07-23 01:29:05.686773] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.197 01:29:06 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:54.197
00:07:54.197 SPDK Configuration:
00:07:54.197 Core mask: 0x1
00:07:54.197
00:07:54.197 Accel Perf Configuration:
00:07:54.197 Workload Type: decompress
00:07:54.197 Transfer size: 111250 bytes
00:07:54.197 Vector count 1
00:07:54.197 Module: software
00:07:54.197 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:07:54.197 Queue depth: 32
00:07:54.197 Allocate depth: 32
00:07:54.197 # threads/core: 2
00:07:54.197 Run time: 1 seconds
00:07:54.197 Verify: Yes
00:07:54.197
00:07:54.197 Running for 1 seconds...
00:07:54.197
00:07:54.197 Core,Thread Transfers Bandwidth Failed Miscompares
00:07:54.197 ------------------------------------------------------------------------------------
00:07:54.197 0,1 1952/s 80 MiB/s 0 0
00:07:54.197 0,0 1920/s 79 MiB/s 0 0
00:07:54.197 ====================================================================================
00:07:54.197 Total 3872/s 410 MiB/s 0 0'
00:07:54.197 01:29:06 -- accel/accel.sh@20 -- # IFS=:
00:07:54.197 01:29:06 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2
00:07:54.197 01:29:06 -- accel/accel.sh@20 -- # read -r var val
00:07:54.197 01:29:06 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2
00:07:54.197 01:29:06 -- accel/accel.sh@12 -- # build_accel_config
00:07:54.197 01:29:06 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:07:54.197 01:29:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:54.197 01:29:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:54.197 01:29:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:07:54.197 01:29:06 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:07:54.197 01:29:06 -- accel/accel.sh@41 -- # local IFS=,
00:07:54.197 01:29:06 -- accel/accel.sh@42 -- # jq -r .
00:07:54.197 [2024-07-23 01:29:06.966580] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:54.197 [2024-07-23 01:29:06.966696] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3672142 ] 00:07:54.197 EAL: No free 2048 kB hugepages reported on node 1 00:07:54.197 [2024-07-23 01:29:07.026939] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.197 [2024-07-23 01:29:07.118318] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.197 01:29:07 -- accel/accel.sh@21 -- # val= 00:07:54.197 01:29:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.197 01:29:07 -- accel/accel.sh@20 -- # IFS=: 00:07:54.197 01:29:07 -- accel/accel.sh@20 -- # read -r var val 00:07:54.198 01:29:07 -- accel/accel.sh@21 -- # val= 00:07:54.198 01:29:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.198 01:29:07 -- accel/accel.sh@20 -- # IFS=: 00:07:54.198 01:29:07 -- accel/accel.sh@20 -- # read -r var val 00:07:54.198 01:29:07 -- accel/accel.sh@21 -- # val= 00:07:54.198 01:29:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.198 01:29:07 -- accel/accel.sh@20 -- # IFS=: 00:07:54.198 01:29:07 -- accel/accel.sh@20 -- # read -r var val 00:07:54.198 01:29:07 -- accel/accel.sh@21 -- # val=0x1 00:07:54.198 01:29:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.198 01:29:07 -- accel/accel.sh@20 -- # IFS=: 00:07:54.198 01:29:07 -- accel/accel.sh@20 -- # read -r var val 00:07:54.198 01:29:07 -- accel/accel.sh@21 -- # val= 00:07:54.198 01:29:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.198 01:29:07 -- accel/accel.sh@20 -- # IFS=: 00:07:54.198 01:29:07 -- accel/accel.sh@20 -- # read -r var val 00:07:54.198 01:29:07 -- accel/accel.sh@21 -- # val= 00:07:54.198 01:29:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.198 01:29:07 -- 
accel/accel.sh@20 -- # IFS=: 00:07:54.198 01:29:07 -- accel/accel.sh@20 -- # read -r var val 00:07:54.198 01:29:07 -- accel/accel.sh@21 -- # val=decompress 00:07:54.198 01:29:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.198 01:29:07 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:54.198 01:29:07 -- accel/accel.sh@20 -- # IFS=: 00:07:54.198 01:29:07 -- accel/accel.sh@20 -- # read -r var val 00:07:54.198 01:29:07 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:54.198 01:29:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.198 01:29:07 -- accel/accel.sh@20 -- # IFS=: 00:07:54.198 01:29:07 -- accel/accel.sh@20 -- # read -r var val 00:07:54.198 01:29:07 -- accel/accel.sh@21 -- # val= 00:07:54.198 01:29:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.198 01:29:07 -- accel/accel.sh@20 -- # IFS=: 00:07:54.198 01:29:07 -- accel/accel.sh@20 -- # read -r var val 00:07:54.198 01:29:07 -- accel/accel.sh@21 -- # val=software 00:07:54.198 01:29:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.198 01:29:07 -- accel/accel.sh@23 -- # accel_module=software 00:07:54.198 01:29:07 -- accel/accel.sh@20 -- # IFS=: 00:07:54.198 01:29:07 -- accel/accel.sh@20 -- # read -r var val 00:07:54.198 01:29:07 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:54.198 01:29:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.198 01:29:07 -- accel/accel.sh@20 -- # IFS=: 00:07:54.198 01:29:07 -- accel/accel.sh@20 -- # read -r var val 00:07:54.198 01:29:07 -- accel/accel.sh@21 -- # val=32 00:07:54.198 01:29:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.198 01:29:07 -- accel/accel.sh@20 -- # IFS=: 00:07:54.198 01:29:07 -- accel/accel.sh@20 -- # read -r var val 00:07:54.198 01:29:07 -- accel/accel.sh@21 -- # val=32 00:07:54.198 01:29:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.198 01:29:07 -- accel/accel.sh@20 -- # IFS=: 00:07:54.198 01:29:07 -- accel/accel.sh@20 -- # read -r var val 00:07:54.198 01:29:07 -- 
accel/accel.sh@21 -- # val=2 00:07:54.198 01:29:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.198 01:29:07 -- accel/accel.sh@20 -- # IFS=: 00:07:54.198 01:29:07 -- accel/accel.sh@20 -- # read -r var val 00:07:54.198 01:29:07 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:54.198 01:29:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.198 01:29:07 -- accel/accel.sh@20 -- # IFS=: 00:07:54.198 01:29:07 -- accel/accel.sh@20 -- # read -r var val 00:07:54.198 01:29:07 -- accel/accel.sh@21 -- # val=Yes 00:07:54.198 01:29:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.198 01:29:07 -- accel/accel.sh@20 -- # IFS=: 00:07:54.198 01:29:07 -- accel/accel.sh@20 -- # read -r var val 00:07:54.198 01:29:07 -- accel/accel.sh@21 -- # val= 00:07:54.198 01:29:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.198 01:29:07 -- accel/accel.sh@20 -- # IFS=: 00:07:54.198 01:29:07 -- accel/accel.sh@20 -- # read -r var val 00:07:54.198 01:29:07 -- accel/accel.sh@21 -- # val= 00:07:54.198 01:29:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.198 01:29:07 -- accel/accel.sh@20 -- # IFS=: 00:07:54.198 01:29:07 -- accel/accel.sh@20 -- # read -r var val 00:07:55.571 01:29:08 -- accel/accel.sh@21 -- # val= 00:07:55.571 01:29:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.571 01:29:08 -- accel/accel.sh@20 -- # IFS=: 00:07:55.571 01:29:08 -- accel/accel.sh@20 -- # read -r var val 00:07:55.571 01:29:08 -- accel/accel.sh@21 -- # val= 00:07:55.571 01:29:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.571 01:29:08 -- accel/accel.sh@20 -- # IFS=: 00:07:55.571 01:29:08 -- accel/accel.sh@20 -- # read -r var val 00:07:55.571 01:29:08 -- accel/accel.sh@21 -- # val= 00:07:55.571 01:29:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.571 01:29:08 -- accel/accel.sh@20 -- # IFS=: 00:07:55.571 01:29:08 -- accel/accel.sh@20 -- # read -r var val 00:07:55.571 01:29:08 -- accel/accel.sh@21 -- # val= 00:07:55.571 01:29:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.571 01:29:08 
-- accel/accel.sh@20 -- # IFS=: 00:07:55.571 01:29:08 -- accel/accel.sh@20 -- # read -r var val 00:07:55.571 01:29:08 -- accel/accel.sh@21 -- # val= 00:07:55.571 01:29:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.571 01:29:08 -- accel/accel.sh@20 -- # IFS=: 00:07:55.571 01:29:08 -- accel/accel.sh@20 -- # read -r var val 00:07:55.571 01:29:08 -- accel/accel.sh@21 -- # val= 00:07:55.571 01:29:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.571 01:29:08 -- accel/accel.sh@20 -- # IFS=: 00:07:55.571 01:29:08 -- accel/accel.sh@20 -- # read -r var val 00:07:55.571 01:29:08 -- accel/accel.sh@21 -- # val= 00:07:55.571 01:29:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:55.571 01:29:08 -- accel/accel.sh@20 -- # IFS=: 00:07:55.571 01:29:08 -- accel/accel.sh@20 -- # read -r var val 00:07:55.571 01:29:08 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:55.571 01:29:08 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:55.571 01:29:08 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:55.571 00:07:55.571 real 0m2.864s 00:07:55.571 user 0m2.578s 00:07:55.571 sys 0m0.279s 00:07:55.571 01:29:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:55.571 01:29:08 -- common/autotest_common.sh@10 -- # set +x 00:07:55.571 ************************************ 00:07:55.571 END TEST accel_deomp_full_mthread 00:07:55.571 ************************************ 00:07:55.571 01:29:08 -- accel/accel.sh@116 -- # [[ n == y ]] 00:07:55.571 01:29:08 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:55.571 01:29:08 -- accel/accel.sh@129 -- # build_accel_config 00:07:55.571 01:29:08 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:55.571 01:29:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:55.571 01:29:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:55.571 01:29:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:55.571 01:29:08 
-- common/autotest_common.sh@10 -- # set +x 00:07:55.571 01:29:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:55.572 01:29:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:55.572 01:29:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:55.572 01:29:08 -- accel/accel.sh@41 -- # local IFS=, 00:07:55.572 01:29:08 -- accel/accel.sh@42 -- # jq -r . 00:07:55.572 ************************************ 00:07:55.572 START TEST accel_dif_functional_tests 00:07:55.572 ************************************ 00:07:55.572 01:29:08 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:55.572 [2024-07-23 01:29:08.449796] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:55.572 [2024-07-23 01:29:08.449883] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3672415 ] 00:07:55.572 EAL: No free 2048 kB hugepages reported on node 1 00:07:55.572 [2024-07-23 01:29:08.511732] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:55.572 [2024-07-23 01:29:08.603375] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:55.572 [2024-07-23 01:29:08.603426] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:55.572 [2024-07-23 01:29:08.603444] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.830 00:07:55.831 00:07:55.831 CUnit - A unit testing framework for C - Version 2.1-3 00:07:55.831 http://cunit.sourceforge.net/ 00:07:55.831 00:07:55.831 00:07:55.831 Suite: accel_dif 00:07:55.831 Test: verify: DIF generated, GUARD check ...passed 00:07:55.831 Test: verify: DIF generated, APPTAG check ...passed 00:07:55.831 Test: verify: DIF generated, REFTAG check ...passed 00:07:55.831 Test: verify: DIF not generated, GUARD check ...[2024-07-23 01:29:08.695006] dif.c: 
777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:55.831 [2024-07-23 01:29:08.695084] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:55.831 passed 00:07:55.831 Test: verify: DIF not generated, APPTAG check ...[2024-07-23 01:29:08.695120] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:55.831 [2024-07-23 01:29:08.695148] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:55.831 passed 00:07:55.831 Test: verify: DIF not generated, REFTAG check ...[2024-07-23 01:29:08.695177] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:55.831 [2024-07-23 01:29:08.695202] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:55.831 passed 00:07:55.831 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:55.831 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-23 01:29:08.695267] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:55.831 passed 00:07:55.831 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:55.831 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:55.831 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:55.831 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-23 01:29:08.695399] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:55.831 passed 00:07:55.831 Test: generate copy: DIF generated, GUARD check ...passed 00:07:55.831 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:55.831 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:55.831 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:55.831 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 
00:07:55.831 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:55.831 Test: generate copy: iovecs-len validate ...[2024-07-23 01:29:08.695643] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:07:55.831 passed 00:07:55.831 Test: generate copy: buffer alignment validate ...passed 00:07:55.831 00:07:55.831 Run Summary: Type Total Ran Passed Failed Inactive 00:07:55.831 suites 1 1 n/a 0 0 00:07:55.831 tests 20 20 20 0 0 00:07:55.831 asserts 204 204 204 0 n/a 00:07:55.831 00:07:55.831 Elapsed time = 0.002 seconds 00:07:55.831 00:07:55.831 real 0m0.496s 00:07:55.831 user 0m0.776s 00:07:55.831 sys 0m0.175s 00:07:55.831 01:29:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:55.831 01:29:08 -- common/autotest_common.sh@10 -- # set +x 00:07:55.831 ************************************ 00:07:55.831 END TEST accel_dif_functional_tests 00:07:55.831 ************************************ 00:07:56.090 00:07:56.090 real 0m59.769s 00:07:56.090 user 1m7.550s 00:07:56.090 sys 0m7.224s 00:07:56.090 01:29:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:56.090 01:29:08 -- common/autotest_common.sh@10 -- # set +x 00:07:56.090 ************************************ 00:07:56.090 END TEST accel 00:07:56.090 ************************************ 00:07:56.090 01:29:08 -- spdk/autotest.sh@190 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:56.090 01:29:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:56.090 01:29:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:56.090 01:29:08 -- common/autotest_common.sh@10 -- # set +x 00:07:56.090 ************************************ 00:07:56.090 START TEST accel_rpc 00:07:56.090 ************************************ 00:07:56.090 01:29:08 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 
00:07:56.090 * Looking for test storage... 00:07:56.090 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:56.090 01:29:09 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:56.090 01:29:09 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=3672489 00:07:56.090 01:29:09 -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:56.090 01:29:09 -- accel/accel_rpc.sh@15 -- # waitforlisten 3672489 00:07:56.090 01:29:09 -- common/autotest_common.sh@819 -- # '[' -z 3672489 ']' 00:07:56.090 01:29:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:56.090 01:29:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:56.091 01:29:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:56.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:56.091 01:29:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:56.091 01:29:09 -- common/autotest_common.sh@10 -- # set +x 00:07:56.091 [2024-07-23 01:29:09.055685] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:07:56.091 [2024-07-23 01:29:09.055781] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3672489 ] 00:07:56.091 EAL: No free 2048 kB hugepages reported on node 1 00:07:56.091 [2024-07-23 01:29:09.114631] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.349 [2024-07-23 01:29:09.197541] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:56.349 [2024-07-23 01:29:09.197723] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.349 01:29:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:56.349 01:29:09 -- common/autotest_common.sh@852 -- # return 0 00:07:56.349 01:29:09 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:56.349 01:29:09 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:56.349 01:29:09 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:56.349 01:29:09 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:56.349 01:29:09 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:56.349 01:29:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:56.349 01:29:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:56.349 01:29:09 -- common/autotest_common.sh@10 -- # set +x 00:07:56.349 ************************************ 00:07:56.349 START TEST accel_assign_opcode 00:07:56.349 ************************************ 00:07:56.349 01:29:09 -- common/autotest_common.sh@1104 -- # accel_assign_opcode_test_suite 00:07:56.349 01:29:09 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:56.349 01:29:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:56.349 01:29:09 -- common/autotest_common.sh@10 -- # set +x 00:07:56.349 [2024-07-23 01:29:09.250212] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation 
copy will be assigned to module incorrect 00:07:56.349 01:29:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:56.349 01:29:09 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:56.349 01:29:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:56.349 01:29:09 -- common/autotest_common.sh@10 -- # set +x 00:07:56.349 [2024-07-23 01:29:09.258229] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:56.349 01:29:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:56.349 01:29:09 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:56.349 01:29:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:56.349 01:29:09 -- common/autotest_common.sh@10 -- # set +x 00:07:56.607 01:29:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:56.608 01:29:09 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:56.608 01:29:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:56.608 01:29:09 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:56.608 01:29:09 -- common/autotest_common.sh@10 -- # set +x 00:07:56.608 01:29:09 -- accel/accel_rpc.sh@42 -- # grep software 00:07:56.608 01:29:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:56.608 software 00:07:56.608 00:07:56.608 real 0m0.299s 00:07:56.608 user 0m0.044s 00:07:56.608 sys 0m0.007s 00:07:56.608 01:29:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:56.608 01:29:09 -- common/autotest_common.sh@10 -- # set +x 00:07:56.608 ************************************ 00:07:56.608 END TEST accel_assign_opcode 00:07:56.608 ************************************ 00:07:56.608 01:29:09 -- accel/accel_rpc.sh@55 -- # killprocess 3672489 00:07:56.608 01:29:09 -- common/autotest_common.sh@926 -- # '[' -z 3672489 ']' 00:07:56.608 01:29:09 -- common/autotest_common.sh@930 -- # kill -0 3672489 00:07:56.608 01:29:09 -- common/autotest_common.sh@931 -- # uname 00:07:56.608 
01:29:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:56.608 01:29:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3672489 00:07:56.608 01:29:09 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:56.608 01:29:09 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:56.608 01:29:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3672489' 00:07:56.608 killing process with pid 3672489 00:07:56.608 01:29:09 -- common/autotest_common.sh@945 -- # kill 3672489 00:07:56.608 01:29:09 -- common/autotest_common.sh@950 -- # wait 3672489 00:07:57.173 00:07:57.173 real 0m1.048s 00:07:57.173 user 0m0.966s 00:07:57.173 sys 0m0.405s 00:07:57.173 01:29:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:57.173 01:29:10 -- common/autotest_common.sh@10 -- # set +x 00:07:57.173 ************************************ 00:07:57.173 END TEST accel_rpc 00:07:57.173 ************************************ 00:07:57.173 01:29:10 -- spdk/autotest.sh@191 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:57.173 01:29:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:57.173 01:29:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:57.173 01:29:10 -- common/autotest_common.sh@10 -- # set +x 00:07:57.173 ************************************ 00:07:57.173 START TEST app_cmdline 00:07:57.173 ************************************ 00:07:57.173 01:29:10 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:57.173 * Looking for test storage... 
00:07:57.173 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:57.173 01:29:10 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:57.173 01:29:10 -- app/cmdline.sh@17 -- # spdk_tgt_pid=3672695 00:07:57.174 01:29:10 -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:57.174 01:29:10 -- app/cmdline.sh@18 -- # waitforlisten 3672695 00:07:57.174 01:29:10 -- common/autotest_common.sh@819 -- # '[' -z 3672695 ']' 00:07:57.174 01:29:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:57.174 01:29:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:57.174 01:29:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:57.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:57.174 01:29:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:57.174 01:29:10 -- common/autotest_common.sh@10 -- # set +x 00:07:57.174 [2024-07-23 01:29:10.130401] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:07:57.174 [2024-07-23 01:29:10.130485] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3672695 ] 00:07:57.174 EAL: No free 2048 kB hugepages reported on node 1 00:07:57.174 [2024-07-23 01:29:10.189036] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.174 [2024-07-23 01:29:10.271190] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:57.174 [2024-07-23 01:29:10.271364] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.117 01:29:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:58.117 01:29:11 -- common/autotest_common.sh@852 -- # return 0 00:07:58.117 01:29:11 -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:58.433 { 00:07:58.433 "version": "SPDK v24.01.1-pre git sha1 dbef7efac", 00:07:58.433 "fields": { 00:07:58.433 "major": 24, 00:07:58.433 "minor": 1, 00:07:58.433 "patch": 1, 00:07:58.433 "suffix": "-pre", 00:07:58.433 "commit": "dbef7efac" 00:07:58.433 } 00:07:58.433 } 00:07:58.433 01:29:11 -- app/cmdline.sh@22 -- # expected_methods=() 00:07:58.433 01:29:11 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:58.433 01:29:11 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:58.433 01:29:11 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:58.433 01:29:11 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:58.433 01:29:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:58.433 01:29:11 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:58.433 01:29:11 -- common/autotest_common.sh@10 -- # set +x 00:07:58.433 01:29:11 -- app/cmdline.sh@26 -- # sort 00:07:58.433 01:29:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:58.433 
01:29:11 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:58.433 01:29:11 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:58.433 01:29:11 -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:58.433 01:29:11 -- common/autotest_common.sh@640 -- # local es=0 00:07:58.433 01:29:11 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:58.433 01:29:11 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:58.433 01:29:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:58.433 01:29:11 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:58.433 01:29:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:58.433 01:29:11 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:58.433 01:29:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:58.433 01:29:11 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:58.433 01:29:11 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:58.433 01:29:11 -- common/autotest_common.sh@643 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:58.716 request: 00:07:58.716 { 00:07:58.716 "method": "env_dpdk_get_mem_stats", 00:07:58.716 "req_id": 1 00:07:58.716 } 00:07:58.716 Got JSON-RPC error response 00:07:58.716 response: 00:07:58.716 { 00:07:58.716 "code": -32601, 00:07:58.716 "message": "Method not found" 00:07:58.716 } 00:07:58.716 01:29:11 -- common/autotest_common.sh@643 
-- # es=1 00:07:58.716 01:29:11 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:58.716 01:29:11 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:58.716 01:29:11 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:58.716 01:29:11 -- app/cmdline.sh@1 -- # killprocess 3672695 00:07:58.716 01:29:11 -- common/autotest_common.sh@926 -- # '[' -z 3672695 ']' 00:07:58.716 01:29:11 -- common/autotest_common.sh@930 -- # kill -0 3672695 00:07:58.716 01:29:11 -- common/autotest_common.sh@931 -- # uname 00:07:58.716 01:29:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:58.716 01:29:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3672695 00:07:58.716 01:29:11 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:58.716 01:29:11 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:58.716 01:29:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3672695' 00:07:58.716 killing process with pid 3672695 00:07:58.716 01:29:11 -- common/autotest_common.sh@945 -- # kill 3672695 00:07:58.716 01:29:11 -- common/autotest_common.sh@950 -- # wait 3672695 00:07:58.974 00:07:58.974 real 0m2.000s 00:07:58.974 user 0m2.525s 00:07:58.974 sys 0m0.471s 00:07:58.974 01:29:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:58.974 01:29:12 -- common/autotest_common.sh@10 -- # set +x 00:07:58.974 ************************************ 00:07:58.974 END TEST app_cmdline 00:07:58.974 ************************************ 00:07:58.974 01:29:12 -- spdk/autotest.sh@192 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:58.974 01:29:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:58.974 01:29:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:58.974 01:29:12 -- common/autotest_common.sh@10 -- # set +x 00:07:58.974 ************************************ 00:07:58.974 START TEST version 00:07:58.974 
************************************ 00:07:58.974 01:29:12 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:59.234 * Looking for test storage... 00:07:59.234 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:59.234 01:29:12 -- app/version.sh@17 -- # get_header_version major 00:07:59.234 01:29:12 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:59.234 01:29:12 -- app/version.sh@14 -- # cut -f2 00:07:59.234 01:29:12 -- app/version.sh@14 -- # tr -d '"' 00:07:59.234 01:29:12 -- app/version.sh@17 -- # major=24 00:07:59.234 01:29:12 -- app/version.sh@18 -- # get_header_version minor 00:07:59.234 01:29:12 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:59.234 01:29:12 -- app/version.sh@14 -- # cut -f2 00:07:59.234 01:29:12 -- app/version.sh@14 -- # tr -d '"' 00:07:59.234 01:29:12 -- app/version.sh@18 -- # minor=1 00:07:59.234 01:29:12 -- app/version.sh@19 -- # get_header_version patch 00:07:59.234 01:29:12 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:59.234 01:29:12 -- app/version.sh@14 -- # cut -f2 00:07:59.234 01:29:12 -- app/version.sh@14 -- # tr -d '"' 00:07:59.234 01:29:12 -- app/version.sh@19 -- # patch=1 00:07:59.234 01:29:12 -- app/version.sh@20 -- # get_header_version suffix 00:07:59.234 01:29:12 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:59.234 01:29:12 -- app/version.sh@14 -- # cut -f2 00:07:59.234 01:29:12 -- app/version.sh@14 -- # tr -d '"' 00:07:59.234 01:29:12 -- app/version.sh@20 -- # suffix=-pre 00:07:59.234 01:29:12 -- 
app/version.sh@22 -- # version=24.1 00:07:59.234 01:29:12 -- app/version.sh@25 -- # (( patch != 0 )) 00:07:59.234 01:29:12 -- app/version.sh@25 -- # version=24.1.1 00:07:59.234 01:29:12 -- app/version.sh@28 -- # version=24.1.1rc0 00:07:59.234 01:29:12 -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:59.234 01:29:12 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:59.234 01:29:12 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:07:59.234 01:29:12 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:07:59.234 00:07:59.234 real 0m0.099s 00:07:59.234 user 0m0.054s 00:07:59.234 sys 0m0.067s 00:07:59.234 01:29:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:59.234 01:29:12 -- common/autotest_common.sh@10 -- # set +x 00:07:59.234 ************************************ 00:07:59.234 END TEST version 00:07:59.234 ************************************ 00:07:59.234 01:29:12 -- spdk/autotest.sh@194 -- # '[' 0 -eq 1 ']' 00:07:59.234 01:29:12 -- spdk/autotest.sh@204 -- # uname -s 00:07:59.234 01:29:12 -- spdk/autotest.sh@204 -- # [[ Linux == Linux ]] 00:07:59.234 01:29:12 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:07:59.234 01:29:12 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:07:59.234 01:29:12 -- spdk/autotest.sh@217 -- # '[' 0 -eq 1 ']' 00:07:59.234 01:29:12 -- spdk/autotest.sh@264 -- # '[' 0 -eq 1 ']' 00:07:59.234 01:29:12 -- spdk/autotest.sh@268 -- # timing_exit lib 00:07:59.234 01:29:12 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:59.234 01:29:12 -- common/autotest_common.sh@10 -- # set +x 00:07:59.234 01:29:12 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:59.234 01:29:12 -- spdk/autotest.sh@278 -- 
# '[' 0 -eq 1 ']' 00:07:59.234 01:29:12 -- spdk/autotest.sh@287 -- # '[' 1 -eq 1 ']' 00:07:59.234 01:29:12 -- spdk/autotest.sh@288 -- # export NET_TYPE 00:07:59.234 01:29:12 -- spdk/autotest.sh@291 -- # '[' tcp = rdma ']' 00:07:59.234 01:29:12 -- spdk/autotest.sh@294 -- # '[' tcp = tcp ']' 00:07:59.234 01:29:12 -- spdk/autotest.sh@295 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:59.234 01:29:12 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:59.234 01:29:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:59.234 01:29:12 -- common/autotest_common.sh@10 -- # set +x 00:07:59.234 ************************************ 00:07:59.234 START TEST nvmf_tcp 00:07:59.234 ************************************ 00:07:59.234 01:29:12 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:59.234 * Looking for test storage... 00:07:59.234 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:59.234 01:29:12 -- nvmf/nvmf.sh@10 -- # uname -s 00:07:59.234 01:29:12 -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']'
00:07:59.234 01:29:12 -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:07:59.234 01:29:12 -- nvmf/common.sh@7 -- # uname -s
00:07:59.234 01:29:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:07:59.234 01:29:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:07:59.234 01:29:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:07:59.234 01:29:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:07:59.234 01:29:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:07:59.234 01:29:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:07:59.234 01:29:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:07:59.234 01:29:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:07:59.234 01:29:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:07:59.234 01:29:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:07:59.234 01:29:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:07:59.234 01:29:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:07:59.234 01:29:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:07:59.234 01:29:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:07:59.234 01:29:12 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:07:59.234 01:29:12 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:07:59.234 01:29:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:07:59.234 01:29:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:07:59.234 01:29:12 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:07:59.234 01:29:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:59.234 01:29:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:59.234 01:29:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:59.234 01:29:12 -- paths/export.sh@5 -- # export PATH
00:07:59.234 01:29:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:59.234 01:29:12 -- nvmf/common.sh@46 -- # : 0
00:07:59.234 01:29:12 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:07:59.234 01:29:12 -- nvmf/common.sh@48 -- # build_nvmf_app_args
00:07:59.234 01:29:12 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:07:59.234 01:29:12 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:07:59.234 01:29:12 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:07:59.234 01:29:12 -- nvmf/common.sh@32 -- # '[' -n '' ']'
00:07:59.234 01:29:12 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:07:59.234 01:29:12 -- nvmf/common.sh@50 -- # have_pci_nics=0
00:07:59.234 01:29:12 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT
00:07:59.235 01:29:12 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@")
00:07:59.235 01:29:12 -- nvmf/nvmf.sh@20 -- # timing_enter target
00:07:59.235 01:29:12 -- common/autotest_common.sh@712 -- # xtrace_disable
00:07:59.235 01:29:12 -- common/autotest_common.sh@10 -- # set +x
00:07:59.235 01:29:12 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]]
00:07:59.235 01:29:12 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp
00:07:59.235 01:29:12 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']'
00:07:59.235 01:29:12 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:07:59.235 01:29:12 -- common/autotest_common.sh@10 -- # set +x
00:07:59.235 ************************************
00:07:59.235 START TEST nvmf_example
00:07:59.235 ************************************
00:07:59.235 01:29:12 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp
00:07:59.235 * Looking for test storage...
00:07:59.235 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:07:59.235 01:29:12 -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:07:59.235 01:29:12 -- nvmf/common.sh@7 -- # uname -s
00:07:59.235 01:29:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:07:59.235 01:29:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:07:59.235 01:29:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:07:59.235 01:29:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:07:59.235 01:29:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:07:59.235 01:29:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:07:59.235 01:29:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:07:59.235 01:29:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:07:59.235 01:29:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:07:59.235 01:29:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:07:59.235 01:29:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:07:59.235 01:29:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:07:59.235 01:29:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:07:59.235 01:29:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:07:59.235 01:29:12 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:07:59.235 01:29:12 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:07:59.235 01:29:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:07:59.235 01:29:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:07:59.235 01:29:12 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:07:59.235 01:29:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:59.235 01:29:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:59.235 01:29:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:59.235 01:29:12 -- paths/export.sh@5 -- # export PATH
00:07:59.235 01:29:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:59.235 01:29:12 -- nvmf/common.sh@46 -- # : 0
00:07:59.235 01:29:12 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:07:59.235 01:29:12 -- nvmf/common.sh@48 -- # build_nvmf_app_args
00:07:59.235 01:29:12 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:07:59.235 01:29:12 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:07:59.235 01:29:12 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:07:59.235 01:29:12 -- nvmf/common.sh@32 -- # '[' -n '' ']'
00:07:59.235 01:29:12 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:07:59.235 01:29:12 -- nvmf/common.sh@50 -- # have_pci_nics=0
00:07:59.235 01:29:12 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf")
00:07:59.235 01:29:12 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64
00:07:59.235 01:29:12 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512
00:07:59.235 01:29:12 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args
00:07:59.235 01:29:12 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']'
00:07:59.235 01:29:12 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000)
00:07:59.235 01:29:12 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}")
00:07:59.235 01:29:12 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test
00:07:59.235 01:29:12 -- common/autotest_common.sh@712 -- # xtrace_disable
00:07:59.235 01:29:12 -- common/autotest_common.sh@10 -- # set +x
00:07:59.235 01:29:12 -- target/nvmf_example.sh@41 -- # nvmftestinit
00:07:59.235 01:29:12 -- nvmf/common.sh@429 -- # '[' -z tcp ']'
00:07:59.235 01:29:12 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:07:59.235 01:29:12 -- nvmf/common.sh@436 -- # prepare_net_devs
00:07:59.235 01:29:12 -- nvmf/common.sh@398 -- # local -g is_hw=no
00:07:59.235 01:29:12 -- nvmf/common.sh@400 -- # remove_spdk_ns
00:07:59.235 01:29:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:07:59.235 01:29:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:07:59.235 01:29:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:07:59.494 01:29:12 -- nvmf/common.sh@402 -- # [[ phy != virt ]]
00:07:59.494 01:29:12 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs
00:07:59.494 01:29:12 -- nvmf/common.sh@284 -- # xtrace_disable
00:07:59.494 01:29:12 -- common/autotest_common.sh@10 -- # set +x
00:08:01.397 01:29:14 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci
00:08:01.397 01:29:14 -- nvmf/common.sh@290 -- # pci_devs=()
00:08:01.397 01:29:14 -- nvmf/common.sh@290 -- # local -a pci_devs
00:08:01.397 01:29:14 -- nvmf/common.sh@291 -- # pci_net_devs=()
00:08:01.397 01:29:14 -- nvmf/common.sh@291 -- # local -a pci_net_devs
00:08:01.397 01:29:14 -- nvmf/common.sh@292 -- # pci_drivers=()
00:08:01.397 01:29:14 -- nvmf/common.sh@292 -- # local -A pci_drivers
00:08:01.397 01:29:14 -- nvmf/common.sh@294 -- # net_devs=()
00:08:01.397 01:29:14 -- nvmf/common.sh@294 -- # local -ga net_devs
00:08:01.397 01:29:14 -- nvmf/common.sh@295 -- # e810=()
00:08:01.397 01:29:14 -- nvmf/common.sh@295 -- # local -ga e810
00:08:01.397 01:29:14 -- nvmf/common.sh@296 -- # x722=()
00:08:01.397 01:29:14 -- nvmf/common.sh@296 -- # local -ga x722
00:08:01.397 01:29:14 -- nvmf/common.sh@297 -- # mlx=()
00:08:01.397 01:29:14 -- nvmf/common.sh@297 -- # local -ga mlx
00:08:01.397 01:29:14 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:08:01.397 01:29:14 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:08:01.397 01:29:14 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:08:01.397 01:29:14 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:08:01.397 01:29:14 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:08:01.397 01:29:14 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:08:01.397 01:29:14 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:08:01.397 01:29:14 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:08:01.397 01:29:14 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:08:01.397 01:29:14 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:08:01.397 01:29:14 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:08:01.397 01:29:14 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}")
00:08:01.397 01:29:14 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]]
00:08:01.397 01:29:14 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]]
00:08:01.397 01:29:14 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]]
00:08:01.397 01:29:14 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}")
00:08:01.397 01:29:14 -- nvmf/common.sh@334 -- # (( 2 == 0 ))
00:08:01.397 01:29:14 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}"
00:08:01.397 01:29:14 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:08:01.397 Found 0000:0a:00.0 (0x8086 - 0x159b)
00:08:01.397 01:29:14 -- nvmf/common.sh@341 -- # [[ ice == unknown ]]
00:08:01.397 01:29:14 -- nvmf/common.sh@345 -- # [[ ice == unbound ]]
00:08:01.397 01:29:14 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:08:01.397 01:29:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:08:01.397 01:29:14 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]]
00:08:01.397 01:29:14 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}"
00:08:01.397 01:29:14 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:08:01.397 Found 0000:0a:00.1 (0x8086 - 0x159b)
00:08:01.397 01:29:14 -- nvmf/common.sh@341 -- # [[ ice == unknown ]]
00:08:01.397 01:29:14 -- nvmf/common.sh@345 -- # [[ ice == unbound ]]
00:08:01.397 01:29:14 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:08:01.397 01:29:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:08:01.397 01:29:14 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]]
00:08:01.397 01:29:14 -- nvmf/common.sh@365 -- # (( 0 > 0 ))
00:08:01.397 01:29:14 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]]
00:08:01.397 01:29:14 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]]
00:08:01.397 01:29:14 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}"
00:08:01.397 01:29:14 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:08:01.397 01:29:14 -- nvmf/common.sh@383 -- # (( 1 == 0 ))
00:08:01.397 01:29:14 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:08:01.397 01:29:14 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:08:01.397 Found net devices under 0000:0a:00.0: cvl_0_0
00:08:01.397 01:29:14 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}")
00:08:01.397 01:29:14 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}"
00:08:01.397 01:29:14 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:08:01.397 01:29:14 -- nvmf/common.sh@383 -- # (( 1 == 0 ))
00:08:01.397 01:29:14 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:08:01.397 01:29:14 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:08:01.397 Found net devices under 0000:0a:00.1: cvl_0_1
00:08:01.397 01:29:14 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}")
00:08:01.397 01:29:14 -- nvmf/common.sh@392 -- # (( 2 == 0 ))
00:08:01.397 01:29:14 -- nvmf/common.sh@402 -- # is_hw=yes
00:08:01.397 01:29:14 -- nvmf/common.sh@404 -- # [[ yes == yes ]]
00:08:01.397 01:29:14 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]]
00:08:01.397 01:29:14 -- nvmf/common.sh@406 -- # nvmf_tcp_init
00:08:01.397 01:29:14 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1
00:08:01.397 01:29:14 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:08:01.397 01:29:14 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:08:01.397 01:29:14 -- nvmf/common.sh@233 -- # (( 2 > 1 ))
00:08:01.397 01:29:14 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:08:01.397 01:29:14 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:08:01.397 01:29:14 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP=
00:08:01.397 01:29:14 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:08:01.397 01:29:14 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:08:01.397 01:29:14 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0
00:08:01.397 01:29:14 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1
00:08:01.397 01:29:14 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk
00:08:01.397 01:29:14 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:08:01.397 01:29:14 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:08:01.397 01:29:14 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:08:01.397 01:29:14 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up
00:08:01.655 01:29:14 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:08:01.655 01:29:14 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:08:01.655 01:29:14 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:08:01.655 01:29:14 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2
00:08:01.655 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:08:01.655 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.168 ms
00:08:01.655
00:08:01.655 --- 10.0.0.2 ping statistics ---
00:08:01.655 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:01.655 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms
00:08:01.655 01:29:14 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:08:01.655 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:08:01.655 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms
00:08:01.655
00:08:01.655 --- 10.0.0.1 ping statistics ---
00:08:01.655 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:01.655 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms
00:08:01.655 01:29:14 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:08:01.655 01:29:14 -- nvmf/common.sh@410 -- # return 0
00:08:01.655 01:29:14 -- nvmf/common.sh@438 -- # '[' '' == iso ']'
00:08:01.655 01:29:14 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:08:01.655 01:29:14 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:08:01.655 01:29:14 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:08:01.655 01:29:14 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:08:01.655 01:29:14 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:08:01.655 01:29:14 -- nvmf/common.sh@462 -- # modprobe nvme-tcp
00:08:01.655 01:29:14 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF'
00:08:01.655 01:29:14 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example
00:08:01.655 01:29:14 -- common/autotest_common.sh@712 -- # xtrace_disable
00:08:01.655 01:29:14 -- common/autotest_common.sh@10 -- # set +x
00:08:01.655 01:29:14 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']'
00:08:01.655 01:29:14 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}")
00:08:01.655 01:29:14 -- target/nvmf_example.sh@34 -- # nvmfpid=3674730
00:08:01.655 01:29:14 -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF
00:08:01.655 01:29:14 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:08:01.655 01:29:14 -- target/nvmf_example.sh@36 -- # waitforlisten 3674730
00:08:01.655 01:29:14 -- common/autotest_common.sh@819 -- # '[' -z 3674730 ']'
00:08:01.655 01:29:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:01.655 01:29:14 -- common/autotest_common.sh@824 -- # local max_retries=100
00:08:01.655 01:29:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:01.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:01.655 01:29:14 -- common/autotest_common.sh@828 -- # xtrace_disable
00:08:01.655 01:29:14 -- common/autotest_common.sh@10 -- # set +x
00:08:01.655 EAL: No free 2048 kB hugepages reported on node 1
00:08:02.590 01:29:15 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:08:02.590 01:29:15 -- common/autotest_common.sh@852 -- # return 0
00:08:02.590 01:29:15 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example
00:08:02.590 01:29:15 -- common/autotest_common.sh@718 -- # xtrace_disable
00:08:02.590 01:29:15 -- common/autotest_common.sh@10 -- # set +x
00:08:02.590 01:29:15 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:08:02.590 01:29:15 -- common/autotest_common.sh@551 -- # xtrace_disable
00:08:02.590 01:29:15 -- common/autotest_common.sh@10 -- # set +x
00:08:02.590 01:29:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:08:02.590 01:29:15 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512
00:08:02.590 01:29:15 -- common/autotest_common.sh@551 -- # xtrace_disable
00:08:02.590 01:29:15 -- common/autotest_common.sh@10 -- # set +x
00:08:02.590 01:29:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:08:02.590 01:29:15 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 '
00:08:02.590 01:29:15 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:08:02.590 01:29:15 -- common/autotest_common.sh@551 -- # xtrace_disable
00:08:02.590 01:29:15 -- common/autotest_common.sh@10 -- # set +x
00:08:02.590 01:29:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:08:02.590 01:29:15 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs
00:08:02.590 01:29:15 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:08:02.590 01:29:15 -- common/autotest_common.sh@551 -- # xtrace_disable
00:08:02.590 01:29:15 -- common/autotest_common.sh@10 -- # set +x
00:08:02.590 01:29:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:08:02.590 01:29:15 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:08:02.590 01:29:15 -- common/autotest_common.sh@551 -- # xtrace_disable
00:08:02.590 01:29:15 -- common/autotest_common.sh@10 -- # set +x
00:08:02.590 01:29:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:08:02.590 01:29:15 -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
00:08:02.590 01:29:15 -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:08:02.590 EAL: No free 2048 kB hugepages reported on node 1
00:08:14.811 Initializing NVMe Controllers
00:08:14.811 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:08:14.811 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:08:14.811 Initialization complete. Launching workers.
00:08:14.811 ========================================================
00:08:14.811 Latency(us)
00:08:14.811 Device Information : IOPS MiB/s Average min max
00:08:14.811 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14003.50 54.70 4569.88 657.68 17228.41
00:08:14.811 ========================================================
00:08:14.811 Total : 14003.50 54.70 4569.88 657.68 17228.41
00:08:14.811
00:08:14.811 01:29:25 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT
00:08:14.811 01:29:25 -- target/nvmf_example.sh@66 -- # nvmftestfini
00:08:14.811 01:29:25 -- nvmf/common.sh@476 -- # nvmfcleanup
00:08:14.811 01:29:25 -- nvmf/common.sh@116 -- # sync
00:08:14.811 01:29:25 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:08:14.811 01:29:25 -- nvmf/common.sh@119 -- # set +e
00:08:14.811 01:29:25 -- nvmf/common.sh@120 -- # for i in {1..20}
00:08:14.811 01:29:25 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:08:14.811 rmmod nvme_tcp
00:08:14.811 rmmod nvme_fabrics
00:08:14.811 rmmod nvme_keyring
00:08:14.811 01:29:26 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:08:14.811 01:29:26 -- nvmf/common.sh@123 -- # set -e
00:08:14.811 01:29:26 -- nvmf/common.sh@124 -- # return 0
00:08:14.811 01:29:26 -- nvmf/common.sh@477 -- # '[' -n 3674730 ']'
00:08:14.811 01:29:26 -- nvmf/common.sh@478 -- # killprocess 3674730
00:08:14.811 01:29:26 -- common/autotest_common.sh@926 -- # '[' -z 3674730 ']'
00:08:14.811 01:29:26 -- common/autotest_common.sh@930 -- # kill -0 3674730
00:08:14.811 01:29:26 -- common/autotest_common.sh@931 -- # uname
00:08:14.811 01:29:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:08:14.811 01:29:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3674730
00:08:14.811 01:29:26 -- common/autotest_common.sh@932 -- # process_name=nvmf
00:08:14.811 01:29:26 -- common/autotest_common.sh@936 -- # '[' nvmf = sudo ']'
00:08:14.811 01:29:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3674730'
00:08:14.811 killing process with pid 3674730
00:08:14.811 01:29:26 -- common/autotest_common.sh@945 -- # kill 3674730
00:08:14.811 01:29:26 -- common/autotest_common.sh@950 -- # wait 3674730
00:08:14.811 nvmf threads initialize successfully
00:08:14.811 bdev subsystem init successfully
00:08:14.811 created a nvmf target service
00:08:14.811 create targets's poll groups done
00:08:14.811 all subsystems of target started
00:08:14.811 nvmf target is running
00:08:14.811 all subsystems of target stopped
00:08:14.811 destroy targets's poll groups done
00:08:14.811 destroyed the nvmf target service
00:08:14.811 bdev subsystem finish successfully
00:08:14.811 nvmf threads destroy successfully
00:08:14.811 01:29:26 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:08:14.811 01:29:26 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:08:14.811 01:29:26 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:08:14.811 01:29:26 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:08:14.811 01:29:26 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:08:14.811 01:29:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:08:14.811 01:29:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:08:14.811 01:29:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:08:15.380 01:29:28 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1
00:08:15.380 01:29:28 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test
00:08:15.380 01:29:28 -- common/autotest_common.sh@718 -- # xtrace_disable
00:08:15.380 01:29:28 -- common/autotest_common.sh@10 -- # set +x
00:08:15.380
00:08:15.380 real 0m16.116s
00:08:15.380 user 0m40.857s
00:08:15.380 sys 0m4.928s
00:08:15.380 01:29:28 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:08:15.380 01:29:28 -- common/autotest_common.sh@10 -- # set +x
00:08:15.380 ************************************
00:08:15.380 END TEST nvmf_example
00:08:15.380 ************************************
00:08:15.380 01:29:28 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp
00:08:15.380 01:29:28 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']'
00:08:15.380 01:29:28 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:08:15.380 01:29:28 -- common/autotest_common.sh@10 -- # set +x
00:08:15.380 ************************************
00:08:15.380 START TEST nvmf_filesystem
00:08:15.380 ************************************
00:08:15.380 01:29:28 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp
00:08:15.380 * Looking for test storage...
00:08:15.380 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:08:15.380 01:29:28 -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh
00:08:15.380 01:29:28 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd
00:08:15.380 01:29:28 -- common/autotest_common.sh@34 -- # set -e
00:08:15.380 01:29:28 -- common/autotest_common.sh@35 -- # shopt -s nullglob
00:08:15.380 01:29:28 -- common/autotest_common.sh@36 -- # shopt -s extglob
00:08:15.380 01:29:28 -- common/autotest_common.sh@38 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]]
00:08:15.380 01:29:28 -- common/autotest_common.sh@39 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh
00:08:15.380 01:29:28 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR=
00:08:15.380 01:29:28 -- common/build_config.sh@2 -- # CONFIG_ASAN=n
00:08:15.380 01:29:28 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n
00:08:15.380 01:29:28 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y
00:08:15.380 01:29:28 -- common/build_config.sh@5 -- # CONFIG_USDT=n
00:08:15.380 01:29:28 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n
00:08:15.380 01:29:28 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local
00:08:15.380 01:29:28 -- common/build_config.sh@8 -- # CONFIG_RBD=n
00:08:15.380 01:29:28 -- common/build_config.sh@9 -- # CONFIG_LIBDIR=
00:08:15.380 01:29:28 -- common/build_config.sh@10 -- # CONFIG_IDXD=y
00:08:15.380 01:29:28 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y
00:08:15.380 01:29:28 -- common/build_config.sh@12 -- # CONFIG_SMA=n
00:08:15.380 01:29:28 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n
00:08:15.380 01:29:28 -- common/build_config.sh@14 -- # CONFIG_TSAN=n
00:08:15.380 01:29:28 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y
00:08:15.380 01:29:28 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR=
00:08:15.380 01:29:28 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n
00:08:15.380 01:29:28 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y
00:08:15.380 01:29:28 -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:08:15.380 01:29:28 -- common/build_config.sh@20 -- # CONFIG_LTO=n
00:08:15.380 01:29:28 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y
00:08:15.380 01:29:28 -- common/build_config.sh@22 -- # CONFIG_CET=n
00:08:15.380 01:29:28 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n
00:08:15.380 01:29:28 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH=
00:08:15.380 01:29:28 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y
00:08:15.380 01:29:28 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y
00:08:15.380 01:29:28 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n
00:08:15.380 01:29:28 -- common/build_config.sh@28 -- # CONFIG_UBLK=y
00:08:15.380 01:29:28 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y
00:08:15.380 01:29:28 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH=
00:08:15.381 01:29:28 -- common/build_config.sh@31 -- # CONFIG_OCF=n
00:08:15.381 01:29:28 -- common/build_config.sh@32 -- # CONFIG_FUSE=n
00:08:15.381 01:29:28 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR=
00:08:15.381 01:29:28 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB=
00:08:15.381 01:29:28 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n
00:08:15.381 01:29:28 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:08:15.381 01:29:28 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n
00:08:15.381 01:29:28 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n
00:08:15.381 01:29:28 -- common/build_config.sh@39 -- # CONFIG_VHOST=y
00:08:15.381 01:29:28 -- common/build_config.sh@40 -- # CONFIG_DAOS=n
00:08:15.381 01:29:28 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:08:15.381 01:29:28 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR=
00:08:15.381 01:29:28 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n
00:08:15.381 01:29:28 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y
00:08:15.381 01:29:28 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y
00:08:15.381 01:29:28 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y
00:08:15.381 01:29:28 -- common/build_config.sh@47 -- # CONFIG_RDMA=y
00:08:15.381 01:29:28 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio
00:08:15.381 01:29:28 -- common/build_config.sh@49 -- # CONFIG_URING_PATH=
00:08:15.381 01:29:28 -- common/build_config.sh@50 -- # CONFIG_XNVME=n
00:08:15.381 01:29:28 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=y
00:08:15.381 01:29:28 -- common/build_config.sh@52 -- # CONFIG_ARCH=native
00:08:15.381 01:29:28 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n
00:08:15.381 01:29:28 -- common/build_config.sh@54 -- # CONFIG_WERROR=y
00:08:15.381 01:29:28 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n
00:08:15.381 01:29:28 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y
00:08:15.381 01:29:28 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR=
00:08:15.381 01:29:28 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n
00:08:15.381 01:29:28 -- common/build_config.sh@59 -- # CONFIG_ISAL=y
00:08:15.381 01:29:28 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y
00:08:15.381 01:29:28 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:08:15.381 01:29:28 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs
00:08:15.381 01:29:28 -- common/build_config.sh@63 -- # CONFIG_APPS=y
00:08:15.381 01:29:28 -- common/build_config.sh@64 -- # CONFIG_SHARED=y
00:08:15.381 01:29:28 -- common/build_config.sh@65 -- # CONFIG_FC_PATH=
00:08:15.381 01:29:28 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n
00:08:15.381 01:29:28 -- common/build_config.sh@67 -- # CONFIG_FC=n
00:08:15.381 01:29:28 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n
00:08:15.641 01:29:28 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y
00:08:15.641 01:29:28 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n
00:08:15.641 01:29:28 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y
00:08:15.641 01:29:28 -- common/build_config.sh@72 -- # CONFIG_TESTS=y
00:08:15.641 01:29:28 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n
00:08:15.641 01:29:28 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES=
00:08:15.641 01:29:28 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n
00:08:15.641 01:29:28 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y
00:08:15.641 01:29:28 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n
00:08:15.641 01:29:28 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX=
00:08:15.641 01:29:28 -- common/build_config.sh@79 -- # CONFIG_URING=n
00:08:15.641 01:29:28 -- common/autotest_common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh
00:08:15.641 01:29:28 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh
00:08:15.641 01:29:28 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common
00:08:15.641 01:29:28 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common
00:08:15.641 01:29:28 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:08:15.641 01:29:28 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin
00:08:15.641 01:29:28 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app
00:08:15.641 01:29:28 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples
00:08:15.641 01:29:28 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz")
00:08:15.641 01:29:28 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt")
00:08:15.641 01:29:28 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt")
00:08:15.641 01:29:28 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost")
00:08:15.641 01:29:28 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd")
00:08:15.641 01:29:28 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt")
00:08:15.641 01:29:28 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]]
00:08:15.641 01:29:28 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H
00:08:15.641 #define SPDK_CONFIG_H
00:08:15.641 #define SPDK_CONFIG_APPS 1
00:08:15.641 #define SPDK_CONFIG_ARCH native
00:08:15.641 #undef SPDK_CONFIG_ASAN
00:08:15.641 #undef SPDK_CONFIG_AVAHI
00:08:15.641 #undef SPDK_CONFIG_CET
00:08:15.641 #define SPDK_CONFIG_COVERAGE 1
00:08:15.641 #define SPDK_CONFIG_CROSS_PREFIX
00:08:15.641 #undef SPDK_CONFIG_CRYPTO
00:08:15.641 #undef SPDK_CONFIG_CRYPTO_MLX5
00:08:15.641 #undef SPDK_CONFIG_CUSTOMOCF
00:08:15.641 #undef SPDK_CONFIG_DAOS
00:08:15.642 #define SPDK_CONFIG_DAOS_DIR
00:08:15.642 #define SPDK_CONFIG_DEBUG 1
00:08:15.642 #undef SPDK_CONFIG_DPDK_COMPRESSDEV
00:08:15.642 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:08:15.642 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:08:15.642 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:08:15.642 #undef SPDK_CONFIG_DPDK_PKG_CONFIG
00:08:15.642 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:08:15.642 #define SPDK_CONFIG_EXAMPLES 1
00:08:15.642 #undef SPDK_CONFIG_FC
00:08:15.642 #define SPDK_CONFIG_FC_PATH
00:08:15.642 #define SPDK_CONFIG_FIO_PLUGIN 1
00:08:15.642 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio
00:08:15.642 #undef SPDK_CONFIG_FUSE
00:08:15.642 #undef SPDK_CONFIG_FUZZER
00:08:15.642 #define SPDK_CONFIG_FUZZER_LIB
00:08:15.642 #undef SPDK_CONFIG_GOLANG
00:08:15.642 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1
00:08:15.642 #define SPDK_CONFIG_HAVE_EXECINFO_H 1
00:08:15.642 #undef SPDK_CONFIG_HAVE_LIBARCHIVE
00:08:15.642 #undef SPDK_CONFIG_HAVE_LIBBSD
00:08:15.642 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1
00:08:15.642 #define SPDK_CONFIG_IDXD 1
00:08:15.642 #define SPDK_CONFIG_IDXD_KERNEL 1
00:08:15.642 #undef SPDK_CONFIG_IPSEC_MB
00:08:15.642 #define SPDK_CONFIG_IPSEC_MB_DIR
00:08:15.642 #define SPDK_CONFIG_ISAL 1
00:08:15.642 #define SPDK_CONFIG_ISAL_CRYPTO 1
00:08:15.642 #define SPDK_CONFIG_ISCSI_INITIATOR 1
00:08:15.642 #define SPDK_CONFIG_LIBDIR
00:08:15.642 #undef SPDK_CONFIG_LTO
00:08:15.642 #define SPDK_CONFIG_MAX_LCORES
00:08:15.642 #define SPDK_CONFIG_NVME_CUSE 1
00:08:15.642 #undef SPDK_CONFIG_OCF
00:08:15.642 #define SPDK_CONFIG_OCF_PATH
00:08:15.642 #define SPDK_CONFIG_OPENSSL_PATH
00:08:15.642 #undef SPDK_CONFIG_PGO_CAPTURE
00:08:15.642
#undef SPDK_CONFIG_PGO_USE 00:08:15.642 #define SPDK_CONFIG_PREFIX /usr/local 00:08:15.642 #undef SPDK_CONFIG_RAID5F 00:08:15.642 #undef SPDK_CONFIG_RBD 00:08:15.642 #define SPDK_CONFIG_RDMA 1 00:08:15.642 #define SPDK_CONFIG_RDMA_PROV verbs 00:08:15.642 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:08:15.642 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:08:15.642 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:08:15.642 #define SPDK_CONFIG_SHARED 1 00:08:15.642 #undef SPDK_CONFIG_SMA 00:08:15.642 #define SPDK_CONFIG_TESTS 1 00:08:15.642 #undef SPDK_CONFIG_TSAN 00:08:15.642 #define SPDK_CONFIG_UBLK 1 00:08:15.642 #define SPDK_CONFIG_UBSAN 1 00:08:15.642 #undef SPDK_CONFIG_UNIT_TESTS 00:08:15.642 #undef SPDK_CONFIG_URING 00:08:15.642 #define SPDK_CONFIG_URING_PATH 00:08:15.642 #undef SPDK_CONFIG_URING_ZNS 00:08:15.642 #undef SPDK_CONFIG_USDT 00:08:15.642 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:08:15.642 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:08:15.642 #define SPDK_CONFIG_VFIO_USER 1 00:08:15.642 #define SPDK_CONFIG_VFIO_USER_DIR 00:08:15.642 #define SPDK_CONFIG_VHOST 1 00:08:15.642 #define SPDK_CONFIG_VIRTIO 1 00:08:15.642 #undef SPDK_CONFIG_VTUNE 00:08:15.642 #define SPDK_CONFIG_VTUNE_DIR 00:08:15.642 #define SPDK_CONFIG_WERROR 1 00:08:15.642 #define SPDK_CONFIG_WPDK_DIR 00:08:15.642 #undef SPDK_CONFIG_XNVME 00:08:15.642 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:08:15.642 01:29:28 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:08:15.642 01:29:28 -- common/autotest_common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:15.642 01:29:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:15.642 01:29:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:15.642 01:29:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:15.642 01:29:28 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.642 01:29:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.642 01:29:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.642 01:29:28 -- paths/export.sh@5 -- # export PATH 00:08:15.642 01:29:28 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.642 01:29:28 -- common/autotest_common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:08:15.642 01:29:28 -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:08:15.642 01:29:28 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:08:15.642 01:29:28 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:08:15.642 01:29:28 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:08:15.642 01:29:28 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:15.642 01:29:28 -- pm/common@16 -- # TEST_TAG=N/A 00:08:15.642 01:29:28 -- pm/common@17 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:08:15.642 01:29:28 -- common/autotest_common.sh@52 -- # : 1 00:08:15.642 01:29:28 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:08:15.642 01:29:28 -- common/autotest_common.sh@56 -- # : 0 00:08:15.642 01:29:28 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:08:15.642 01:29:28 -- common/autotest_common.sh@58 -- # : 0 00:08:15.642 01:29:28 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:08:15.642 01:29:28 -- common/autotest_common.sh@60 -- # : 1 00:08:15.642 01:29:28 -- 
common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:08:15.642 01:29:28 -- common/autotest_common.sh@62 -- # : 0 00:08:15.642 01:29:28 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:08:15.642 01:29:28 -- common/autotest_common.sh@64 -- # : 00:08:15.642 01:29:28 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:08:15.642 01:29:28 -- common/autotest_common.sh@66 -- # : 0 00:08:15.642 01:29:28 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:08:15.642 01:29:28 -- common/autotest_common.sh@68 -- # : 0 00:08:15.642 01:29:28 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:08:15.642 01:29:28 -- common/autotest_common.sh@70 -- # : 0 00:08:15.642 01:29:28 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:08:15.642 01:29:28 -- common/autotest_common.sh@72 -- # : 0 00:08:15.642 01:29:28 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:08:15.642 01:29:28 -- common/autotest_common.sh@74 -- # : 0 00:08:15.642 01:29:28 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:08:15.642 01:29:28 -- common/autotest_common.sh@76 -- # : 0 00:08:15.642 01:29:28 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:08:15.642 01:29:28 -- common/autotest_common.sh@78 -- # : 0 00:08:15.642 01:29:28 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:08:15.642 01:29:28 -- common/autotest_common.sh@80 -- # : 1 00:08:15.642 01:29:28 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:08:15.642 01:29:28 -- common/autotest_common.sh@82 -- # : 0 00:08:15.642 01:29:28 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:08:15.642 01:29:28 -- common/autotest_common.sh@84 -- # : 0 00:08:15.642 01:29:28 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:08:15.642 01:29:28 -- common/autotest_common.sh@86 -- # : 1 00:08:15.642 01:29:28 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:08:15.642 
01:29:28 -- common/autotest_common.sh@88 -- # : 1 00:08:15.642 01:29:28 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:08:15.642 01:29:28 -- common/autotest_common.sh@90 -- # : 0 00:08:15.642 01:29:28 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:08:15.642 01:29:28 -- common/autotest_common.sh@92 -- # : 0 00:08:15.642 01:29:28 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:08:15.642 01:29:28 -- common/autotest_common.sh@94 -- # : 0 00:08:15.642 01:29:28 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:08:15.642 01:29:28 -- common/autotest_common.sh@96 -- # : tcp 00:08:15.642 01:29:28 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:08:15.642 01:29:28 -- common/autotest_common.sh@98 -- # : 0 00:08:15.642 01:29:28 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:08:15.642 01:29:28 -- common/autotest_common.sh@100 -- # : 0 00:08:15.642 01:29:28 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:08:15.642 01:29:28 -- common/autotest_common.sh@102 -- # : 0 00:08:15.642 01:29:28 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:08:15.642 01:29:28 -- common/autotest_common.sh@104 -- # : 0 00:08:15.642 01:29:28 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:08:15.642 01:29:28 -- common/autotest_common.sh@106 -- # : 0 00:08:15.642 01:29:28 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:08:15.642 01:29:28 -- common/autotest_common.sh@108 -- # : 0 00:08:15.642 01:29:28 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:08:15.642 01:29:28 -- common/autotest_common.sh@110 -- # : 0 00:08:15.642 01:29:28 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:08:15.642 01:29:28 -- common/autotest_common.sh@112 -- # : 0 00:08:15.643 01:29:28 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:08:15.643 01:29:28 -- common/autotest_common.sh@114 -- # : 0 
00:08:15.643 01:29:28 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:08:15.643 01:29:28 -- common/autotest_common.sh@116 -- # : 1 00:08:15.643 01:29:28 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:08:15.643 01:29:28 -- common/autotest_common.sh@118 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:08:15.643 01:29:28 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:08:15.643 01:29:28 -- common/autotest_common.sh@120 -- # : 0 00:08:15.643 01:29:28 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:08:15.643 01:29:28 -- common/autotest_common.sh@122 -- # : 0 00:08:15.643 01:29:28 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:08:15.643 01:29:28 -- common/autotest_common.sh@124 -- # : 0 00:08:15.643 01:29:28 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:08:15.643 01:29:28 -- common/autotest_common.sh@126 -- # : 0 00:08:15.643 01:29:28 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:08:15.643 01:29:28 -- common/autotest_common.sh@128 -- # : 0 00:08:15.643 01:29:28 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:08:15.643 01:29:28 -- common/autotest_common.sh@130 -- # : 0 00:08:15.643 01:29:28 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:08:15.643 01:29:28 -- common/autotest_common.sh@132 -- # : v23.11 00:08:15.643 01:29:28 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:08:15.643 01:29:28 -- common/autotest_common.sh@134 -- # : true 00:08:15.643 01:29:28 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:08:15.643 01:29:28 -- common/autotest_common.sh@136 -- # : 0 00:08:15.643 01:29:28 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:08:15.643 01:29:28 -- common/autotest_common.sh@138 -- # : 0 00:08:15.643 01:29:28 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:08:15.643 01:29:28 -- common/autotest_common.sh@140 -- # : 0 00:08:15.643 
01:29:28 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:08:15.643 01:29:28 -- common/autotest_common.sh@142 -- # : 0 00:08:15.643 01:29:28 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:08:15.643 01:29:28 -- common/autotest_common.sh@144 -- # : 0 00:08:15.643 01:29:28 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:08:15.643 01:29:28 -- common/autotest_common.sh@146 -- # : 0 00:08:15.643 01:29:28 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:08:15.643 01:29:28 -- common/autotest_common.sh@148 -- # : e810 00:08:15.643 01:29:28 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:08:15.643 01:29:28 -- common/autotest_common.sh@150 -- # : 0 00:08:15.643 01:29:28 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:08:15.643 01:29:28 -- common/autotest_common.sh@152 -- # : 0 00:08:15.643 01:29:28 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:08:15.643 01:29:28 -- common/autotest_common.sh@154 -- # : 0 00:08:15.643 01:29:28 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:08:15.643 01:29:28 -- common/autotest_common.sh@156 -- # : 0 00:08:15.643 01:29:28 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:08:15.643 01:29:28 -- common/autotest_common.sh@158 -- # : 0 00:08:15.643 01:29:28 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:08:15.643 01:29:28 -- common/autotest_common.sh@160 -- # : 0 00:08:15.643 01:29:28 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:08:15.643 01:29:28 -- common/autotest_common.sh@163 -- # : 00:08:15.643 01:29:28 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:08:15.643 01:29:28 -- common/autotest_common.sh@165 -- # : 0 00:08:15.643 01:29:28 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:08:15.643 01:29:28 -- common/autotest_common.sh@167 -- # : 0 00:08:15.643 01:29:28 -- common/autotest_common.sh@168 -- # 
export SPDK_JSONRPC_GO_CLIENT 00:08:15.643 01:29:28 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:08:15.643 01:29:28 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:08:15.643 01:29:28 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:08:15.643 01:29:28 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:08:15.643 01:29:28 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:15.643 01:29:28 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:15.643 01:29:28 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 
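(Aside: the PATH and LD_LIBRARY_PATH values traced above accumulate repeated entries because each sourced script prepends its directories unconditionally. A minimal dedup helper for a colon-separated list could be sketched as below; `dedupe_colon_list` is a hypothetical name, not part of SPDK's scripts.)

```shell
# Hypothetical helper (not in SPDK) that removes duplicate entries from a
# colon-separated list such as PATH or LD_LIBRARY_PATH, keeping the first
# occurrence of each directory in order.
dedupe_colon_list() {
    # Split on ':', keep first occurrence of each line, rejoin with ':'.
    printf '%s' "$1" | tr ':' '\n' | awk '!seen[$0]++' | paste -sd: -
}

dedupe_colon_list "/opt/go/1.21.1/bin:/usr/local/bin:/opt/go/1.21.1/bin:/usr/bin"
```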
00:08:15.643 01:29:28 -- common/autotest_common.sh@174 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:15.643 01:29:28 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:08:15.643 01:29:28 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:08:15.643 01:29:28 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:15.643 01:29:28 -- common/autotest_common.sh@181 -- # 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:15.643 01:29:28 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:08:15.643 01:29:28 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:08:15.643 01:29:28 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:15.643 01:29:28 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:15.643 01:29:28 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:15.643 01:29:28 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:15.643 01:29:28 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:08:15.643 01:29:28 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:08:15.643 01:29:28 -- common/autotest_common.sh@196 -- # cat 00:08:15.643 01:29:28 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:08:15.643 01:29:28 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:15.643 01:29:28 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:15.643 01:29:28 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:15.643 01:29:28 -- 
common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:15.643 01:29:28 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:08:15.643 01:29:28 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:08:15.643 01:29:28 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:15.643 01:29:28 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:15.643 01:29:28 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:15.643 01:29:28 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:15.643 01:29:28 -- common/autotest_common.sh@239 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:15.643 01:29:28 -- common/autotest_common.sh@239 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:15.643 01:29:28 -- common/autotest_common.sh@240 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:15.643 01:29:28 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:15.643 01:29:28 -- common/autotest_common.sh@242 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:15.643 01:29:28 -- common/autotest_common.sh@242 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:15.643 01:29:28 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:15.643 01:29:28 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:15.643 01:29:28 -- common/autotest_common.sh@248 -- # '[' 0 -eq 0 ']' 00:08:15.643 01:29:28 -- common/autotest_common.sh@249 -- # export valgrind= 00:08:15.643 01:29:28 -- 
common/autotest_common.sh@249 -- # valgrind= 00:08:15.643 01:29:28 -- common/autotest_common.sh@255 -- # uname -s 00:08:15.643 01:29:28 -- common/autotest_common.sh@255 -- # '[' Linux = Linux ']' 00:08:15.643 01:29:28 -- common/autotest_common.sh@256 -- # HUGEMEM=4096 00:08:15.644 01:29:28 -- common/autotest_common.sh@257 -- # export CLEAR_HUGE=yes 00:08:15.644 01:29:28 -- common/autotest_common.sh@257 -- # CLEAR_HUGE=yes 00:08:15.644 01:29:28 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:08:15.644 01:29:28 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:08:15.644 01:29:28 -- common/autotest_common.sh@265 -- # MAKE=make 00:08:15.644 01:29:28 -- common/autotest_common.sh@266 -- # MAKEFLAGS=-j48 00:08:15.644 01:29:28 -- common/autotest_common.sh@282 -- # export HUGEMEM=4096 00:08:15.644 01:29:28 -- common/autotest_common.sh@282 -- # HUGEMEM=4096 00:08:15.644 01:29:28 -- common/autotest_common.sh@284 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:08:15.644 01:29:28 -- common/autotest_common.sh@289 -- # NO_HUGE=() 00:08:15.644 01:29:28 -- common/autotest_common.sh@290 -- # TEST_MODE= 00:08:15.644 01:29:28 -- common/autotest_common.sh@291 -- # for i in "$@" 00:08:15.644 01:29:28 -- common/autotest_common.sh@292 -- # case "$i" in 00:08:15.644 01:29:28 -- common/autotest_common.sh@297 -- # TEST_TRANSPORT=tcp 00:08:15.644 01:29:28 -- common/autotest_common.sh@309 -- # [[ -z 3676489 ]] 00:08:15.644 01:29:28 -- common/autotest_common.sh@309 -- # kill -0 3676489 00:08:15.644 01:29:28 -- common/autotest_common.sh@1665 -- # set_test_storage 2147483648 00:08:15.644 01:29:28 -- common/autotest_common.sh@319 -- # [[ -v testdir ]] 00:08:15.644 01:29:28 -- common/autotest_common.sh@321 -- # local requested_size=2147483648 00:08:15.644 01:29:28 -- common/autotest_common.sh@322 -- # local mount target_dir 00:08:15.644 01:29:28 -- common/autotest_common.sh@324 -- # local -A mounts fss sizes avails uses 00:08:15.644 01:29:28 -- 
common/autotest_common.sh@325 -- # local source fs size avail mount use 00:08:15.644 01:29:28 -- common/autotest_common.sh@327 -- # local storage_fallback storage_candidates 00:08:15.644 01:29:28 -- common/autotest_common.sh@329 -- # mktemp -udt spdk.XXXXXX 00:08:15.644 01:29:28 -- common/autotest_common.sh@329 -- # storage_fallback=/tmp/spdk.4gLbSc 00:08:15.644 01:29:28 -- common/autotest_common.sh@334 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:08:15.644 01:29:28 -- common/autotest_common.sh@336 -- # [[ -n '' ]] 00:08:15.644 01:29:28 -- common/autotest_common.sh@341 -- # [[ -n '' ]] 00:08:15.644 01:29:28 -- common/autotest_common.sh@346 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.4gLbSc/tests/target /tmp/spdk.4gLbSc 00:08:15.644 01:29:28 -- common/autotest_common.sh@349 -- # requested_size=2214592512 00:08:15.644 01:29:28 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:15.644 01:29:28 -- common/autotest_common.sh@318 -- # df -T 00:08:15.644 01:29:28 -- common/autotest_common.sh@318 -- # grep -v Filesystem 00:08:15.644 01:29:28 -- common/autotest_common.sh@352 -- # mounts["$mount"]=spdk_devtmpfs 00:08:15.644 01:29:28 -- common/autotest_common.sh@352 -- # fss["$mount"]=devtmpfs 00:08:15.644 01:29:28 -- common/autotest_common.sh@353 -- # avails["$mount"]=67108864 00:08:15.644 01:29:28 -- common/autotest_common.sh@353 -- # sizes["$mount"]=67108864 00:08:15.644 01:29:28 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:08:15.644 01:29:28 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:15.644 01:29:28 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/pmem0 00:08:15.644 01:29:28 -- common/autotest_common.sh@352 -- # fss["$mount"]=ext2 00:08:15.644 01:29:28 -- common/autotest_common.sh@353 -- # avails["$mount"]=953643008 00:08:15.644 01:29:28 -- common/autotest_common.sh@353 -- 
# sizes["$mount"]=5284429824 00:08:15.644 01:29:28 -- common/autotest_common.sh@354 -- # uses["$mount"]=4330786816 00:08:15.644 01:29:28 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:15.644 01:29:28 -- common/autotest_common.sh@352 -- # mounts["$mount"]=spdk_root 00:08:15.644 01:29:28 -- common/autotest_common.sh@352 -- # fss["$mount"]=overlay 00:08:15.644 01:29:28 -- common/autotest_common.sh@353 -- # avails["$mount"]=52989722624 00:08:15.644 01:29:28 -- common/autotest_common.sh@353 -- # sizes["$mount"]=61994708992 00:08:15.644 01:29:28 -- common/autotest_common.sh@354 -- # uses["$mount"]=9004986368 00:08:15.644 01:29:28 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:15.644 01:29:28 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:08:15.644 01:29:28 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:08:15.644 01:29:28 -- common/autotest_common.sh@353 -- # avails["$mount"]=30943834112 00:08:15.644 01:29:28 -- common/autotest_common.sh@353 -- # sizes["$mount"]=30997352448 00:08:15.644 01:29:28 -- common/autotest_common.sh@354 -- # uses["$mount"]=53518336 00:08:15.644 01:29:28 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:15.644 01:29:28 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:08:15.644 01:29:28 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:08:15.644 01:29:28 -- common/autotest_common.sh@353 -- # avails["$mount"]=12390182912 00:08:15.644 01:29:28 -- common/autotest_common.sh@353 -- # sizes["$mount"]=12398944256 00:08:15.644 01:29:28 -- common/autotest_common.sh@354 -- # uses["$mount"]=8761344 00:08:15.644 01:29:28 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:15.644 01:29:28 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:08:15.644 01:29:28 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:08:15.644 01:29:28 -- 
common/autotest_common.sh@353 -- # avails["$mount"]=30996357120 00:08:15.644 01:29:28 -- common/autotest_common.sh@353 -- # sizes["$mount"]=30997356544 00:08:15.644 01:29:28 -- common/autotest_common.sh@354 -- # uses["$mount"]=999424 00:08:15.644 01:29:28 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:15.644 01:29:28 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:08:15.644 01:29:28 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:08:15.644 01:29:28 -- common/autotest_common.sh@353 -- # avails["$mount"]=6199463936 00:08:15.644 01:29:28 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6199468032 00:08:15.644 01:29:28 -- common/autotest_common.sh@354 -- # uses["$mount"]=4096 00:08:15.644 01:29:28 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:15.644 01:29:28 -- common/autotest_common.sh@357 -- # printf '* Looking for test storage...\n' 00:08:15.644 * Looking for test storage... 00:08:15.644 01:29:28 -- common/autotest_common.sh@359 -- # local target_space new_size 00:08:15.644 01:29:28 -- common/autotest_common.sh@360 -- # for target_dir in "${storage_candidates[@]}" 00:08:15.644 01:29:28 -- common/autotest_common.sh@363 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:15.644 01:29:28 -- common/autotest_common.sh@363 -- # awk '$1 !~ /Filesystem/{print $6}' 00:08:15.644 01:29:28 -- common/autotest_common.sh@363 -- # mount=/ 00:08:15.644 01:29:28 -- common/autotest_common.sh@365 -- # target_space=52989722624 00:08:15.644 01:29:28 -- common/autotest_common.sh@366 -- # (( target_space == 0 || target_space < requested_size )) 00:08:15.644 01:29:28 -- common/autotest_common.sh@369 -- # (( target_space >= requested_size )) 00:08:15.644 01:29:28 -- common/autotest_common.sh@371 -- # [[ overlay == tmpfs ]] 00:08:15.644 01:29:28 -- common/autotest_common.sh@371 -- # [[ overlay == ramfs ]] 00:08:15.644 01:29:28 -- 
common/autotest_common.sh@371 -- # [[ / == / ]] 00:08:15.644 01:29:28 -- common/autotest_common.sh@372 -- # new_size=11219578880 00:08:15.644 01:29:28 -- common/autotest_common.sh@373 -- # (( new_size * 100 / sizes[/] > 95 )) 00:08:15.644 01:29:28 -- common/autotest_common.sh@378 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:15.644 01:29:28 -- common/autotest_common.sh@378 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:15.644 01:29:28 -- common/autotest_common.sh@379 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:15.644 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:15.644 01:29:28 -- common/autotest_common.sh@380 -- # return 0 00:08:15.644 01:29:28 -- common/autotest_common.sh@1667 -- # set -o errtrace 00:08:15.644 01:29:28 -- common/autotest_common.sh@1668 -- # shopt -s extdebug 00:08:15.644 01:29:28 -- common/autotest_common.sh@1669 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:08:15.644 01:29:28 -- common/autotest_common.sh@1671 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:08:15.644 01:29:28 -- common/autotest_common.sh@1672 -- # true 00:08:15.644 01:29:28 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:08:15.644 01:29:28 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:08:15.644 01:29:28 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:08:15.644 01:29:28 -- common/autotest_common.sh@27 -- # exec 00:08:15.644 01:29:28 -- common/autotest_common.sh@29 -- # exec 00:08:15.644 01:29:28 -- common/autotest_common.sh@31 -- # xtrace_restore 00:08:15.644 01:29:28 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:08:15.644 01:29:28 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:08:15.644 01:29:28 -- common/autotest_common.sh@18 -- # set -x 00:08:15.644 01:29:28 -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:15.644 01:29:28 -- nvmf/common.sh@7 -- # uname -s 00:08:15.644 01:29:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:15.644 01:29:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:15.644 01:29:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:15.644 01:29:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:15.644 01:29:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:15.644 01:29:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:15.644 01:29:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:15.644 01:29:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:15.644 01:29:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:15.644 01:29:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:15.644 01:29:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:15.644 01:29:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:15.644 01:29:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:15.644 01:29:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:15.644 01:29:28 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:15.644 01:29:28 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:15.644 01:29:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:15.644 01:29:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:15.644 01:29:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:15.645 01:29:28 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.645 01:29:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.645 01:29:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.645 01:29:28 -- paths/export.sh@5 -- # export PATH 00:08:15.645 01:29:28 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.645 01:29:28 -- nvmf/common.sh@46 -- # : 0 00:08:15.645 01:29:28 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:15.645 01:29:28 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:15.645 01:29:28 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:15.645 01:29:28 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:15.645 01:29:28 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:15.645 01:29:28 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:15.645 01:29:28 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:15.645 01:29:28 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:15.645 01:29:28 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:08:15.645 01:29:28 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:08:15.645 01:29:28 -- target/filesystem.sh@15 -- # nvmftestinit 00:08:15.645 01:29:28 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:15.645 01:29:28 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:15.645 01:29:28 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:15.645 01:29:28 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:15.645 01:29:28 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:15.645 01:29:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:15.645 01:29:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:15.645 01:29:28 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:15.645 01:29:28 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:15.645 01:29:28 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:15.645 01:29:28 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:15.645 01:29:28 -- common/autotest_common.sh@10 -- # set +x 00:08:17.550 01:29:30 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:17.550 01:29:30 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:17.550 01:29:30 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:17.550 01:29:30 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:17.550 01:29:30 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:17.550 01:29:30 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:17.550 01:29:30 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:17.550 01:29:30 -- nvmf/common.sh@294 -- # net_devs=() 00:08:17.550 01:29:30 -- nvmf/common.sh@294 -- # local -ga net_devs 00:08:17.550 01:29:30 -- nvmf/common.sh@295 -- # e810=() 00:08:17.550 01:29:30 -- nvmf/common.sh@295 -- # local -ga e810 00:08:17.550 01:29:30 -- nvmf/common.sh@296 -- # x722=() 00:08:17.550 01:29:30 -- nvmf/common.sh@296 -- # local -ga x722 00:08:17.550 01:29:30 -- nvmf/common.sh@297 -- # mlx=() 00:08:17.550 01:29:30 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:17.550 01:29:30 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:17.550 01:29:30 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:17.550 01:29:30 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:17.550 01:29:30 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:17.550 01:29:30 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:17.550 01:29:30 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:17.550 01:29:30 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:17.550 01:29:30 -- nvmf/common.sh@313 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:17.550 01:29:30 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:17.550 01:29:30 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:17.550 01:29:30 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:17.550 01:29:30 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:17.550 01:29:30 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:08:17.550 01:29:30 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:08:17.550 01:29:30 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:08:17.550 01:29:30 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:08:17.550 01:29:30 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:17.550 01:29:30 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:17.550 01:29:30 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:17.550 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:17.550 01:29:30 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:17.550 01:29:30 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:17.550 01:29:30 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:17.550 01:29:30 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:17.550 01:29:30 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:17.550 01:29:30 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:17.550 01:29:30 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:17.550 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:17.550 01:29:30 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:17.550 01:29:30 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:17.550 01:29:30 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:17.550 01:29:30 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:17.550 01:29:30 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:17.550 01:29:30 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:17.550 01:29:30 -- 
nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:08:17.550 01:29:30 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:08:17.550 01:29:30 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:17.550 01:29:30 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:17.550 01:29:30 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:17.550 01:29:30 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:17.550 01:29:30 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:17.550 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:17.550 01:29:30 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:17.550 01:29:30 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:17.550 01:29:30 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:17.550 01:29:30 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:17.550 01:29:30 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:17.550 01:29:30 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:17.550 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:17.550 01:29:30 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:17.550 01:29:30 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:17.550 01:29:30 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:17.550 01:29:30 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:17.550 01:29:30 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:08:17.550 01:29:30 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:08:17.550 01:29:30 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:17.550 01:29:30 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:17.550 01:29:30 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:17.550 01:29:30 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:08:17.550 01:29:30 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:17.550 01:29:30 -- nvmf/common.sh@236 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:17.550 01:29:30 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:08:17.550 01:29:30 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:17.550 01:29:30 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:17.550 01:29:30 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:08:17.550 01:29:30 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:08:17.550 01:29:30 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:08:17.550 01:29:30 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:17.808 01:29:30 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:17.809 01:29:30 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:17.809 01:29:30 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:08:17.809 01:29:30 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:17.809 01:29:30 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:17.809 01:29:30 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:17.809 01:29:30 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:08:17.809 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:17.809 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.223 ms 00:08:17.809 00:08:17.809 --- 10.0.0.2 ping statistics --- 00:08:17.809 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:17.809 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:08:17.809 01:29:30 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:17.809 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:17.809 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:08:17.809 00:08:17.809 --- 10.0.0.1 ping statistics --- 00:08:17.809 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:17.809 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:08:17.809 01:29:30 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:17.809 01:29:30 -- nvmf/common.sh@410 -- # return 0 00:08:17.809 01:29:30 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:17.809 01:29:30 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:17.809 01:29:30 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:17.809 01:29:30 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:17.809 01:29:30 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:17.809 01:29:30 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:17.809 01:29:30 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:17.809 01:29:30 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:08:17.809 01:29:30 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:17.809 01:29:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:17.809 01:29:30 -- common/autotest_common.sh@10 -- # set +x 00:08:17.809 ************************************ 00:08:17.809 START TEST nvmf_filesystem_no_in_capsule 00:08:17.809 ************************************ 00:08:17.809 01:29:30 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_part 0 00:08:17.809 01:29:30 -- target/filesystem.sh@47 -- # in_capsule=0 00:08:17.809 01:29:30 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:17.809 01:29:30 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:17.809 01:29:30 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:17.809 01:29:30 -- common/autotest_common.sh@10 -- # set +x 00:08:17.809 01:29:30 -- nvmf/common.sh@469 -- # nvmfpid=3678113 00:08:17.809 01:29:30 -- nvmf/common.sh@468 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:17.809 01:29:30 -- nvmf/common.sh@470 -- # waitforlisten 3678113 00:08:17.809 01:29:30 -- common/autotest_common.sh@819 -- # '[' -z 3678113 ']' 00:08:17.809 01:29:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:17.809 01:29:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:17.809 01:29:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:17.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:17.809 01:29:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:17.809 01:29:30 -- common/autotest_common.sh@10 -- # set +x 00:08:17.809 [2024-07-23 01:29:30.801957] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:08:17.809 [2024-07-23 01:29:30.802041] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:17.809 EAL: No free 2048 kB hugepages reported on node 1 00:08:17.809 [2024-07-23 01:29:30.872654] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:18.068 [2024-07-23 01:29:30.969674] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:18.068 [2024-07-23 01:29:30.969834] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:18.068 [2024-07-23 01:29:30.969854] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:18.068 [2024-07-23 01:29:30.969868] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:18.068 [2024-07-23 01:29:30.969958] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:18.068 [2024-07-23 01:29:30.970026] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:18.068 [2024-07-23 01:29:30.970092] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:18.068 [2024-07-23 01:29:30.970095] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.008 01:29:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:19.008 01:29:31 -- common/autotest_common.sh@852 -- # return 0 00:08:19.008 01:29:31 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:19.008 01:29:31 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:19.008 01:29:31 -- common/autotest_common.sh@10 -- # set +x 00:08:19.008 01:29:31 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:19.008 01:29:31 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:19.008 01:29:31 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:19.008 01:29:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:19.008 01:29:31 -- common/autotest_common.sh@10 -- # set +x 00:08:19.008 [2024-07-23 01:29:31.770131] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:19.008 01:29:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:19.008 01:29:31 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:19.008 01:29:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:19.008 01:29:31 -- common/autotest_common.sh@10 -- # set +x 00:08:19.008 Malloc1 00:08:19.008 01:29:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:19.008 01:29:31 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:19.008 01:29:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:19.008 01:29:31 -- 
common/autotest_common.sh@10 -- # set +x 00:08:19.008 01:29:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:19.008 01:29:31 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:19.008 01:29:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:19.008 01:29:31 -- common/autotest_common.sh@10 -- # set +x 00:08:19.008 01:29:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:19.008 01:29:31 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:19.008 01:29:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:19.008 01:29:31 -- common/autotest_common.sh@10 -- # set +x 00:08:19.008 [2024-07-23 01:29:31.957012] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:19.008 01:29:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:19.008 01:29:31 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:19.008 01:29:31 -- common/autotest_common.sh@1357 -- # local bdev_name=Malloc1 00:08:19.008 01:29:31 -- common/autotest_common.sh@1358 -- # local bdev_info 00:08:19.008 01:29:31 -- common/autotest_common.sh@1359 -- # local bs 00:08:19.008 01:29:31 -- common/autotest_common.sh@1360 -- # local nb 00:08:19.008 01:29:31 -- common/autotest_common.sh@1361 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:19.008 01:29:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:19.008 01:29:31 -- common/autotest_common.sh@10 -- # set +x 00:08:19.008 01:29:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:19.008 01:29:31 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:08:19.008 { 00:08:19.008 "name": "Malloc1", 00:08:19.008 "aliases": [ 00:08:19.008 "e14dd62c-7d34-4d98-9a10-286966c9d852" 00:08:19.008 ], 00:08:19.008 "product_name": "Malloc disk", 00:08:19.008 "block_size": 512, 00:08:19.008 "num_blocks": 1048576, 00:08:19.008 "uuid": 
"e14dd62c-7d34-4d98-9a10-286966c9d852", 00:08:19.008 "assigned_rate_limits": { 00:08:19.008 "rw_ios_per_sec": 0, 00:08:19.008 "rw_mbytes_per_sec": 0, 00:08:19.008 "r_mbytes_per_sec": 0, 00:08:19.008 "w_mbytes_per_sec": 0 00:08:19.008 }, 00:08:19.008 "claimed": true, 00:08:19.008 "claim_type": "exclusive_write", 00:08:19.008 "zoned": false, 00:08:19.008 "supported_io_types": { 00:08:19.008 "read": true, 00:08:19.008 "write": true, 00:08:19.008 "unmap": true, 00:08:19.008 "write_zeroes": true, 00:08:19.008 "flush": true, 00:08:19.008 "reset": true, 00:08:19.008 "compare": false, 00:08:19.008 "compare_and_write": false, 00:08:19.008 "abort": true, 00:08:19.008 "nvme_admin": false, 00:08:19.008 "nvme_io": false 00:08:19.008 }, 00:08:19.008 "memory_domains": [ 00:08:19.008 { 00:08:19.008 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:19.008 "dma_device_type": 2 00:08:19.008 } 00:08:19.008 ], 00:08:19.008 "driver_specific": {} 00:08:19.008 } 00:08:19.008 ]' 00:08:19.008 01:29:31 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:08:19.008 01:29:32 -- common/autotest_common.sh@1362 -- # bs=512 00:08:19.008 01:29:32 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:08:19.008 01:29:32 -- common/autotest_common.sh@1363 -- # nb=1048576 00:08:19.008 01:29:32 -- common/autotest_common.sh@1366 -- # bdev_size=512 00:08:19.008 01:29:32 -- common/autotest_common.sh@1367 -- # echo 512 00:08:19.008 01:29:32 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:19.008 01:29:32 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:19.576 01:29:32 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:19.576 01:29:32 -- common/autotest_common.sh@1177 -- # local i=0 00:08:19.576 01:29:32 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 
nvme_devices=0 00:08:19.576 01:29:32 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:08:19.576 01:29:32 -- common/autotest_common.sh@1184 -- # sleep 2 00:08:22.114 01:29:34 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:08:22.114 01:29:34 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:08:22.114 01:29:34 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:08:22.114 01:29:34 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:08:22.114 01:29:34 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:08:22.114 01:29:34 -- common/autotest_common.sh@1187 -- # return 0 00:08:22.114 01:29:34 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:22.114 01:29:34 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:22.114 01:29:34 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:22.114 01:29:34 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:22.114 01:29:34 -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:22.114 01:29:34 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:22.114 01:29:34 -- setup/common.sh@80 -- # echo 536870912 00:08:22.114 01:29:34 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:22.114 01:29:34 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:22.114 01:29:34 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:22.114 01:29:34 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:22.114 01:29:35 -- target/filesystem.sh@69 -- # partprobe 00:08:22.710 01:29:35 -- target/filesystem.sh@70 -- # sleep 1 00:08:24.088 01:29:36 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:08:24.088 01:29:36 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:24.088 01:29:36 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:24.088 01:29:36 -- common/autotest_common.sh@1083 -- # 
xtrace_disable 00:08:24.088 01:29:36 -- common/autotest_common.sh@10 -- # set +x 00:08:24.088 ************************************ 00:08:24.088 START TEST filesystem_ext4 00:08:24.088 ************************************ 00:08:24.088 01:29:36 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:24.088 01:29:36 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:24.088 01:29:36 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:24.088 01:29:36 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:24.088 01:29:36 -- common/autotest_common.sh@902 -- # local fstype=ext4 00:08:24.088 01:29:36 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:24.088 01:29:36 -- common/autotest_common.sh@904 -- # local i=0 00:08:24.088 01:29:36 -- common/autotest_common.sh@905 -- # local force 00:08:24.088 01:29:36 -- common/autotest_common.sh@907 -- # '[' ext4 = ext4 ']' 00:08:24.088 01:29:36 -- common/autotest_common.sh@908 -- # force=-F 00:08:24.088 01:29:36 -- common/autotest_common.sh@913 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:24.088 mke2fs 1.46.5 (30-Dec-2021) 00:08:24.088 Discarding device blocks: 0/522240 done 00:08:24.089 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:24.089 Filesystem UUID: 73d51d3d-2bea-4502-93d5-ae9c4ff4e0cc 00:08:24.089 Superblock backups stored on blocks: 00:08:24.089 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:24.089 00:08:24.089 Allocating group tables: 0/64 done 00:08:24.089 Writing inode tables: 0/64 done 00:08:24.655 Creating journal (8192 blocks): done 00:08:24.655 Writing superblocks and filesystem accounting information: 0/64 done 00:08:24.655 00:08:24.655 01:29:37 -- common/autotest_common.sh@921 -- # return 0 00:08:24.655 01:29:37 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:25.593 01:29:38 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:25.593 01:29:38 -- target/filesystem.sh@25 -- # sync 
00:08:25.593 01:29:38 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:25.593 01:29:38 -- target/filesystem.sh@27 -- # sync 00:08:25.593 01:29:38 -- target/filesystem.sh@29 -- # i=0 00:08:25.593 01:29:38 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:25.593 01:29:38 -- target/filesystem.sh@37 -- # kill -0 3678113 00:08:25.593 01:29:38 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:25.593 01:29:38 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:25.593 01:29:38 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:25.593 01:29:38 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:25.593 00:08:25.593 real 0m1.879s 00:08:25.593 user 0m0.017s 00:08:25.593 sys 0m0.059s 00:08:25.593 01:29:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:25.593 01:29:38 -- common/autotest_common.sh@10 -- # set +x 00:08:25.593 ************************************ 00:08:25.593 END TEST filesystem_ext4 00:08:25.593 ************************************ 00:08:25.593 01:29:38 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:25.593 01:29:38 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:25.593 01:29:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:25.593 01:29:38 -- common/autotest_common.sh@10 -- # set +x 00:08:25.851 ************************************ 00:08:25.851 START TEST filesystem_btrfs 00:08:25.851 ************************************ 00:08:25.851 01:29:38 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:25.851 01:29:38 -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:25.851 01:29:38 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:25.851 01:29:38 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:25.851 01:29:38 -- common/autotest_common.sh@902 -- # local fstype=btrfs 00:08:25.851 01:29:38 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:25.851 01:29:38 -- 
common/autotest_common.sh@904 -- # local i=0 00:08:25.851 01:29:38 -- common/autotest_common.sh@905 -- # local force 00:08:25.851 01:29:38 -- common/autotest_common.sh@907 -- # '[' btrfs = ext4 ']' 00:08:25.851 01:29:38 -- common/autotest_common.sh@910 -- # force=-f 00:08:25.851 01:29:38 -- common/autotest_common.sh@913 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:25.851 btrfs-progs v6.6.2 00:08:25.851 See https://btrfs.readthedocs.io for more information. 00:08:25.851 00:08:25.851 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:08:25.851 NOTE: several default settings have changed in version 5.15, please make sure 00:08:25.851 this does not affect your deployments: 00:08:25.851 - DUP for metadata (-m dup) 00:08:25.851 - enabled no-holes (-O no-holes) 00:08:25.851 - enabled free-space-tree (-R free-space-tree) 00:08:25.851 00:08:25.851 Label: (null) 00:08:25.851 UUID: b6a8a48a-662c-4ffa-bbde-5ae609560fb8 00:08:25.851 Node size: 16384 00:08:25.851 Sector size: 4096 00:08:25.851 Filesystem size: 510.00MiB 00:08:25.851 Block group profiles: 00:08:25.851 Data: single 8.00MiB 00:08:25.851 Metadata: DUP 32.00MiB 00:08:25.851 System: DUP 8.00MiB 00:08:25.851 SSD detected: yes 00:08:25.851 Zoned device: no 00:08:25.851 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:25.852 Runtime features: free-space-tree 00:08:25.852 Checksum: crc32c 00:08:25.852 Number of devices: 1 00:08:25.852 Devices: 00:08:25.852 ID SIZE PATH 00:08:25.852 1 510.00MiB /dev/nvme0n1p1 00:08:25.852 00:08:25.852 01:29:38 -- common/autotest_common.sh@921 -- # return 0 00:08:25.852 01:29:38 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:26.792 01:29:39 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:26.792 01:29:39 -- target/filesystem.sh@25 -- # sync 00:08:26.792 01:29:39 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:26.792 01:29:39 -- target/filesystem.sh@27 -- # sync 00:08:26.792 01:29:39 -- target/filesystem.sh@29 -- # 
i=0 00:08:26.792 01:29:39 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:26.792 01:29:39 -- target/filesystem.sh@37 -- # kill -0 3678113 00:08:26.792 01:29:39 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:26.792 01:29:39 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:26.792 01:29:39 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:26.792 01:29:39 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:26.792 00:08:26.792 real 0m1.114s 00:08:26.792 user 0m0.023s 00:08:26.792 sys 0m0.108s 00:08:26.792 01:29:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:26.792 01:29:39 -- common/autotest_common.sh@10 -- # set +x 00:08:26.792 ************************************ 00:08:26.792 END TEST filesystem_btrfs 00:08:26.792 ************************************ 00:08:26.792 01:29:39 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:08:26.792 01:29:39 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:26.792 01:29:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:26.792 01:29:39 -- common/autotest_common.sh@10 -- # set +x 00:08:26.792 ************************************ 00:08:26.792 START TEST filesystem_xfs 00:08:26.792 ************************************ 00:08:26.792 01:29:39 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create xfs nvme0n1 00:08:26.792 01:29:39 -- target/filesystem.sh@18 -- # fstype=xfs 00:08:26.792 01:29:39 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:26.792 01:29:39 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:26.792 01:29:39 -- common/autotest_common.sh@902 -- # local fstype=xfs 00:08:26.792 01:29:39 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:26.792 01:29:39 -- common/autotest_common.sh@904 -- # local i=0 00:08:26.792 01:29:39 -- common/autotest_common.sh@905 -- # local force 00:08:26.792 01:29:39 -- common/autotest_common.sh@907 -- # '[' xfs = ext4 ']' 
00:08:26.792 01:29:39 -- common/autotest_common.sh@910 -- # force=-f 00:08:26.792 01:29:39 -- common/autotest_common.sh@913 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:27.052 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:27.052 = sectsz=512 attr=2, projid32bit=1 00:08:27.052 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:27.052 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:27.052 data = bsize=4096 blocks=130560, imaxpct=25 00:08:27.052 = sunit=0 swidth=0 blks 00:08:27.052 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:27.052 log =internal log bsize=4096 blocks=16384, version=2 00:08:27.052 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:27.052 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:27.991 Discarding blocks...Done. 00:08:27.991 01:29:40 -- common/autotest_common.sh@921 -- # return 0 00:08:27.991 01:29:40 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:30.522 01:29:43 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:30.522 01:29:43 -- target/filesystem.sh@25 -- # sync 00:08:30.522 01:29:43 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:30.522 01:29:43 -- target/filesystem.sh@27 -- # sync 00:08:30.522 01:29:43 -- target/filesystem.sh@29 -- # i=0 00:08:30.522 01:29:43 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:30.522 01:29:43 -- target/filesystem.sh@37 -- # kill -0 3678113 00:08:30.522 01:29:43 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:30.522 01:29:43 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:30.522 01:29:43 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:30.522 01:29:43 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:30.522 00:08:30.523 real 0m3.394s 00:08:30.523 user 0m0.016s 00:08:30.523 sys 0m0.058s 00:08:30.523 01:29:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:30.523 01:29:43 -- common/autotest_common.sh@10 -- # set +x 00:08:30.523 ************************************ 00:08:30.523 END TEST filesystem_xfs 
00:08:30.523 ************************************ 00:08:30.523 01:29:43 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:30.523 01:29:43 -- target/filesystem.sh@93 -- # sync 00:08:30.523 01:29:43 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:30.523 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:30.523 01:29:43 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:30.523 01:29:43 -- common/autotest_common.sh@1198 -- # local i=0 00:08:30.523 01:29:43 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:08:30.523 01:29:43 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:30.523 01:29:43 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:08:30.523 01:29:43 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:30.523 01:29:43 -- common/autotest_common.sh@1210 -- # return 0 00:08:30.523 01:29:43 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:30.523 01:29:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:30.523 01:29:43 -- common/autotest_common.sh@10 -- # set +x 00:08:30.523 01:29:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:30.523 01:29:43 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:30.523 01:29:43 -- target/filesystem.sh@101 -- # killprocess 3678113 00:08:30.523 01:29:43 -- common/autotest_common.sh@926 -- # '[' -z 3678113 ']' 00:08:30.523 01:29:43 -- common/autotest_common.sh@930 -- # kill -0 3678113 00:08:30.523 01:29:43 -- common/autotest_common.sh@931 -- # uname 00:08:30.523 01:29:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:30.523 01:29:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3678113 00:08:30.523 01:29:43 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:30.523 01:29:43 -- common/autotest_common.sh@936 -- # 
'[' reactor_0 = sudo ']' 00:08:30.523 01:29:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3678113' 00:08:30.523 killing process with pid 3678113 00:08:30.523 01:29:43 -- common/autotest_common.sh@945 -- # kill 3678113 00:08:30.523 01:29:43 -- common/autotest_common.sh@950 -- # wait 3678113 00:08:31.091 01:29:43 -- target/filesystem.sh@102 -- # nvmfpid= 00:08:31.091 00:08:31.091 real 0m13.133s 00:08:31.091 user 0m50.671s 00:08:31.091 sys 0m1.836s 00:08:31.091 01:29:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:31.091 01:29:43 -- common/autotest_common.sh@10 -- # set +x 00:08:31.091 ************************************ 00:08:31.091 END TEST nvmf_filesystem_no_in_capsule 00:08:31.091 ************************************ 00:08:31.091 01:29:43 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:08:31.091 01:29:43 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:31.091 01:29:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:31.091 01:29:43 -- common/autotest_common.sh@10 -- # set +x 00:08:31.091 ************************************ 00:08:31.091 START TEST nvmf_filesystem_in_capsule 00:08:31.091 ************************************ 00:08:31.091 01:29:43 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_part 4096 00:08:31.091 01:29:43 -- target/filesystem.sh@47 -- # in_capsule=4096 00:08:31.091 01:29:43 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:31.091 01:29:43 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:31.091 01:29:43 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:31.091 01:29:43 -- common/autotest_common.sh@10 -- # set +x 00:08:31.091 01:29:43 -- nvmf/common.sh@469 -- # nvmfpid=3679968 00:08:31.091 01:29:43 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:31.091 01:29:43 -- nvmf/common.sh@470 -- # 
waitforlisten 3679968 00:08:31.091 01:29:43 -- common/autotest_common.sh@819 -- # '[' -z 3679968 ']' 00:08:31.091 01:29:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:31.091 01:29:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:31.091 01:29:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:31.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:31.091 01:29:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:31.091 01:29:43 -- common/autotest_common.sh@10 -- # set +x 00:08:31.091 [2024-07-23 01:29:43.966301] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:08:31.091 [2024-07-23 01:29:43.966389] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:31.091 EAL: No free 2048 kB hugepages reported on node 1 00:08:31.091 [2024-07-23 01:29:44.035120] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:31.091 [2024-07-23 01:29:44.125203] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:31.091 [2024-07-23 01:29:44.125368] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:31.091 [2024-07-23 01:29:44.125388] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:31.091 [2024-07-23 01:29:44.125403] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:31.091 [2024-07-23 01:29:44.125487] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:31.091 [2024-07-23 01:29:44.125540] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:31.091 [2024-07-23 01:29:44.125593] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:31.091 [2024-07-23 01:29:44.125596] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.030 01:29:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:32.030 01:29:44 -- common/autotest_common.sh@852 -- # return 0 00:08:32.030 01:29:44 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:32.030 01:29:44 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:32.030 01:29:44 -- common/autotest_common.sh@10 -- # set +x 00:08:32.030 01:29:44 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:32.030 01:29:44 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:32.030 01:29:44 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:08:32.030 01:29:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:32.030 01:29:44 -- common/autotest_common.sh@10 -- # set +x 00:08:32.030 [2024-07-23 01:29:44.955337] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:32.030 01:29:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:32.030 01:29:44 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:32.030 01:29:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:32.030 01:29:44 -- common/autotest_common.sh@10 -- # set +x 00:08:32.030 Malloc1 00:08:32.030 01:29:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:32.030 01:29:45 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:32.030 01:29:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:32.030 01:29:45 -- 
common/autotest_common.sh@10 -- # set +x 00:08:32.030 01:29:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:32.030 01:29:45 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:32.030 01:29:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:32.030 01:29:45 -- common/autotest_common.sh@10 -- # set +x 00:08:32.290 01:29:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:32.290 01:29:45 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:32.290 01:29:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:32.290 01:29:45 -- common/autotest_common.sh@10 -- # set +x 00:08:32.290 [2024-07-23 01:29:45.140097] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:32.290 01:29:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:32.290 01:29:45 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:32.290 01:29:45 -- common/autotest_common.sh@1357 -- # local bdev_name=Malloc1 00:08:32.290 01:29:45 -- common/autotest_common.sh@1358 -- # local bdev_info 00:08:32.290 01:29:45 -- common/autotest_common.sh@1359 -- # local bs 00:08:32.290 01:29:45 -- common/autotest_common.sh@1360 -- # local nb 00:08:32.290 01:29:45 -- common/autotest_common.sh@1361 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:32.290 01:29:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:32.290 01:29:45 -- common/autotest_common.sh@10 -- # set +x 00:08:32.290 01:29:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:32.290 01:29:45 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:08:32.290 { 00:08:32.290 "name": "Malloc1", 00:08:32.290 "aliases": [ 00:08:32.290 "92951f8c-199b-4ef9-a159-dc76067900fe" 00:08:32.290 ], 00:08:32.290 "product_name": "Malloc disk", 00:08:32.290 "block_size": 512, 00:08:32.290 "num_blocks": 1048576, 00:08:32.290 "uuid": 
"92951f8c-199b-4ef9-a159-dc76067900fe", 00:08:32.290 "assigned_rate_limits": { 00:08:32.290 "rw_ios_per_sec": 0, 00:08:32.290 "rw_mbytes_per_sec": 0, 00:08:32.290 "r_mbytes_per_sec": 0, 00:08:32.290 "w_mbytes_per_sec": 0 00:08:32.290 }, 00:08:32.290 "claimed": true, 00:08:32.290 "claim_type": "exclusive_write", 00:08:32.290 "zoned": false, 00:08:32.290 "supported_io_types": { 00:08:32.290 "read": true, 00:08:32.290 "write": true, 00:08:32.290 "unmap": true, 00:08:32.290 "write_zeroes": true, 00:08:32.290 "flush": true, 00:08:32.290 "reset": true, 00:08:32.290 "compare": false, 00:08:32.290 "compare_and_write": false, 00:08:32.290 "abort": true, 00:08:32.290 "nvme_admin": false, 00:08:32.290 "nvme_io": false 00:08:32.290 }, 00:08:32.290 "memory_domains": [ 00:08:32.290 { 00:08:32.290 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:32.290 "dma_device_type": 2 00:08:32.290 } 00:08:32.290 ], 00:08:32.290 "driver_specific": {} 00:08:32.290 } 00:08:32.290 ]' 00:08:32.290 01:29:45 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:08:32.290 01:29:45 -- common/autotest_common.sh@1362 -- # bs=512 00:08:32.290 01:29:45 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:08:32.290 01:29:45 -- common/autotest_common.sh@1363 -- # nb=1048576 00:08:32.290 01:29:45 -- common/autotest_common.sh@1366 -- # bdev_size=512 00:08:32.290 01:29:45 -- common/autotest_common.sh@1367 -- # echo 512 00:08:32.290 01:29:45 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:32.290 01:29:45 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:32.859 01:29:45 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:32.859 01:29:45 -- common/autotest_common.sh@1177 -- # local i=0 00:08:32.859 01:29:45 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 
nvme_devices=0 00:08:32.859 01:29:45 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:08:32.859 01:29:45 -- common/autotest_common.sh@1184 -- # sleep 2 00:08:34.766 01:29:47 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:08:34.767 01:29:47 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:08:34.767 01:29:47 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:08:34.767 01:29:47 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:08:34.767 01:29:47 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:08:34.767 01:29:47 -- common/autotest_common.sh@1187 -- # return 0 00:08:34.767 01:29:47 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:34.767 01:29:47 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:35.025 01:29:47 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:35.025 01:29:47 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:35.025 01:29:47 -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:35.025 01:29:47 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:35.025 01:29:47 -- setup/common.sh@80 -- # echo 536870912 00:08:35.025 01:29:47 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:35.025 01:29:47 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:35.025 01:29:47 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:35.025 01:29:47 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:35.025 01:29:48 -- target/filesystem.sh@69 -- # partprobe 00:08:35.964 01:29:48 -- target/filesystem.sh@70 -- # sleep 1 00:08:36.902 01:29:49 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:08:36.902 01:29:49 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:36.902 01:29:49 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:36.902 01:29:49 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:08:36.902 01:29:49 -- common/autotest_common.sh@10 -- # set +x 00:08:36.902 ************************************ 00:08:36.902 START TEST filesystem_in_capsule_ext4 00:08:36.902 ************************************ 00:08:36.902 01:29:49 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:36.902 01:29:49 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:36.902 01:29:49 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:36.902 01:29:49 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:36.902 01:29:49 -- common/autotest_common.sh@902 -- # local fstype=ext4 00:08:36.902 01:29:49 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:36.902 01:29:49 -- common/autotest_common.sh@904 -- # local i=0 00:08:36.902 01:29:49 -- common/autotest_common.sh@905 -- # local force 00:08:36.902 01:29:49 -- common/autotest_common.sh@907 -- # '[' ext4 = ext4 ']' 00:08:36.902 01:29:49 -- common/autotest_common.sh@908 -- # force=-F 00:08:36.902 01:29:49 -- common/autotest_common.sh@913 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:36.902 mke2fs 1.46.5 (30-Dec-2021) 00:08:36.902 Discarding device blocks: 0/522240 done 00:08:36.902 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:36.902 Filesystem UUID: a02b3d15-cd2e-468f-bb90-de72d773a5e2 00:08:36.902 Superblock backups stored on blocks: 00:08:36.902 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:36.902 00:08:36.902 Allocating group tables: 0/64 done 00:08:36.902 Writing inode tables: 0/64 done 00:08:37.161 Creating journal (8192 blocks): done 00:08:37.161 Writing superblocks and filesystem accounting information: 0/64 done 00:08:37.161 00:08:37.161 01:29:50 -- common/autotest_common.sh@921 -- # return 0 00:08:37.161 01:29:50 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:37.161 01:29:50 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:37.161 01:29:50 
-- target/filesystem.sh@25 -- # sync 00:08:37.161 01:29:50 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:37.161 01:29:50 -- target/filesystem.sh@27 -- # sync 00:08:37.161 01:29:50 -- target/filesystem.sh@29 -- # i=0 00:08:37.161 01:29:50 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:37.421 01:29:50 -- target/filesystem.sh@37 -- # kill -0 3679968 00:08:37.421 01:29:50 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:37.421 01:29:50 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:37.421 01:29:50 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:37.421 01:29:50 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:37.421 00:08:37.421 real 0m0.449s 00:08:37.421 user 0m0.011s 00:08:37.421 sys 0m0.055s 00:08:37.421 01:29:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:37.421 01:29:50 -- common/autotest_common.sh@10 -- # set +x 00:08:37.421 ************************************ 00:08:37.421 END TEST filesystem_in_capsule_ext4 00:08:37.421 ************************************ 00:08:37.421 01:29:50 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:37.421 01:29:50 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:37.421 01:29:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:37.421 01:29:50 -- common/autotest_common.sh@10 -- # set +x 00:08:37.421 ************************************ 00:08:37.421 START TEST filesystem_in_capsule_btrfs 00:08:37.421 ************************************ 00:08:37.421 01:29:50 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:37.421 01:29:50 -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:37.421 01:29:50 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:37.421 01:29:50 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:37.421 01:29:50 -- common/autotest_common.sh@902 -- # local fstype=btrfs 00:08:37.421 01:29:50 -- 
common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:37.421 01:29:50 -- common/autotest_common.sh@904 -- # local i=0 00:08:37.421 01:29:50 -- common/autotest_common.sh@905 -- # local force 00:08:37.421 01:29:50 -- common/autotest_common.sh@907 -- # '[' btrfs = ext4 ']' 00:08:37.421 01:29:50 -- common/autotest_common.sh@910 -- # force=-f 00:08:37.421 01:29:50 -- common/autotest_common.sh@913 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:37.421 btrfs-progs v6.6.2 00:08:37.421 See https://btrfs.readthedocs.io for more information. 00:08:37.421 00:08:37.421 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:08:37.421 NOTE: several default settings have changed in version 5.15, please make sure 00:08:37.421 this does not affect your deployments: 00:08:37.421 - DUP for metadata (-m dup) 00:08:37.421 - enabled no-holes (-O no-holes) 00:08:37.421 - enabled free-space-tree (-R free-space-tree) 00:08:37.421 00:08:37.421 Label: (null) 00:08:37.421 UUID: 251b6ff9-d670-44ea-9149-84cdad4faf2b 00:08:37.421 Node size: 16384 00:08:37.421 Sector size: 4096 00:08:37.421 Filesystem size: 510.00MiB 00:08:37.421 Block group profiles: 00:08:37.421 Data: single 8.00MiB 00:08:37.421 Metadata: DUP 32.00MiB 00:08:37.421 System: DUP 8.00MiB 00:08:37.421 SSD detected: yes 00:08:37.421 Zoned device: no 00:08:37.421 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:37.421 Runtime features: free-space-tree 00:08:37.421 Checksum: crc32c 00:08:37.421 Number of devices: 1 00:08:37.421 Devices: 00:08:37.421 ID SIZE PATH 00:08:37.421 1 510.00MiB /dev/nvme0n1p1 00:08:37.421 00:08:37.421 01:29:50 -- common/autotest_common.sh@921 -- # return 0 00:08:37.421 01:29:50 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:38.358 01:29:51 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:38.358 01:29:51 -- target/filesystem.sh@25 -- # sync 00:08:38.358 01:29:51 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:38.359 01:29:51 
-- target/filesystem.sh@27 -- # sync 00:08:38.359 01:29:51 -- target/filesystem.sh@29 -- # i=0 00:08:38.359 01:29:51 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:38.359 01:29:51 -- target/filesystem.sh@37 -- # kill -0 3679968 00:08:38.359 01:29:51 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:38.359 01:29:51 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:38.359 01:29:51 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:38.359 01:29:51 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:38.359 00:08:38.359 real 0m0.999s 00:08:38.359 user 0m0.016s 00:08:38.359 sys 0m0.117s 00:08:38.359 01:29:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:38.359 01:29:51 -- common/autotest_common.sh@10 -- # set +x 00:08:38.359 ************************************ 00:08:38.359 END TEST filesystem_in_capsule_btrfs 00:08:38.359 ************************************ 00:08:38.359 01:29:51 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:38.359 01:29:51 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:38.359 01:29:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:38.359 01:29:51 -- common/autotest_common.sh@10 -- # set +x 00:08:38.359 ************************************ 00:08:38.359 START TEST filesystem_in_capsule_xfs 00:08:38.359 ************************************ 00:08:38.359 01:29:51 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create xfs nvme0n1 00:08:38.359 01:29:51 -- target/filesystem.sh@18 -- # fstype=xfs 00:08:38.359 01:29:51 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:38.359 01:29:51 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:38.359 01:29:51 -- common/autotest_common.sh@902 -- # local fstype=xfs 00:08:38.359 01:29:51 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:38.359 01:29:51 -- common/autotest_common.sh@904 -- # local i=0 00:08:38.359 01:29:51 -- 
common/autotest_common.sh@905 -- # local force 00:08:38.359 01:29:51 -- common/autotest_common.sh@907 -- # '[' xfs = ext4 ']' 00:08:38.359 01:29:51 -- common/autotest_common.sh@910 -- # force=-f 00:08:38.359 01:29:51 -- common/autotest_common.sh@913 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:38.359 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:38.359 = sectsz=512 attr=2, projid32bit=1 00:08:38.359 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:38.359 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:38.359 data = bsize=4096 blocks=130560, imaxpct=25 00:08:38.359 = sunit=0 swidth=0 blks 00:08:38.359 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:38.359 log =internal log bsize=4096 blocks=16384, version=2 00:08:38.359 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:38.359 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:39.741 Discarding blocks...Done. 00:08:39.741 01:29:52 -- common/autotest_common.sh@921 -- # return 0 00:08:39.741 01:29:52 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:41.656 01:29:54 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:41.656 01:29:54 -- target/filesystem.sh@25 -- # sync 00:08:41.656 01:29:54 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:41.656 01:29:54 -- target/filesystem.sh@27 -- # sync 00:08:41.656 01:29:54 -- target/filesystem.sh@29 -- # i=0 00:08:41.656 01:29:54 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:41.656 01:29:54 -- target/filesystem.sh@37 -- # kill -0 3679968 00:08:41.656 01:29:54 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:41.656 01:29:54 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:41.656 01:29:54 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:41.656 01:29:54 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:41.656 00:08:41.656 real 0m3.137s 00:08:41.656 user 0m0.012s 00:08:41.656 sys 0m0.067s 00:08:41.656 01:29:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:41.656 01:29:54 -- 
common/autotest_common.sh@10 -- # set +x 00:08:41.656 ************************************ 00:08:41.656 END TEST filesystem_in_capsule_xfs 00:08:41.656 ************************************ 00:08:41.656 01:29:54 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:41.914 01:29:54 -- target/filesystem.sh@93 -- # sync 00:08:41.914 01:29:54 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:41.914 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:41.914 01:29:54 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:41.914 01:29:54 -- common/autotest_common.sh@1198 -- # local i=0 00:08:41.914 01:29:54 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:08:41.914 01:29:54 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:41.914 01:29:54 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:08:41.914 01:29:54 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:41.914 01:29:54 -- common/autotest_common.sh@1210 -- # return 0 00:08:41.914 01:29:54 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:41.914 01:29:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:41.914 01:29:54 -- common/autotest_common.sh@10 -- # set +x 00:08:41.914 01:29:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:41.915 01:29:54 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:41.915 01:29:54 -- target/filesystem.sh@101 -- # killprocess 3679968 00:08:41.915 01:29:54 -- common/autotest_common.sh@926 -- # '[' -z 3679968 ']' 00:08:41.915 01:29:54 -- common/autotest_common.sh@930 -- # kill -0 3679968 00:08:41.915 01:29:54 -- common/autotest_common.sh@931 -- # uname 00:08:41.915 01:29:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:41.915 01:29:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3679968 
00:08:41.915 01:29:54 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:41.915 01:29:54 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:41.915 01:29:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3679968' 00:08:41.915 killing process with pid 3679968 00:08:41.915 01:29:54 -- common/autotest_common.sh@945 -- # kill 3679968 00:08:41.915 01:29:54 -- common/autotest_common.sh@950 -- # wait 3679968 00:08:42.483 01:29:55 -- target/filesystem.sh@102 -- # nvmfpid= 00:08:42.483 00:08:42.483 real 0m11.414s 00:08:42.483 user 0m43.985s 00:08:42.483 sys 0m1.743s 00:08:42.483 01:29:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:42.483 01:29:55 -- common/autotest_common.sh@10 -- # set +x 00:08:42.483 ************************************ 00:08:42.483 END TEST nvmf_filesystem_in_capsule 00:08:42.483 ************************************ 00:08:42.483 01:29:55 -- target/filesystem.sh@108 -- # nvmftestfini 00:08:42.483 01:29:55 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:42.483 01:29:55 -- nvmf/common.sh@116 -- # sync 00:08:42.483 01:29:55 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:42.483 01:29:55 -- nvmf/common.sh@119 -- # set +e 00:08:42.483 01:29:55 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:42.483 01:29:55 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:42.483 rmmod nvme_tcp 00:08:42.483 rmmod nvme_fabrics 00:08:42.483 rmmod nvme_keyring 00:08:42.483 01:29:55 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:42.483 01:29:55 -- nvmf/common.sh@123 -- # set -e 00:08:42.483 01:29:55 -- nvmf/common.sh@124 -- # return 0 00:08:42.483 01:29:55 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:08:42.483 01:29:55 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:42.483 01:29:55 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:42.483 01:29:55 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:42.483 01:29:55 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
00:08:42.483 01:29:55 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:42.483 01:29:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:42.483 01:29:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:42.483 01:29:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:44.422 01:29:57 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:08:44.422 00:08:44.422 real 0m29.026s 00:08:44.422 user 1m35.530s 00:08:44.422 sys 0m5.203s 00:08:44.422 01:29:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:44.422 01:29:57 -- common/autotest_common.sh@10 -- # set +x 00:08:44.422 ************************************ 00:08:44.422 END TEST nvmf_filesystem 00:08:44.422 ************************************ 00:08:44.422 01:29:57 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:44.422 01:29:57 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:44.422 01:29:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:44.422 01:29:57 -- common/autotest_common.sh@10 -- # set +x 00:08:44.422 ************************************ 00:08:44.422 START TEST nvmf_discovery 00:08:44.422 ************************************ 00:08:44.422 01:29:57 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:44.422 * Looking for test storage... 
00:08:44.422 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:44.422 01:29:57 -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:44.422 01:29:57 -- nvmf/common.sh@7 -- # uname -s 00:08:44.422 01:29:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:44.422 01:29:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:44.422 01:29:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:44.422 01:29:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:44.422 01:29:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:44.422 01:29:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:44.422 01:29:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:44.422 01:29:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:44.422 01:29:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:44.422 01:29:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:44.680 01:29:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:44.680 01:29:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:44.680 01:29:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:44.680 01:29:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:44.680 01:29:57 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:44.680 01:29:57 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:44.680 01:29:57 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:44.680 01:29:57 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:44.680 01:29:57 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:44.680 01:29:57 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.680 01:29:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.680 01:29:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.680 01:29:57 -- paths/export.sh@5 -- # export PATH 00:08:44.680 01:29:57 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.680 01:29:57 -- nvmf/common.sh@46 -- # : 0 00:08:44.680 01:29:57 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:44.680 01:29:57 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:44.680 01:29:57 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:44.680 01:29:57 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:44.680 01:29:57 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:44.680 01:29:57 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:44.680 01:29:57 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:44.680 01:29:57 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:44.680 01:29:57 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:44.680 01:29:57 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:44.680 01:29:57 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:44.680 01:29:57 -- target/discovery.sh@15 -- # hash nvme 00:08:44.680 01:29:57 -- target/discovery.sh@20 -- # nvmftestinit 00:08:44.680 01:29:57 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:44.680 01:29:57 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:44.680 01:29:57 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:44.680 01:29:57 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:44.680 01:29:57 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:44.680 01:29:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:44.680 01:29:57 -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 14> /dev/null' 00:08:44.680 01:29:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:44.680 01:29:57 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:44.680 01:29:57 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:44.680 01:29:57 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:44.680 01:29:57 -- common/autotest_common.sh@10 -- # set +x 00:08:46.583 01:29:59 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:46.583 01:29:59 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:46.583 01:29:59 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:46.583 01:29:59 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:46.583 01:29:59 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:46.583 01:29:59 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:46.583 01:29:59 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:46.583 01:29:59 -- nvmf/common.sh@294 -- # net_devs=() 00:08:46.583 01:29:59 -- nvmf/common.sh@294 -- # local -ga net_devs 00:08:46.583 01:29:59 -- nvmf/common.sh@295 -- # e810=() 00:08:46.583 01:29:59 -- nvmf/common.sh@295 -- # local -ga e810 00:08:46.583 01:29:59 -- nvmf/common.sh@296 -- # x722=() 00:08:46.583 01:29:59 -- nvmf/common.sh@296 -- # local -ga x722 00:08:46.583 01:29:59 -- nvmf/common.sh@297 -- # mlx=() 00:08:46.583 01:29:59 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:46.583 01:29:59 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:46.583 01:29:59 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:46.583 01:29:59 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:46.583 01:29:59 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:46.583 01:29:59 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:46.583 01:29:59 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:46.583 01:29:59 -- nvmf/common.sh@311 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:46.583 01:29:59 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:46.583 01:29:59 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:46.583 01:29:59 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:46.583 01:29:59 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:46.583 01:29:59 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:46.583 01:29:59 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:08:46.583 01:29:59 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:08:46.583 01:29:59 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:08:46.583 01:29:59 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:08:46.583 01:29:59 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:46.583 01:29:59 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:46.583 01:29:59 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:46.583 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:46.583 01:29:59 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:46.583 01:29:59 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:46.583 01:29:59 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:46.583 01:29:59 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:46.583 01:29:59 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:46.583 01:29:59 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:46.583 01:29:59 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:46.583 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:46.583 01:29:59 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:46.583 01:29:59 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:46.583 01:29:59 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:46.583 01:29:59 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:46.583 01:29:59 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 
00:08:46.583 01:29:59 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:46.583 01:29:59 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:08:46.583 01:29:59 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:08:46.583 01:29:59 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:46.583 01:29:59 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:46.583 01:29:59 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:46.583 01:29:59 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:46.583 01:29:59 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:46.583 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:46.583 01:29:59 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:46.583 01:29:59 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:46.583 01:29:59 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:46.583 01:29:59 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:46.583 01:29:59 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:46.583 01:29:59 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:46.583 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:46.583 01:29:59 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:46.583 01:29:59 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:46.583 01:29:59 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:46.583 01:29:59 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:46.583 01:29:59 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:08:46.583 01:29:59 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:08:46.583 01:29:59 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:46.583 01:29:59 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:46.583 01:29:59 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:46.583 01:29:59 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:08:46.583 01:29:59 -- nvmf/common.sh@235 -- 
# NVMF_TARGET_INTERFACE=cvl_0_0 00:08:46.583 01:29:59 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:46.583 01:29:59 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:08:46.583 01:29:59 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:46.583 01:29:59 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:46.583 01:29:59 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:08:46.583 01:29:59 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:08:46.583 01:29:59 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:08:46.583 01:29:59 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:46.583 01:29:59 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:46.583 01:29:59 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:46.583 01:29:59 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:08:46.583 01:29:59 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:46.583 01:29:59 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:46.583 01:29:59 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:46.583 01:29:59 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:08:46.583 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:46.583 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:08:46.583 00:08:46.583 --- 10.0.0.2 ping statistics --- 00:08:46.583 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:46.583 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:08:46.583 01:29:59 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:46.583 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:46.583 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms 00:08:46.583 00:08:46.583 --- 10.0.0.1 ping statistics --- 00:08:46.583 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:46.583 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:08:46.583 01:29:59 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:46.583 01:29:59 -- nvmf/common.sh@410 -- # return 0 00:08:46.583 01:29:59 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:46.583 01:29:59 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:46.583 01:29:59 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:46.583 01:29:59 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:46.583 01:29:59 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:46.583 01:29:59 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:46.583 01:29:59 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:46.840 01:29:59 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:46.841 01:29:59 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:46.841 01:29:59 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:46.841 01:29:59 -- common/autotest_common.sh@10 -- # set +x 00:08:46.841 01:29:59 -- nvmf/common.sh@469 -- # nvmfpid=3683489 00:08:46.841 01:29:59 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:46.841 01:29:59 -- nvmf/common.sh@470 -- # waitforlisten 3683489 00:08:46.841 01:29:59 -- common/autotest_common.sh@819 -- # '[' -z 3683489 ']' 00:08:46.841 01:29:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:46.841 01:29:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:46.841 01:29:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:46.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:46.841 01:29:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:46.841 01:29:59 -- common/autotest_common.sh@10 -- # set +x 00:08:46.841 [2024-07-23 01:29:59.733938] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:08:46.841 [2024-07-23 01:29:59.734024] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:46.841 EAL: No free 2048 kB hugepages reported on node 1 00:08:46.841 [2024-07-23 01:29:59.804696] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:46.841 [2024-07-23 01:29:59.897274] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:46.841 [2024-07-23 01:29:59.897467] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:46.841 [2024-07-23 01:29:59.897487] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:46.841 [2024-07-23 01:29:59.897502] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:46.841 [2024-07-23 01:29:59.897599] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:46.841 [2024-07-23 01:29:59.897654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:46.841 [2024-07-23 01:29:59.897683] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:46.841 [2024-07-23 01:29:59.897686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.774 01:30:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:47.774 01:30:00 -- common/autotest_common.sh@852 -- # return 0 00:08:47.774 01:30:00 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:47.774 01:30:00 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:47.774 01:30:00 -- common/autotest_common.sh@10 -- # set +x 00:08:47.775 01:30:00 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:47.775 01:30:00 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:47.775 01:30:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:47.775 01:30:00 -- common/autotest_common.sh@10 -- # set +x 00:08:47.775 [2024-07-23 01:30:00.703340] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:47.775 01:30:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:47.775 01:30:00 -- target/discovery.sh@26 -- # seq 1 4 00:08:47.775 01:30:00 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:47.775 01:30:00 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:08:47.775 01:30:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:47.775 01:30:00 -- common/autotest_common.sh@10 -- # set +x 00:08:47.775 Null1 00:08:47.775 01:30:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:47.775 01:30:00 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:47.775 01:30:00 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:08:47.775 01:30:00 -- common/autotest_common.sh@10 -- # set +x 00:08:47.775 01:30:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:47.775 01:30:00 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:47.775 01:30:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:47.775 01:30:00 -- common/autotest_common.sh@10 -- # set +x 00:08:47.775 01:30:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:47.775 01:30:00 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:47.775 01:30:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:47.775 01:30:00 -- common/autotest_common.sh@10 -- # set +x 00:08:47.775 [2024-07-23 01:30:00.743607] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:47.775 01:30:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:47.775 01:30:00 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:47.775 01:30:00 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:47.775 01:30:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:47.775 01:30:00 -- common/autotest_common.sh@10 -- # set +x 00:08:47.775 Null2 00:08:47.775 01:30:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:47.775 01:30:00 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:47.775 01:30:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:47.775 01:30:00 -- common/autotest_common.sh@10 -- # set +x 00:08:47.775 01:30:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:47.775 01:30:00 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:47.775 01:30:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:47.775 01:30:00 -- common/autotest_common.sh@10 -- # set +x 00:08:47.775 
01:30:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:47.775 01:30:00 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:47.775 01:30:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:47.775 01:30:00 -- common/autotest_common.sh@10 -- # set +x 00:08:47.775 01:30:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:47.775 01:30:00 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:47.775 01:30:00 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:47.775 01:30:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:47.775 01:30:00 -- common/autotest_common.sh@10 -- # set +x 00:08:47.775 Null3 00:08:47.775 01:30:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:47.775 01:30:00 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:47.775 01:30:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:47.775 01:30:00 -- common/autotest_common.sh@10 -- # set +x 00:08:47.775 01:30:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:47.775 01:30:00 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:47.775 01:30:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:47.775 01:30:00 -- common/autotest_common.sh@10 -- # set +x 00:08:47.775 01:30:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:47.775 01:30:00 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:47.775 01:30:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:47.775 01:30:00 -- common/autotest_common.sh@10 -- # set +x 00:08:47.775 01:30:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:47.775 01:30:00 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:47.775 01:30:00 -- target/discovery.sh@27 -- # rpc_cmd 
bdev_null_create Null4 102400 512 00:08:47.775 01:30:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:47.775 01:30:00 -- common/autotest_common.sh@10 -- # set +x 00:08:47.775 Null4 00:08:47.775 01:30:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:47.775 01:30:00 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:47.775 01:30:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:47.775 01:30:00 -- common/autotest_common.sh@10 -- # set +x 00:08:47.775 01:30:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:47.775 01:30:00 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:47.775 01:30:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:47.775 01:30:00 -- common/autotest_common.sh@10 -- # set +x 00:08:47.775 01:30:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:47.775 01:30:00 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:08:47.775 01:30:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:47.775 01:30:00 -- common/autotest_common.sh@10 -- # set +x 00:08:47.775 01:30:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:47.775 01:30:00 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:47.775 01:30:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:47.775 01:30:00 -- common/autotest_common.sh@10 -- # set +x 00:08:47.775 01:30:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:47.775 01:30:00 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:08:47.775 01:30:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:47.775 01:30:00 -- common/autotest_common.sh@10 -- # set +x 00:08:47.775 01:30:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:47.775 01:30:00 -- 
target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:08:48.033 00:08:48.033 Discovery Log Number of Records 6, Generation counter 6 00:08:48.033 =====Discovery Log Entry 0====== 00:08:48.033 trtype: tcp 00:08:48.033 adrfam: ipv4 00:08:48.033 subtype: current discovery subsystem 00:08:48.033 treq: not required 00:08:48.033 portid: 0 00:08:48.033 trsvcid: 4420 00:08:48.033 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:48.033 traddr: 10.0.0.2 00:08:48.033 eflags: explicit discovery connections, duplicate discovery information 00:08:48.033 sectype: none 00:08:48.033 =====Discovery Log Entry 1====== 00:08:48.033 trtype: tcp 00:08:48.033 adrfam: ipv4 00:08:48.033 subtype: nvme subsystem 00:08:48.033 treq: not required 00:08:48.033 portid: 0 00:08:48.033 trsvcid: 4420 00:08:48.033 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:48.033 traddr: 10.0.0.2 00:08:48.033 eflags: none 00:08:48.033 sectype: none 00:08:48.033 =====Discovery Log Entry 2====== 00:08:48.033 trtype: tcp 00:08:48.033 adrfam: ipv4 00:08:48.033 subtype: nvme subsystem 00:08:48.033 treq: not required 00:08:48.033 portid: 0 00:08:48.033 trsvcid: 4420 00:08:48.033 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:48.033 traddr: 10.0.0.2 00:08:48.033 eflags: none 00:08:48.033 sectype: none 00:08:48.033 =====Discovery Log Entry 3====== 00:08:48.033 trtype: tcp 00:08:48.033 adrfam: ipv4 00:08:48.033 subtype: nvme subsystem 00:08:48.033 treq: not required 00:08:48.033 portid: 0 00:08:48.033 trsvcid: 4420 00:08:48.033 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:48.033 traddr: 10.0.0.2 00:08:48.033 eflags: none 00:08:48.033 sectype: none 00:08:48.033 =====Discovery Log Entry 4====== 00:08:48.033 trtype: tcp 00:08:48.033 adrfam: ipv4 00:08:48.033 subtype: nvme subsystem 00:08:48.033 treq: not required 00:08:48.033 portid: 0 00:08:48.033 trsvcid: 4420 00:08:48.033 subnqn: 
nqn.2016-06.io.spdk:cnode4 00:08:48.033 traddr: 10.0.0.2 00:08:48.033 eflags: none 00:08:48.033 sectype: none 00:08:48.033 =====Discovery Log Entry 5====== 00:08:48.033 trtype: tcp 00:08:48.033 adrfam: ipv4 00:08:48.033 subtype: discovery subsystem referral 00:08:48.033 treq: not required 00:08:48.033 portid: 0 00:08:48.033 trsvcid: 4430 00:08:48.033 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:48.033 traddr: 10.0.0.2 00:08:48.033 eflags: none 00:08:48.033 sectype: none 00:08:48.033 01:30:01 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:48.033 Perform nvmf subsystem discovery via RPC 00:08:48.033 01:30:01 -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:48.033 01:30:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:48.033 01:30:01 -- common/autotest_common.sh@10 -- # set +x 00:08:48.033 [2024-07-23 01:30:01.032381] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:08:48.033 [ 00:08:48.033 { 00:08:48.033 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:48.033 "subtype": "Discovery", 00:08:48.033 "listen_addresses": [ 00:08:48.033 { 00:08:48.033 "transport": "TCP", 00:08:48.033 "trtype": "TCP", 00:08:48.033 "adrfam": "IPv4", 00:08:48.033 "traddr": "10.0.0.2", 00:08:48.033 "trsvcid": "4420" 00:08:48.033 } 00:08:48.033 ], 00:08:48.033 "allow_any_host": true, 00:08:48.033 "hosts": [] 00:08:48.033 }, 00:08:48.033 { 00:08:48.033 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:48.033 "subtype": "NVMe", 00:08:48.033 "listen_addresses": [ 00:08:48.033 { 00:08:48.033 "transport": "TCP", 00:08:48.033 "trtype": "TCP", 00:08:48.033 "adrfam": "IPv4", 00:08:48.033 "traddr": "10.0.0.2", 00:08:48.033 "trsvcid": "4420" 00:08:48.033 } 00:08:48.033 ], 00:08:48.033 "allow_any_host": true, 00:08:48.033 "hosts": [], 00:08:48.033 "serial_number": "SPDK00000000000001", 00:08:48.033 "model_number": 
"SPDK bdev Controller", 00:08:48.033 "max_namespaces": 32, 00:08:48.033 "min_cntlid": 1, 00:08:48.033 "max_cntlid": 65519, 00:08:48.033 "namespaces": [ 00:08:48.033 { 00:08:48.033 "nsid": 1, 00:08:48.033 "bdev_name": "Null1", 00:08:48.033 "name": "Null1", 00:08:48.033 "nguid": "F88ED52435AB4F828FCB5E58BEB8FA08", 00:08:48.033 "uuid": "f88ed524-35ab-4f82-8fcb-5e58beb8fa08" 00:08:48.033 } 00:08:48.033 ] 00:08:48.033 }, 00:08:48.033 { 00:08:48.033 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:48.033 "subtype": "NVMe", 00:08:48.033 "listen_addresses": [ 00:08:48.033 { 00:08:48.033 "transport": "TCP", 00:08:48.033 "trtype": "TCP", 00:08:48.033 "adrfam": "IPv4", 00:08:48.033 "traddr": "10.0.0.2", 00:08:48.033 "trsvcid": "4420" 00:08:48.033 } 00:08:48.033 ], 00:08:48.033 "allow_any_host": true, 00:08:48.033 "hosts": [], 00:08:48.033 "serial_number": "SPDK00000000000002", 00:08:48.033 "model_number": "SPDK bdev Controller", 00:08:48.033 "max_namespaces": 32, 00:08:48.033 "min_cntlid": 1, 00:08:48.033 "max_cntlid": 65519, 00:08:48.033 "namespaces": [ 00:08:48.033 { 00:08:48.033 "nsid": 1, 00:08:48.033 "bdev_name": "Null2", 00:08:48.033 "name": "Null2", 00:08:48.033 "nguid": "1697F66F35F34D1B879E8251BAC2FD3C", 00:08:48.033 "uuid": "1697f66f-35f3-4d1b-879e-8251bac2fd3c" 00:08:48.033 } 00:08:48.033 ] 00:08:48.033 }, 00:08:48.033 { 00:08:48.033 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:48.033 "subtype": "NVMe", 00:08:48.033 "listen_addresses": [ 00:08:48.033 { 00:08:48.033 "transport": "TCP", 00:08:48.033 "trtype": "TCP", 00:08:48.033 "adrfam": "IPv4", 00:08:48.033 "traddr": "10.0.0.2", 00:08:48.033 "trsvcid": "4420" 00:08:48.033 } 00:08:48.033 ], 00:08:48.033 "allow_any_host": true, 00:08:48.033 "hosts": [], 00:08:48.033 "serial_number": "SPDK00000000000003", 00:08:48.033 "model_number": "SPDK bdev Controller", 00:08:48.033 "max_namespaces": 32, 00:08:48.033 "min_cntlid": 1, 00:08:48.033 "max_cntlid": 65519, 00:08:48.033 "namespaces": [ 00:08:48.033 { 00:08:48.033 "nsid": 1, 
00:08:48.033 "bdev_name": "Null3", 00:08:48.033 "name": "Null3", 00:08:48.033 "nguid": "7118495EC1384DC1ADB16ADB39D31473", 00:08:48.033 "uuid": "7118495e-c138-4dc1-adb1-6adb39d31473" 00:08:48.033 } 00:08:48.033 ] 00:08:48.033 }, 00:08:48.033 { 00:08:48.033 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:48.033 "subtype": "NVMe", 00:08:48.033 "listen_addresses": [ 00:08:48.033 { 00:08:48.033 "transport": "TCP", 00:08:48.033 "trtype": "TCP", 00:08:48.033 "adrfam": "IPv4", 00:08:48.033 "traddr": "10.0.0.2", 00:08:48.033 "trsvcid": "4420" 00:08:48.033 } 00:08:48.033 ], 00:08:48.033 "allow_any_host": true, 00:08:48.033 "hosts": [], 00:08:48.033 "serial_number": "SPDK00000000000004", 00:08:48.033 "model_number": "SPDK bdev Controller", 00:08:48.033 "max_namespaces": 32, 00:08:48.033 "min_cntlid": 1, 00:08:48.033 "max_cntlid": 65519, 00:08:48.033 "namespaces": [ 00:08:48.033 { 00:08:48.033 "nsid": 1, 00:08:48.033 "bdev_name": "Null4", 00:08:48.033 "name": "Null4", 00:08:48.033 "nguid": "41FAEAB5482B42D3B6420FDD7EA4A38F", 00:08:48.033 "uuid": "41faeab5-482b-42d3-b642-0fdd7ea4a38f" 00:08:48.033 } 00:08:48.033 ] 00:08:48.033 } 00:08:48.033 ] 00:08:48.033 01:30:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:48.033 01:30:01 -- target/discovery.sh@42 -- # seq 1 4 00:08:48.033 01:30:01 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:48.033 01:30:01 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:48.033 01:30:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:48.033 01:30:01 -- common/autotest_common.sh@10 -- # set +x 00:08:48.033 01:30:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:48.033 01:30:01 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:48.033 01:30:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:48.033 01:30:01 -- common/autotest_common.sh@10 -- # set +x 00:08:48.033 01:30:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:48.033 01:30:01 -- 
target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:48.033 01:30:01 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:48.033 01:30:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:48.033 01:30:01 -- common/autotest_common.sh@10 -- # set +x 00:08:48.033 01:30:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:48.033 01:30:01 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:48.033 01:30:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:48.033 01:30:01 -- common/autotest_common.sh@10 -- # set +x 00:08:48.033 01:30:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:48.033 01:30:01 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:48.033 01:30:01 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:48.033 01:30:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:48.033 01:30:01 -- common/autotest_common.sh@10 -- # set +x 00:08:48.033 01:30:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:48.033 01:30:01 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:48.033 01:30:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:48.033 01:30:01 -- common/autotest_common.sh@10 -- # set +x 00:08:48.033 01:30:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:48.033 01:30:01 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:48.033 01:30:01 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:48.033 01:30:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:48.033 01:30:01 -- common/autotest_common.sh@10 -- # set +x 00:08:48.033 01:30:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:48.033 01:30:01 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:48.033 01:30:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:48.034 01:30:01 -- common/autotest_common.sh@10 -- # set +x 00:08:48.034 
01:30:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:48.034 01:30:01 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:08:48.034 01:30:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:48.034 01:30:01 -- common/autotest_common.sh@10 -- # set +x 00:08:48.034 01:30:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:48.034 01:30:01 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:48.034 01:30:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:48.034 01:30:01 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:48.034 01:30:01 -- common/autotest_common.sh@10 -- # set +x 00:08:48.034 01:30:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:48.292 01:30:01 -- target/discovery.sh@49 -- # check_bdevs= 00:08:48.292 01:30:01 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:48.292 01:30:01 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:48.292 01:30:01 -- target/discovery.sh@57 -- # nvmftestfini 00:08:48.292 01:30:01 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:48.292 01:30:01 -- nvmf/common.sh@116 -- # sync 00:08:48.292 01:30:01 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:48.292 01:30:01 -- nvmf/common.sh@119 -- # set +e 00:08:48.292 01:30:01 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:48.292 01:30:01 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:48.292 rmmod nvme_tcp 00:08:48.292 rmmod nvme_fabrics 00:08:48.292 rmmod nvme_keyring 00:08:48.292 01:30:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:48.292 01:30:01 -- nvmf/common.sh@123 -- # set -e 00:08:48.292 01:30:01 -- nvmf/common.sh@124 -- # return 0 00:08:48.292 01:30:01 -- nvmf/common.sh@477 -- # '[' -n 3683489 ']' 00:08:48.292 01:30:01 -- nvmf/common.sh@478 -- # killprocess 3683489 00:08:48.292 01:30:01 -- common/autotest_common.sh@926 -- # '[' -z 3683489 ']' 00:08:48.292 01:30:01 -- common/autotest_common.sh@930 -- # kill -0 3683489 00:08:48.292 
01:30:01 -- common/autotest_common.sh@931 -- # uname 00:08:48.292 01:30:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:48.292 01:30:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3683489 00:08:48.292 01:30:01 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:48.292 01:30:01 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:48.292 01:30:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3683489' 00:08:48.292 killing process with pid 3683489 00:08:48.292 01:30:01 -- common/autotest_common.sh@945 -- # kill 3683489 00:08:48.292 [2024-07-23 01:30:01.241138] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:08:48.292 01:30:01 -- common/autotest_common.sh@950 -- # wait 3683489 00:08:48.549 01:30:01 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:48.549 01:30:01 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:48.549 01:30:01 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:48.549 01:30:01 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:48.549 01:30:01 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:48.549 01:30:01 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:48.549 01:30:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:48.549 01:30:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:50.467 01:30:03 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:08:50.467 00:08:50.467 real 0m6.036s 00:08:50.467 user 0m7.157s 00:08:50.467 sys 0m1.870s 00:08:50.467 01:30:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:50.467 01:30:03 -- common/autotest_common.sh@10 -- # set +x 00:08:50.467 ************************************ 00:08:50.467 END TEST nvmf_discovery 00:08:50.467 ************************************ 00:08:50.467 01:30:03 -- 
nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:50.467 01:30:03 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:50.467 01:30:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:50.467 01:30:03 -- common/autotest_common.sh@10 -- # set +x 00:08:50.468 ************************************ 00:08:50.468 START TEST nvmf_referrals 00:08:50.468 ************************************ 00:08:50.468 01:30:03 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:50.726 * Looking for test storage... 00:08:50.726 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:50.726 01:30:03 -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:50.726 01:30:03 -- nvmf/common.sh@7 -- # uname -s 00:08:50.726 01:30:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:50.726 01:30:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:50.726 01:30:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:50.726 01:30:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:50.726 01:30:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:50.726 01:30:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:50.726 01:30:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:50.726 01:30:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:50.726 01:30:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:50.726 01:30:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:50.726 01:30:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:50.726 01:30:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:50.726 01:30:03 -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:50.726 01:30:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:50.726 01:30:03 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:50.726 01:30:03 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:50.726 01:30:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:50.726 01:30:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:50.726 01:30:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:50.726 01:30:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.726 01:30:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.726 01:30:03 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.726 01:30:03 -- paths/export.sh@5 -- # export PATH 00:08:50.726 01:30:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.726 01:30:03 -- nvmf/common.sh@46 -- # : 0 00:08:50.726 01:30:03 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:50.726 01:30:03 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:50.726 01:30:03 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:50.726 01:30:03 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:50.726 01:30:03 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:50.726 01:30:03 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:50.726 01:30:03 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:50.726 01:30:03 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:50.726 01:30:03 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:50.726 01:30:03 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:50.726 01:30:03 -- 
target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:08:50.726 01:30:03 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:50.726 01:30:03 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:50.726 01:30:03 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:50.726 01:30:03 -- target/referrals.sh@37 -- # nvmftestinit 00:08:50.726 01:30:03 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:50.726 01:30:03 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:50.726 01:30:03 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:50.726 01:30:03 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:50.726 01:30:03 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:50.726 01:30:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:50.726 01:30:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:50.726 01:30:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:50.726 01:30:03 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:50.726 01:30:03 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:50.726 01:30:03 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:50.726 01:30:03 -- common/autotest_common.sh@10 -- # set +x 00:08:52.634 01:30:05 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:52.634 01:30:05 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:52.634 01:30:05 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:52.634 01:30:05 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:52.634 01:30:05 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:52.634 01:30:05 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:52.634 01:30:05 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:52.634 01:30:05 -- nvmf/common.sh@294 -- # net_devs=() 00:08:52.634 01:30:05 -- nvmf/common.sh@294 -- # local -ga net_devs 00:08:52.634 01:30:05 -- nvmf/common.sh@295 -- # e810=() 00:08:52.634 01:30:05 -- nvmf/common.sh@295 -- # local 
-ga e810 00:08:52.634 01:30:05 -- nvmf/common.sh@296 -- # x722=() 00:08:52.634 01:30:05 -- nvmf/common.sh@296 -- # local -ga x722 00:08:52.634 01:30:05 -- nvmf/common.sh@297 -- # mlx=() 00:08:52.634 01:30:05 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:52.634 01:30:05 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:52.634 01:30:05 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:52.634 01:30:05 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:52.634 01:30:05 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:52.634 01:30:05 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:52.634 01:30:05 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:52.634 01:30:05 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:52.634 01:30:05 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:52.634 01:30:05 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:52.634 01:30:05 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:52.634 01:30:05 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:52.634 01:30:05 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:52.634 01:30:05 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:08:52.634 01:30:05 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:08:52.634 01:30:05 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:08:52.634 01:30:05 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:08:52.634 01:30:05 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:52.634 01:30:05 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:52.634 01:30:05 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:52.634 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:52.634 01:30:05 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:52.634 01:30:05 -- 
nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:52.634 01:30:05 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:52.634 01:30:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:52.634 01:30:05 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:52.634 01:30:05 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:52.634 01:30:05 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:52.634 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:52.634 01:30:05 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:52.634 01:30:05 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:52.634 01:30:05 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:52.634 01:30:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:52.634 01:30:05 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:52.634 01:30:05 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:52.634 01:30:05 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:08:52.634 01:30:05 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:08:52.634 01:30:05 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:52.634 01:30:05 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:52.634 01:30:05 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:52.634 01:30:05 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:52.634 01:30:05 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:52.634 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:52.634 01:30:05 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:52.634 01:30:05 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:52.634 01:30:05 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:52.635 01:30:05 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:52.635 01:30:05 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:52.635 01:30:05 -- nvmf/common.sh@388 -- # echo 
'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:52.635 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:52.635 01:30:05 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:52.635 01:30:05 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:52.635 01:30:05 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:52.635 01:30:05 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:52.635 01:30:05 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:08:52.635 01:30:05 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:08:52.635 01:30:05 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:52.635 01:30:05 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:52.635 01:30:05 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:52.635 01:30:05 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:08:52.635 01:30:05 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:52.635 01:30:05 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:52.635 01:30:05 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:08:52.635 01:30:05 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:52.635 01:30:05 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:52.635 01:30:05 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:08:52.635 01:30:05 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:08:52.635 01:30:05 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:08:52.635 01:30:05 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:52.635 01:30:05 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:52.896 01:30:05 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:52.896 01:30:05 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:08:52.896 01:30:05 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:52.896 01:30:05 -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:08:52.896 01:30:05 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:52.896 01:30:05 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:08:52.896 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:52.896 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.160 ms 00:08:52.896 00:08:52.896 --- 10.0.0.2 ping statistics --- 00:08:52.896 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:52.896 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:08:52.896 01:30:05 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:52.896 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:52.896 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:08:52.896 00:08:52.896 --- 10.0.0.1 ping statistics --- 00:08:52.896 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:52.896 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:08:52.896 01:30:05 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:52.896 01:30:05 -- nvmf/common.sh@410 -- # return 0 00:08:52.896 01:30:05 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:52.896 01:30:05 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:52.896 01:30:05 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:52.896 01:30:05 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:52.896 01:30:05 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:52.896 01:30:05 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:52.896 01:30:05 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:52.896 01:30:05 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:52.896 01:30:05 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:52.896 01:30:05 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:52.896 01:30:05 -- common/autotest_common.sh@10 -- # set +x 00:08:52.896 01:30:05 -- nvmf/common.sh@469 -- # nvmfpid=3685737 00:08:52.896 01:30:05 
-- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:52.896 01:30:05 -- nvmf/common.sh@470 -- # waitforlisten 3685737 00:08:52.896 01:30:05 -- common/autotest_common.sh@819 -- # '[' -z 3685737 ']' 00:08:52.896 01:30:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:52.896 01:30:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:52.896 01:30:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:52.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:52.896 01:30:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:52.896 01:30:05 -- common/autotest_common.sh@10 -- # set +x 00:08:52.896 [2024-07-23 01:30:05.868580] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:08:52.896 [2024-07-23 01:30:05.868671] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:52.896 EAL: No free 2048 kB hugepages reported on node 1 00:08:52.896 [2024-07-23 01:30:05.933184] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:53.154 [2024-07-23 01:30:06.021268] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:53.154 [2024-07-23 01:30:06.021424] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:53.154 [2024-07-23 01:30:06.021441] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:53.154 [2024-07-23 01:30:06.021454] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:53.154 [2024-07-23 01:30:06.021507] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:53.154 [2024-07-23 01:30:06.021566] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:53.155 [2024-07-23 01:30:06.021637] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:53.155 [2024-07-23 01:30:06.021642] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.093 01:30:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:54.093 01:30:06 -- common/autotest_common.sh@852 -- # return 0 00:08:54.093 01:30:06 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:54.093 01:30:06 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:54.093 01:30:06 -- common/autotest_common.sh@10 -- # set +x 00:08:54.093 01:30:06 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:54.093 01:30:06 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:54.093 01:30:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:54.093 01:30:06 -- common/autotest_common.sh@10 -- # set +x 00:08:54.093 [2024-07-23 01:30:06.880313] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:54.093 01:30:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:54.093 01:30:06 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:54.093 01:30:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:54.093 01:30:06 -- common/autotest_common.sh@10 -- # set +x 00:08:54.093 [2024-07-23 01:30:06.892476] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:08:54.093 01:30:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:54.093 01:30:06 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:54.093 01:30:06 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:08:54.093 01:30:06 -- common/autotest_common.sh@10 -- # set +x 00:08:54.093 01:30:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:54.093 01:30:06 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:54.093 01:30:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:54.093 01:30:06 -- common/autotest_common.sh@10 -- # set +x 00:08:54.093 01:30:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:54.093 01:30:06 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:08:54.093 01:30:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:54.093 01:30:06 -- common/autotest_common.sh@10 -- # set +x 00:08:54.093 01:30:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:54.093 01:30:06 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:54.093 01:30:06 -- target/referrals.sh@48 -- # jq length 00:08:54.093 01:30:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:54.093 01:30:06 -- common/autotest_common.sh@10 -- # set +x 00:08:54.093 01:30:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:54.093 01:30:06 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:54.093 01:30:06 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:54.093 01:30:06 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:54.093 01:30:06 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:54.093 01:30:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:54.093 01:30:06 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:54.093 01:30:06 -- common/autotest_common.sh@10 -- # set +x 00:08:54.093 01:30:06 -- target/referrals.sh@21 -- # sort 00:08:54.093 01:30:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:54.093 01:30:06 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:54.093 01:30:07 -- target/referrals.sh@49 -- # [[ 127.0.0.2 
127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:54.093 01:30:07 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:54.093 01:30:07 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:54.093 01:30:07 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:54.093 01:30:07 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:54.093 01:30:07 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:54.093 01:30:07 -- target/referrals.sh@26 -- # sort 00:08:54.352 01:30:07 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:54.352 01:30:07 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:54.352 01:30:07 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:54.352 01:30:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:54.352 01:30:07 -- common/autotest_common.sh@10 -- # set +x 00:08:54.352 01:30:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:54.352 01:30:07 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:54.352 01:30:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:54.352 01:30:07 -- common/autotest_common.sh@10 -- # set +x 00:08:54.352 01:30:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:54.352 01:30:07 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:54.352 01:30:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:54.352 01:30:07 -- common/autotest_common.sh@10 -- # set +x 00:08:54.352 01:30:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:54.352 01:30:07 -- 
target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:54.352 01:30:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:54.352 01:30:07 -- target/referrals.sh@56 -- # jq length 00:08:54.352 01:30:07 -- common/autotest_common.sh@10 -- # set +x 00:08:54.352 01:30:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:54.352 01:30:07 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:54.352 01:30:07 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:54.352 01:30:07 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:54.352 01:30:07 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:54.352 01:30:07 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:54.352 01:30:07 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:54.352 01:30:07 -- target/referrals.sh@26 -- # sort 00:08:54.352 01:30:07 -- target/referrals.sh@26 -- # echo 00:08:54.352 01:30:07 -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:54.352 01:30:07 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:08:54.352 01:30:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:54.352 01:30:07 -- common/autotest_common.sh@10 -- # set +x 00:08:54.352 01:30:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:54.352 01:30:07 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:54.352 01:30:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:54.352 01:30:07 -- common/autotest_common.sh@10 -- # set +x 00:08:54.352 01:30:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:54.352 01:30:07 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:54.352 01:30:07 -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:54.352 01:30:07 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:54.352 01:30:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:54.352 01:30:07 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:54.352 01:30:07 -- common/autotest_common.sh@10 -- # set +x 00:08:54.352 01:30:07 -- target/referrals.sh@21 -- # sort 00:08:54.352 01:30:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:54.352 01:30:07 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:54.352 01:30:07 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:54.352 01:30:07 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:54.352 01:30:07 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:54.352 01:30:07 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:54.352 01:30:07 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:54.352 01:30:07 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:54.352 01:30:07 -- target/referrals.sh@26 -- # sort 00:08:54.611 01:30:07 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:54.611 01:30:07 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:54.611 01:30:07 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:54.611 01:30:07 -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:54.611 01:30:07 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:54.611 01:30:07 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 
00:08:54.611 01:30:07 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:54.611 01:30:07 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:54.611 01:30:07 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:54.611 01:30:07 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:54.611 01:30:07 -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:54.611 01:30:07 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:54.611 01:30:07 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:54.870 01:30:07 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:54.870 01:30:07 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:54.870 01:30:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:54.870 01:30:07 -- common/autotest_common.sh@10 -- # set +x 00:08:54.870 01:30:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:54.870 01:30:07 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:54.870 01:30:07 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:54.870 01:30:07 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:54.870 01:30:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:54.870 01:30:07 -- common/autotest_common.sh@10 -- # set +x 00:08:54.870 01:30:07 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:54.870 01:30:07 -- target/referrals.sh@21 -- # sort 00:08:54.870 01:30:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
00:08:54.870 01:30:07 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:54.870 01:30:07 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:54.870 01:30:07 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:54.870 01:30:07 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:54.870 01:30:07 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:54.870 01:30:07 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:54.870 01:30:07 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:54.870 01:30:07 -- target/referrals.sh@26 -- # sort 00:08:54.870 01:30:07 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:54.870 01:30:07 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:54.870 01:30:07 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:54.870 01:30:07 -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:54.870 01:30:07 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:54.870 01:30:07 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:54.870 01:30:07 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:55.128 01:30:08 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:55.128 01:30:08 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:55.128 01:30:08 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:55.128 01:30:08 -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:55.128 01:30:08 -- target/referrals.sh@33 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:55.128 01:30:08 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:55.128 01:30:08 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:55.128 01:30:08 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:55.128 01:30:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:55.128 01:30:08 -- common/autotest_common.sh@10 -- # set +x 00:08:55.128 01:30:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:55.128 01:30:08 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:55.128 01:30:08 -- target/referrals.sh@82 -- # jq length 00:08:55.128 01:30:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:55.128 01:30:08 -- common/autotest_common.sh@10 -- # set +x 00:08:55.128 01:30:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:55.128 01:30:08 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:55.128 01:30:08 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:55.128 01:30:08 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:55.128 01:30:08 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:55.128 01:30:08 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:55.128 01:30:08 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:55.128 01:30:08 -- target/referrals.sh@26 -- # sort 00:08:55.387 01:30:08 -- target/referrals.sh@26 -- # echo 00:08:55.387 01:30:08 -- 
target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:55.387 01:30:08 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:55.387 01:30:08 -- target/referrals.sh@86 -- # nvmftestfini 00:08:55.387 01:30:08 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:55.387 01:30:08 -- nvmf/common.sh@116 -- # sync 00:08:55.387 01:30:08 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:55.387 01:30:08 -- nvmf/common.sh@119 -- # set +e 00:08:55.387 01:30:08 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:55.387 01:30:08 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:55.387 rmmod nvme_tcp 00:08:55.387 rmmod nvme_fabrics 00:08:55.387 rmmod nvme_keyring 00:08:55.387 01:30:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:55.387 01:30:08 -- nvmf/common.sh@123 -- # set -e 00:08:55.387 01:30:08 -- nvmf/common.sh@124 -- # return 0 00:08:55.387 01:30:08 -- nvmf/common.sh@477 -- # '[' -n 3685737 ']' 00:08:55.387 01:30:08 -- nvmf/common.sh@478 -- # killprocess 3685737 00:08:55.387 01:30:08 -- common/autotest_common.sh@926 -- # '[' -z 3685737 ']' 00:08:55.387 01:30:08 -- common/autotest_common.sh@930 -- # kill -0 3685737 00:08:55.387 01:30:08 -- common/autotest_common.sh@931 -- # uname 00:08:55.387 01:30:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:55.387 01:30:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3685737 00:08:55.387 01:30:08 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:55.387 01:30:08 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:55.387 01:30:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3685737' 00:08:55.387 killing process with pid 3685737 00:08:55.387 01:30:08 -- common/autotest_common.sh@945 -- # kill 3685737 00:08:55.387 01:30:08 -- common/autotest_common.sh@950 -- # wait 3685737 00:08:55.646 01:30:08 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:55.646 01:30:08 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:55.646 
01:30:08 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:55.646 01:30:08 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:55.646 01:30:08 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:55.646 01:30:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:55.646 01:30:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:55.646 01:30:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:57.550 01:30:10 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:08:57.550 00:08:57.550 real 0m7.053s 00:08:57.550 user 0m11.663s 00:08:57.550 sys 0m2.138s 00:08:57.550 01:30:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:57.550 01:30:10 -- common/autotest_common.sh@10 -- # set +x 00:08:57.550 ************************************ 00:08:57.551 END TEST nvmf_referrals 00:08:57.551 ************************************ 00:08:57.551 01:30:10 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:57.551 01:30:10 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:57.551 01:30:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:57.551 01:30:10 -- common/autotest_common.sh@10 -- # set +x 00:08:57.551 ************************************ 00:08:57.551 START TEST nvmf_connect_disconnect 00:08:57.551 ************************************ 00:08:57.551 01:30:10 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:57.810 * Looking for test storage... 
00:08:57.810 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:57.810 01:30:10 -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:57.810 01:30:10 -- nvmf/common.sh@7 -- # uname -s 00:08:57.810 01:30:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:57.810 01:30:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:57.810 01:30:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:57.810 01:30:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:57.810 01:30:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:57.810 01:30:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:57.810 01:30:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:57.810 01:30:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:57.810 01:30:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:57.810 01:30:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:57.810 01:30:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:57.810 01:30:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:57.810 01:30:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:57.810 01:30:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:57.810 01:30:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:57.810 01:30:10 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:57.810 01:30:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:57.810 01:30:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:57.810 01:30:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:57.810 01:30:10 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.810 01:30:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.810 01:30:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.810 01:30:10 -- paths/export.sh@5 -- # export PATH 00:08:57.810 01:30:10 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.810 01:30:10 -- nvmf/common.sh@46 -- # : 0 00:08:57.810 01:30:10 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:57.810 01:30:10 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:57.810 01:30:10 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:57.810 01:30:10 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:57.810 01:30:10 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:57.810 01:30:10 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:57.810 01:30:10 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:57.810 01:30:10 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:57.810 01:30:10 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:57.810 01:30:10 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:57.810 01:30:10 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:57.810 01:30:10 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:57.810 01:30:10 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:57.810 01:30:10 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:57.810 01:30:10 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:57.810 01:30:10 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:57.810 01:30:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:57.810 01:30:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:57.810 01:30:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
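A note on a recurring artifact in this trace: comparisons print as `[[ tcp == \t\c\p ]]` because `set -x` escapes each character of the right-hand side of `[[ == ]]` to show it is being matched literally. An unquoted right-hand side is otherwise treated as a glob pattern, as this small sketch demonstrates:

```shell
# Why the trace is full of backslash-escaped comparisons: inside [[ ]],
# an unquoted RHS is a pattern; escaping every character (what xtrace
# shows) forces a literal match, identical to quoting it.
transport=tcp
if [[ $transport == \t\c\p ]]; then   # same as: [[ $transport == "tcp" ]]
  echo "literal match"
fi
[[ $transport == t* ]] && echo "glob match"   # unescaped RHS globs
```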
00:08:57.810 01:30:10 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:57.810 01:30:10 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:57.810 01:30:10 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:57.810 01:30:10 -- common/autotest_common.sh@10 -- # set +x 00:08:59.713 01:30:12 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:59.713 01:30:12 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:59.713 01:30:12 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:59.713 01:30:12 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:59.713 01:30:12 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:59.713 01:30:12 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:59.713 01:30:12 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:59.713 01:30:12 -- nvmf/common.sh@294 -- # net_devs=() 00:08:59.713 01:30:12 -- nvmf/common.sh@294 -- # local -ga net_devs 00:08:59.713 01:30:12 -- nvmf/common.sh@295 -- # e810=() 00:08:59.713 01:30:12 -- nvmf/common.sh@295 -- # local -ga e810 00:08:59.713 01:30:12 -- nvmf/common.sh@296 -- # x722=() 00:08:59.713 01:30:12 -- nvmf/common.sh@296 -- # local -ga x722 00:08:59.713 01:30:12 -- nvmf/common.sh@297 -- # mlx=() 00:08:59.713 01:30:12 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:59.713 01:30:12 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:59.713 01:30:12 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:59.713 01:30:12 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:59.713 01:30:12 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:59.713 01:30:12 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:59.713 01:30:12 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:59.713 01:30:12 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:59.713 01:30:12 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 
00:08:59.713 01:30:12 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:59.713 01:30:12 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:59.713 01:30:12 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:59.713 01:30:12 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:59.713 01:30:12 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:08:59.713 01:30:12 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:08:59.713 01:30:12 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:08:59.713 01:30:12 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:08:59.713 01:30:12 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:59.713 01:30:12 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:59.713 01:30:12 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:59.713 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:59.713 01:30:12 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:59.713 01:30:12 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:59.713 01:30:12 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:59.713 01:30:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:59.713 01:30:12 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:59.713 01:30:12 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:59.713 01:30:12 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:59.713 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:59.713 01:30:12 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:59.713 01:30:12 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:59.713 01:30:12 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:59.713 01:30:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:59.713 01:30:12 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:59.713 01:30:12 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:59.713 01:30:12 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:08:59.713 
01:30:12 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:08:59.713 01:30:12 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:59.713 01:30:12 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:59.713 01:30:12 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:59.713 01:30:12 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:59.713 01:30:12 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:59.713 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:59.713 01:30:12 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:59.713 01:30:12 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:59.713 01:30:12 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:59.713 01:30:12 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:59.713 01:30:12 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:59.713 01:30:12 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:59.713 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:59.713 01:30:12 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:59.713 01:30:12 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:59.713 01:30:12 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:59.713 01:30:12 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:59.713 01:30:12 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:08:59.713 01:30:12 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:08:59.713 01:30:12 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:59.713 01:30:12 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:59.713 01:30:12 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:59.713 01:30:12 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:08:59.713 01:30:12 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:59.713 01:30:12 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:59.713 01:30:12 -- 
nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:08:59.713 01:30:12 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:59.713 01:30:12 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:59.713 01:30:12 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:08:59.713 01:30:12 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:08:59.713 01:30:12 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:08:59.713 01:30:12 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:59.972 01:30:12 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:59.972 01:30:12 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:59.972 01:30:12 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:08:59.972 01:30:12 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:59.972 01:30:12 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:59.972 01:30:12 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:59.972 01:30:12 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:08:59.972 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:59.972 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:08:59.972 00:08:59.972 --- 10.0.0.2 ping statistics --- 00:08:59.972 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:59.972 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:08:59.972 01:30:12 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:59.972 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:59.972 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:08:59.972 00:08:59.972 --- 10.0.0.1 ping statistics --- 00:08:59.972 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:59.972 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:08:59.972 01:30:12 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:59.972 01:30:12 -- nvmf/common.sh@410 -- # return 0 00:08:59.972 01:30:12 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:59.972 01:30:12 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:59.972 01:30:12 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:59.972 01:30:12 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:59.972 01:30:12 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:59.972 01:30:12 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:59.972 01:30:12 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:59.972 01:30:12 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:59.972 01:30:12 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:59.972 01:30:12 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:59.972 01:30:12 -- common/autotest_common.sh@10 -- # set +x 00:08:59.972 01:30:12 -- nvmf/common.sh@469 -- # nvmfpid=3688567 00:08:59.972 01:30:12 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:59.972 01:30:12 -- nvmf/common.sh@470 -- # waitforlisten 3688567 00:08:59.972 01:30:12 -- common/autotest_common.sh@819 -- # '[' -z 3688567 ']' 00:08:59.972 01:30:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:59.972 01:30:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:59.972 01:30:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:59.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:59.972 01:30:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:59.972 01:30:12 -- common/autotest_common.sh@10 -- # set +x 00:08:59.972 [2024-07-23 01:30:13.003957] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:08:59.972 [2024-07-23 01:30:13.004036] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:59.972 EAL: No free 2048 kB hugepages reported on node 1 00:09:00.232 [2024-07-23 01:30:13.079496] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:00.232 [2024-07-23 01:30:13.172132] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:00.232 [2024-07-23 01:30:13.172314] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:00.232 [2024-07-23 01:30:13.172334] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:00.232 [2024-07-23 01:30:13.172348] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:00.232 [2024-07-23 01:30:13.172458] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:00.232 [2024-07-23 01:30:13.172514] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:00.232 [2024-07-23 01:30:13.172566] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:00.232 [2024-07-23 01:30:13.172569] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.166 01:30:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:01.166 01:30:13 -- common/autotest_common.sh@852 -- # return 0 00:09:01.166 01:30:13 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:01.166 01:30:13 -- common/autotest_common.sh@718 -- # xtrace_disable 00:09:01.166 01:30:13 -- common/autotest_common.sh@10 -- # set +x 00:09:01.166 01:30:13 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:01.166 01:30:13 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:09:01.166 01:30:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:01.166 01:30:13 -- common/autotest_common.sh@10 -- # set +x 00:09:01.166 [2024-07-23 01:30:13.998291] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:01.166 01:30:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:01.166 01:30:14 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:09:01.166 01:30:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:01.166 01:30:14 -- common/autotest_common.sh@10 -- # set +x 00:09:01.166 01:30:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:01.166 01:30:14 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:09:01.166 01:30:14 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:01.166 01:30:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:01.166 01:30:14 -- 
common/autotest_common.sh@10 -- # set +x 00:09:01.166 01:30:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:01.167 01:30:14 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:01.167 01:30:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:01.167 01:30:14 -- common/autotest_common.sh@10 -- # set +x 00:09:01.167 01:30:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:01.167 01:30:14 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:01.167 01:30:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:01.167 01:30:14 -- common/autotest_common.sh@10 -- # set +x 00:09:01.167 [2024-07-23 01:30:14.055530] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:01.167 01:30:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:01.167 01:30:14 -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:09:01.167 01:30:14 -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:09:01.167 01:30:14 -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:09:01.167 01:30:14 -- target/connect_disconnect.sh@34 -- # set +x 00:09:03.732 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:52.086 01:34:04 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:52.086 01:34:04 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:52.086 01:34:04 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:52.086 01:34:04 -- nvmf/common.sh@116 -- # sync 00:12:52.086 01:34:04 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:52.086 01:34:04 -- nvmf/common.sh@119 -- # set +e 00:12:52.086 01:34:04 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:52.086 01:34:04 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:52.086 rmmod nvme_tcp 00:12:52.086 rmmod nvme_fabrics 00:12:52.086 rmmod nvme_keyring 00:12:52.086 01:34:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:52.086 01:34:05 -- nvmf/common.sh@123 -- # set -e 00:12:52.086 01:34:05 -- nvmf/common.sh@124 -- # return 0 00:12:52.086 01:34:05 -- nvmf/common.sh@477 -- # '[' -n 3688567 ']' 00:12:52.086 01:34:05 -- nvmf/common.sh@478 -- # killprocess 3688567 00:12:52.086 01:34:05 -- common/autotest_common.sh@926 -- # '[' -z 3688567 ']' 00:12:52.086 01:34:05 -- common/autotest_common.sh@930 -- # kill -0 3688567 00:12:52.086 01:34:05 -- common/autotest_common.sh@931 -- # uname 00:12:52.086 01:34:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:52.086 01:34:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 
3688567 00:12:52.086 01:34:05 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:52.086 01:34:05 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:52.086 01:34:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3688567' 00:12:52.086 killing process with pid 3688567 00:12:52.086 01:34:05 -- common/autotest_common.sh@945 -- # kill 3688567 00:12:52.086 01:34:05 -- common/autotest_common.sh@950 -- # wait 3688567 00:12:52.345 01:34:05 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:52.345 01:34:05 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:52.345 01:34:05 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:52.345 01:34:05 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:52.345 01:34:05 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:52.345 01:34:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:52.345 01:34:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:52.345 01:34:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:54.878 01:34:07 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:12:54.878 00:12:54.878 real 3m56.745s 00:12:54.878 user 15m2.333s 00:12:54.878 sys 0m33.661s 00:12:54.878 01:34:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:54.878 01:34:07 -- common/autotest_common.sh@10 -- # set +x 00:12:54.878 ************************************ 00:12:54.878 END TEST nvmf_connect_disconnect 00:12:54.878 ************************************ 00:12:54.878 01:34:07 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:54.878 01:34:07 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:54.878 01:34:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:54.878 01:34:07 -- common/autotest_common.sh@10 -- # set +x 00:12:54.878 ************************************ 00:12:54.878 
START TEST nvmf_multitarget 00:12:54.878 ************************************ 00:12:54.878 01:34:07 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:54.878 * Looking for test storage... 00:12:54.878 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:54.878 01:34:07 -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:54.878 01:34:07 -- nvmf/common.sh@7 -- # uname -s 00:12:54.878 01:34:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:54.878 01:34:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:54.878 01:34:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:54.878 01:34:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:54.878 01:34:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:54.878 01:34:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:54.878 01:34:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:54.878 01:34:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:54.878 01:34:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:54.878 01:34:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:54.878 01:34:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:54.878 01:34:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:54.878 01:34:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:54.878 01:34:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:54.878 01:34:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:54.878 01:34:07 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:54.878 01:34:07 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:54.878 01:34:07 -- 
scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:54.878 01:34:07 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:54.878 01:34:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.878 01:34:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.879 01:34:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.879 01:34:07 -- paths/export.sh@5 -- # export PATH 00:12:54.879 01:34:07 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:54.879 01:34:07 -- nvmf/common.sh@46 -- # : 0 00:12:54.879 01:34:07 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:54.879 01:34:07 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:54.879 01:34:07 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:54.879 01:34:07 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:54.879 01:34:07 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:54.879 01:34:07 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:54.879 01:34:07 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:54.879 01:34:07 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:54.879 01:34:07 -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:54.879 01:34:07 -- target/multitarget.sh@15 -- # nvmftestinit 00:12:54.879 01:34:07 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:54.879 01:34:07 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:54.879 01:34:07 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:54.879 01:34:07 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:54.879 01:34:07 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:54.879 01:34:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:54.879 01:34:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:54.879 01:34:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:54.879 01:34:07 -- 
nvmf/common.sh@402 -- # [[ phy != virt ]] 00:12:54.879 01:34:07 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:12:54.879 01:34:07 -- nvmf/common.sh@284 -- # xtrace_disable 00:12:54.879 01:34:07 -- common/autotest_common.sh@10 -- # set +x 00:12:56.782 01:34:09 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:56.782 01:34:09 -- nvmf/common.sh@290 -- # pci_devs=() 00:12:56.782 01:34:09 -- nvmf/common.sh@290 -- # local -a pci_devs 00:12:56.782 01:34:09 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:12:56.782 01:34:09 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:12:56.782 01:34:09 -- nvmf/common.sh@292 -- # pci_drivers=() 00:12:56.782 01:34:09 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:12:56.782 01:34:09 -- nvmf/common.sh@294 -- # net_devs=() 00:12:56.782 01:34:09 -- nvmf/common.sh@294 -- # local -ga net_devs 00:12:56.782 01:34:09 -- nvmf/common.sh@295 -- # e810=() 00:12:56.782 01:34:09 -- nvmf/common.sh@295 -- # local -ga e810 00:12:56.782 01:34:09 -- nvmf/common.sh@296 -- # x722=() 00:12:56.782 01:34:09 -- nvmf/common.sh@296 -- # local -ga x722 00:12:56.782 01:34:09 -- nvmf/common.sh@297 -- # mlx=() 00:12:56.782 01:34:09 -- nvmf/common.sh@297 -- # local -ga mlx 00:12:56.782 01:34:09 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:56.782 01:34:09 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:56.782 01:34:09 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:56.782 01:34:09 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:56.782 01:34:09 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:56.782 01:34:09 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:56.782 01:34:09 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:56.782 01:34:09 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:56.782 01:34:09 -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:56.782 01:34:09 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:56.782 01:34:09 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:56.782 01:34:09 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:12:56.782 01:34:09 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:12:56.782 01:34:09 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:12:56.782 01:34:09 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:12:56.782 01:34:09 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:12:56.782 01:34:09 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:12:56.782 01:34:09 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:56.782 01:34:09 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:56.782 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:56.782 01:34:09 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:56.782 01:34:09 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:56.782 01:34:09 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:56.782 01:34:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:56.782 01:34:09 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:56.782 01:34:09 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:56.782 01:34:09 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:56.782 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:56.782 01:34:09 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:56.782 01:34:09 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:56.782 01:34:09 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:56.782 01:34:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:56.782 01:34:09 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:56.782 01:34:09 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:12:56.782 01:34:09 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:12:56.782 01:34:09 -- 
nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:12:56.782 01:34:09 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:56.782 01:34:09 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:56.782 01:34:09 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:56.782 01:34:09 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:56.782 01:34:09 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:56.782 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:56.782 01:34:09 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:12:56.782 01:34:09 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:56.782 01:34:09 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:56.782 01:34:09 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:56.783 01:34:09 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:56.783 01:34:09 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:56.783 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:56.783 01:34:09 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:12:56.783 01:34:09 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:12:56.783 01:34:09 -- nvmf/common.sh@402 -- # is_hw=yes 00:12:56.783 01:34:09 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:12:56.783 01:34:09 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:12:56.783 01:34:09 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:12:56.783 01:34:09 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:56.783 01:34:09 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:56.783 01:34:09 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:56.783 01:34:09 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:12:56.783 01:34:09 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:56.783 01:34:09 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:56.783 01:34:09 -- 
nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:12:56.783 01:34:09 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:56.783 01:34:09 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:56.783 01:34:09 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:12:56.783 01:34:09 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:12:56.783 01:34:09 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:12:56.783 01:34:09 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:56.783 01:34:09 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:56.783 01:34:09 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:56.783 01:34:09 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:12:56.783 01:34:09 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:56.783 01:34:09 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:56.783 01:34:09 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:56.783 01:34:09 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:12:56.783 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:56.783 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.143 ms 00:12:56.783 00:12:56.783 --- 10.0.0.2 ping statistics --- 00:12:56.783 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:56.783 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:12:56.783 01:34:09 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:56.783 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:56.783 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.067 ms 00:12:56.783 00:12:56.783 --- 10.0.0.1 ping statistics --- 00:12:56.783 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:56.783 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:12:56.783 01:34:09 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:56.783 01:34:09 -- nvmf/common.sh@410 -- # return 0 00:12:56.783 01:34:09 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:56.783 01:34:09 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:56.783 01:34:09 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:56.783 01:34:09 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:56.783 01:34:09 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:56.783 01:34:09 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:56.783 01:34:09 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:56.783 01:34:09 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:56.783 01:34:09 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:56.783 01:34:09 -- common/autotest_common.sh@712 -- # xtrace_disable 00:12:56.783 01:34:09 -- common/autotest_common.sh@10 -- # set +x 00:12:56.783 01:34:09 -- nvmf/common.sh@469 -- # nvmfpid=3720555 00:12:56.783 01:34:09 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:56.783 01:34:09 -- nvmf/common.sh@470 -- # waitforlisten 3720555 00:12:56.783 01:34:09 -- common/autotest_common.sh@819 -- # '[' -z 3720555 ']' 00:12:56.783 01:34:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:56.783 01:34:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:56.783 01:34:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:12:56.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:56.783 01:34:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:56.783 01:34:09 -- common/autotest_common.sh@10 -- # set +x 00:12:56.783 [2024-07-23 01:34:09.602655] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:12:56.783 [2024-07-23 01:34:09.602773] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:56.783 EAL: No free 2048 kB hugepages reported on node 1 00:12:56.783 [2024-07-23 01:34:09.672607] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:56.783 [2024-07-23 01:34:09.764678] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:56.783 [2024-07-23 01:34:09.764853] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:56.783 [2024-07-23 01:34:09.764876] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:56.783 [2024-07-23 01:34:09.764892] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:56.783 [2024-07-23 01:34:09.764987] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:56.783 [2024-07-23 01:34:09.765043] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:56.783 [2024-07-23 01:34:09.765099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:56.783 [2024-07-23 01:34:09.765102] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:57.715 01:34:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:57.715 01:34:10 -- common/autotest_common.sh@852 -- # return 0 00:12:57.715 01:34:10 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:57.715 01:34:10 -- common/autotest_common.sh@718 -- # xtrace_disable 00:12:57.715 01:34:10 -- common/autotest_common.sh@10 -- # set +x 00:12:57.715 01:34:10 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:57.715 01:34:10 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:57.715 01:34:10 -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:57.715 01:34:10 -- target/multitarget.sh@21 -- # jq length 00:12:57.715 01:34:10 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:57.715 01:34:10 -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:57.715 "nvmf_tgt_1" 00:12:57.715 01:34:10 -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:57.973 "nvmf_tgt_2" 00:12:57.973 01:34:10 -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:57.973 01:34:10 -- target/multitarget.sh@28 -- # jq length 00:12:57.973 
01:34:10 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:57.973 01:34:10 -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:58.231 true 00:12:58.231 01:34:11 -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:58.231 true 00:12:58.231 01:34:11 -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:58.231 01:34:11 -- target/multitarget.sh@35 -- # jq length 00:12:58.489 01:34:11 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:58.489 01:34:11 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:58.489 01:34:11 -- target/multitarget.sh@41 -- # nvmftestfini 00:12:58.489 01:34:11 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:58.489 01:34:11 -- nvmf/common.sh@116 -- # sync 00:12:58.489 01:34:11 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:58.489 01:34:11 -- nvmf/common.sh@119 -- # set +e 00:12:58.489 01:34:11 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:58.489 01:34:11 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:58.489 rmmod nvme_tcp 00:12:58.489 rmmod nvme_fabrics 00:12:58.489 rmmod nvme_keyring 00:12:58.489 01:34:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:58.489 01:34:11 -- nvmf/common.sh@123 -- # set -e 00:12:58.489 01:34:11 -- nvmf/common.sh@124 -- # return 0 00:12:58.489 01:34:11 -- nvmf/common.sh@477 -- # '[' -n 3720555 ']' 00:12:58.489 01:34:11 -- nvmf/common.sh@478 -- # killprocess 3720555 00:12:58.489 01:34:11 -- common/autotest_common.sh@926 -- # '[' -z 3720555 ']' 00:12:58.489 01:34:11 -- common/autotest_common.sh@930 -- # kill -0 3720555 00:12:58.489 01:34:11 -- common/autotest_common.sh@931 -- # uname 00:12:58.489 01:34:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 
00:12:58.489 01:34:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3720555 00:12:58.489 01:34:11 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:58.489 01:34:11 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:58.489 01:34:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3720555' 00:12:58.489 killing process with pid 3720555 00:12:58.489 01:34:11 -- common/autotest_common.sh@945 -- # kill 3720555 00:12:58.489 01:34:11 -- common/autotest_common.sh@950 -- # wait 3720555 00:12:58.749 01:34:11 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:58.749 01:34:11 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:58.749 01:34:11 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:58.749 01:34:11 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:58.749 01:34:11 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:58.749 01:34:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:58.749 01:34:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:58.749 01:34:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:00.654 01:34:13 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:00.654 00:13:00.654 real 0m6.283s 00:13:00.654 user 0m8.988s 00:13:00.654 sys 0m1.945s 00:13:00.654 01:34:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:00.654 01:34:13 -- common/autotest_common.sh@10 -- # set +x 00:13:00.654 ************************************ 00:13:00.654 END TEST nvmf_multitarget 00:13:00.654 ************************************ 00:13:00.654 01:34:13 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:00.654 01:34:13 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:00.654 01:34:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:00.654 01:34:13 -- common/autotest_common.sh@10 -- # set +x 
00:13:00.654 ************************************ 00:13:00.654 START TEST nvmf_rpc 00:13:00.654 ************************************ 00:13:00.654 01:34:13 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:00.654 * Looking for test storage... 00:13:00.654 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:00.654 01:34:13 -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:00.654 01:34:13 -- nvmf/common.sh@7 -- # uname -s 00:13:00.654 01:34:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:00.654 01:34:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:00.913 01:34:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:00.913 01:34:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:00.913 01:34:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:00.913 01:34:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:00.913 01:34:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:00.913 01:34:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:00.913 01:34:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:00.913 01:34:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:00.913 01:34:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:00.913 01:34:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:00.913 01:34:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:00.913 01:34:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:00.913 01:34:13 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:00.913 01:34:13 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:00.913 01:34:13 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 
00:13:00.913 01:34:13 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:00.913 01:34:13 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:00.913 01:34:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.913 01:34:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.913 01:34:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.913 01:34:13 -- paths/export.sh@5 -- # export PATH 00:13:00.913 01:34:13 -- 
paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.913 01:34:13 -- nvmf/common.sh@46 -- # : 0 00:13:00.913 01:34:13 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:00.913 01:34:13 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:00.913 01:34:13 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:00.913 01:34:13 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:00.913 01:34:13 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:00.913 01:34:13 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:00.913 01:34:13 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:00.913 01:34:13 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:00.913 01:34:13 -- target/rpc.sh@11 -- # loops=5 00:13:00.913 01:34:13 -- target/rpc.sh@23 -- # nvmftestinit 00:13:00.913 01:34:13 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:00.913 01:34:13 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:00.913 01:34:13 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:00.913 01:34:13 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:00.913 01:34:13 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:00.913 01:34:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:00.913 01:34:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:00.913 01:34:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:00.913 01:34:13 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:00.913 01:34:13 -- 
nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:00.913 01:34:13 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:00.913 01:34:13 -- common/autotest_common.sh@10 -- # set +x 00:13:02.878 01:34:15 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:02.878 01:34:15 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:02.878 01:34:15 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:02.878 01:34:15 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:02.878 01:34:15 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:02.878 01:34:15 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:02.878 01:34:15 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:02.878 01:34:15 -- nvmf/common.sh@294 -- # net_devs=() 00:13:02.878 01:34:15 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:02.878 01:34:15 -- nvmf/common.sh@295 -- # e810=() 00:13:02.878 01:34:15 -- nvmf/common.sh@295 -- # local -ga e810 00:13:02.878 01:34:15 -- nvmf/common.sh@296 -- # x722=() 00:13:02.878 01:34:15 -- nvmf/common.sh@296 -- # local -ga x722 00:13:02.878 01:34:15 -- nvmf/common.sh@297 -- # mlx=() 00:13:02.878 01:34:15 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:02.878 01:34:15 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:02.878 01:34:15 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:02.878 01:34:15 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:02.878 01:34:15 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:02.878 01:34:15 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:02.878 01:34:15 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:02.878 01:34:15 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:02.878 01:34:15 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:02.879 01:34:15 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:13:02.879 01:34:15 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:02.879 01:34:15 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:02.879 01:34:15 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:02.879 01:34:15 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:02.879 01:34:15 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:02.879 01:34:15 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:02.879 01:34:15 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:02.879 01:34:15 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:02.879 01:34:15 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:02.879 01:34:15 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:02.879 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:02.879 01:34:15 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:02.879 01:34:15 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:02.879 01:34:15 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:02.879 01:34:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:02.879 01:34:15 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:02.879 01:34:15 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:02.879 01:34:15 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:02.879 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:02.879 01:34:15 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:02.879 01:34:15 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:02.879 01:34:15 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:02.879 01:34:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:02.879 01:34:15 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:02.879 01:34:15 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:02.879 01:34:15 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:02.879 01:34:15 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:02.879 01:34:15 -- 
nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:02.879 01:34:15 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:02.879 01:34:15 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:02.879 01:34:15 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:02.879 01:34:15 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:02.879 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:02.879 01:34:15 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:02.879 01:34:15 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:02.879 01:34:15 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:02.879 01:34:15 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:02.879 01:34:15 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:02.879 01:34:15 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:02.879 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:02.879 01:34:15 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:02.879 01:34:15 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:02.879 01:34:15 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:02.879 01:34:15 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:02.879 01:34:15 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:02.879 01:34:15 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:02.879 01:34:15 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:02.879 01:34:15 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:02.879 01:34:15 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:02.879 01:34:15 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:02.879 01:34:15 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:02.879 01:34:15 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:02.879 01:34:15 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:02.879 01:34:15 -- 
nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:02.879 01:34:15 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:02.879 01:34:15 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:02.879 01:34:15 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:02.879 01:34:15 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:02.879 01:34:15 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:02.879 01:34:15 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:02.879 01:34:15 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:02.879 01:34:15 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:02.879 01:34:15 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:02.879 01:34:15 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:02.879 01:34:15 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:02.879 01:34:15 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:02.879 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:02.879 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.167 ms 00:13:02.879 00:13:02.879 --- 10.0.0.2 ping statistics --- 00:13:02.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:02.879 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:13:02.879 01:34:15 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:02.879 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:02.879 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:13:02.879 00:13:02.879 --- 10.0.0.1 ping statistics --- 00:13:02.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:02.879 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:13:02.879 01:34:15 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:02.879 01:34:15 -- nvmf/common.sh@410 -- # return 0 00:13:02.879 01:34:15 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:02.879 01:34:15 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:02.879 01:34:15 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:02.879 01:34:15 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:02.879 01:34:15 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:02.879 01:34:15 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:02.879 01:34:15 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:02.879 01:34:15 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:13:02.879 01:34:15 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:02.879 01:34:15 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:02.879 01:34:15 -- common/autotest_common.sh@10 -- # set +x 00:13:02.879 01:34:15 -- nvmf/common.sh@469 -- # nvmfpid=3722806 00:13:02.879 01:34:15 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:02.879 01:34:15 -- nvmf/common.sh@470 -- # waitforlisten 3722806 00:13:02.879 01:34:15 -- common/autotest_common.sh@819 -- # '[' -z 3722806 ']' 00:13:02.879 01:34:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:02.879 01:34:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:02.879 01:34:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:02.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:02.879 01:34:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:02.879 01:34:15 -- common/autotest_common.sh@10 -- # set +x 00:13:02.879 [2024-07-23 01:34:15.972624] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:13:02.879 [2024-07-23 01:34:15.972711] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:03.139 EAL: No free 2048 kB hugepages reported on node 1 00:13:03.139 [2024-07-23 01:34:16.046517] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:03.139 [2024-07-23 01:34:16.140552] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:03.139 [2024-07-23 01:34:16.140730] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:03.139 [2024-07-23 01:34:16.140752] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:03.139 [2024-07-23 01:34:16.140767] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:03.139 [2024-07-23 01:34:16.140838] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:03.139 [2024-07-23 01:34:16.140904] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:03.139 [2024-07-23 01:34:16.140955] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:03.139 [2024-07-23 01:34:16.140958] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:04.074 01:34:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:04.074 01:34:16 -- common/autotest_common.sh@852 -- # return 0 00:13:04.074 01:34:16 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:04.074 01:34:16 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:04.074 01:34:16 -- common/autotest_common.sh@10 -- # set +x 00:13:04.074 01:34:16 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:04.074 01:34:16 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:13:04.074 01:34:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:04.074 01:34:16 -- common/autotest_common.sh@10 -- # set +x 00:13:04.074 01:34:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:04.074 01:34:16 -- target/rpc.sh@26 -- # stats='{ 00:13:04.074 "tick_rate": 2700000000, 00:13:04.074 "poll_groups": [ 00:13:04.074 { 00:13:04.074 "name": "nvmf_tgt_poll_group_0", 00:13:04.074 "admin_qpairs": 0, 00:13:04.074 "io_qpairs": 0, 00:13:04.074 "current_admin_qpairs": 0, 00:13:04.074 "current_io_qpairs": 0, 00:13:04.074 "pending_bdev_io": 0, 00:13:04.074 "completed_nvme_io": 0, 00:13:04.074 "transports": [] 00:13:04.074 }, 00:13:04.074 { 00:13:04.074 "name": "nvmf_tgt_poll_group_1", 00:13:04.074 "admin_qpairs": 0, 00:13:04.074 "io_qpairs": 0, 00:13:04.074 "current_admin_qpairs": 0, 00:13:04.074 "current_io_qpairs": 0, 00:13:04.074 "pending_bdev_io": 0, 00:13:04.074 "completed_nvme_io": 0, 00:13:04.074 "transports": [] 00:13:04.074 }, 00:13:04.074 { 00:13:04.074 "name": 
"nvmf_tgt_poll_group_2", 00:13:04.074 "admin_qpairs": 0, 00:13:04.074 "io_qpairs": 0, 00:13:04.074 "current_admin_qpairs": 0, 00:13:04.074 "current_io_qpairs": 0, 00:13:04.074 "pending_bdev_io": 0, 00:13:04.074 "completed_nvme_io": 0, 00:13:04.074 "transports": [] 00:13:04.074 }, 00:13:04.074 { 00:13:04.074 "name": "nvmf_tgt_poll_group_3", 00:13:04.074 "admin_qpairs": 0, 00:13:04.074 "io_qpairs": 0, 00:13:04.074 "current_admin_qpairs": 0, 00:13:04.074 "current_io_qpairs": 0, 00:13:04.074 "pending_bdev_io": 0, 00:13:04.074 "completed_nvme_io": 0, 00:13:04.074 "transports": [] 00:13:04.074 } 00:13:04.074 ] 00:13:04.074 }' 00:13:04.074 01:34:16 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:13:04.074 01:34:16 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:13:04.074 01:34:16 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:13:04.074 01:34:16 -- target/rpc.sh@15 -- # wc -l 00:13:04.074 01:34:16 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:13:04.074 01:34:16 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:13:04.074 01:34:17 -- target/rpc.sh@29 -- # [[ null == null ]] 00:13:04.074 01:34:17 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:04.074 01:34:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:04.074 01:34:17 -- common/autotest_common.sh@10 -- # set +x 00:13:04.074 [2024-07-23 01:34:17.022450] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:04.074 01:34:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:04.074 01:34:17 -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:13:04.074 01:34:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:04.074 01:34:17 -- common/autotest_common.sh@10 -- # set +x 00:13:04.074 01:34:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:04.074 01:34:17 -- target/rpc.sh@33 -- # stats='{ 00:13:04.074 "tick_rate": 2700000000, 00:13:04.074 "poll_groups": [ 00:13:04.074 { 00:13:04.074 "name": 
"nvmf_tgt_poll_group_0", 00:13:04.074 "admin_qpairs": 0, 00:13:04.074 "io_qpairs": 0, 00:13:04.074 "current_admin_qpairs": 0, 00:13:04.074 "current_io_qpairs": 0, 00:13:04.074 "pending_bdev_io": 0, 00:13:04.074 "completed_nvme_io": 0, 00:13:04.074 "transports": [ 00:13:04.074 { 00:13:04.074 "trtype": "TCP" 00:13:04.074 } 00:13:04.074 ] 00:13:04.074 }, 00:13:04.074 { 00:13:04.074 "name": "nvmf_tgt_poll_group_1", 00:13:04.074 "admin_qpairs": 0, 00:13:04.074 "io_qpairs": 0, 00:13:04.074 "current_admin_qpairs": 0, 00:13:04.074 "current_io_qpairs": 0, 00:13:04.074 "pending_bdev_io": 0, 00:13:04.074 "completed_nvme_io": 0, 00:13:04.074 "transports": [ 00:13:04.074 { 00:13:04.074 "trtype": "TCP" 00:13:04.074 } 00:13:04.074 ] 00:13:04.074 }, 00:13:04.074 { 00:13:04.074 "name": "nvmf_tgt_poll_group_2", 00:13:04.074 "admin_qpairs": 0, 00:13:04.074 "io_qpairs": 0, 00:13:04.074 "current_admin_qpairs": 0, 00:13:04.074 "current_io_qpairs": 0, 00:13:04.074 "pending_bdev_io": 0, 00:13:04.074 "completed_nvme_io": 0, 00:13:04.074 "transports": [ 00:13:04.074 { 00:13:04.074 "trtype": "TCP" 00:13:04.074 } 00:13:04.074 ] 00:13:04.074 }, 00:13:04.074 { 00:13:04.074 "name": "nvmf_tgt_poll_group_3", 00:13:04.074 "admin_qpairs": 0, 00:13:04.074 "io_qpairs": 0, 00:13:04.074 "current_admin_qpairs": 0, 00:13:04.074 "current_io_qpairs": 0, 00:13:04.074 "pending_bdev_io": 0, 00:13:04.074 "completed_nvme_io": 0, 00:13:04.074 "transports": [ 00:13:04.074 { 00:13:04.074 "trtype": "TCP" 00:13:04.074 } 00:13:04.074 ] 00:13:04.074 } 00:13:04.074 ] 00:13:04.074 }' 00:13:04.074 01:34:17 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:13:04.074 01:34:17 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:04.074 01:34:17 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:04.074 01:34:17 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:04.074 01:34:17 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:13:04.074 01:34:17 -- target/rpc.sh@36 -- # jsum 
'.poll_groups[].io_qpairs' 00:13:04.074 01:34:17 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:04.074 01:34:17 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:04.074 01:34:17 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:04.074 01:34:17 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:13:04.074 01:34:17 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:13:04.074 01:34:17 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:13:04.074 01:34:17 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:13:04.074 01:34:17 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:04.074 01:34:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:04.074 01:34:17 -- common/autotest_common.sh@10 -- # set +x 00:13:04.074 Malloc1 00:13:04.074 01:34:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:04.074 01:34:17 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:04.074 01:34:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:04.074 01:34:17 -- common/autotest_common.sh@10 -- # set +x 00:13:04.074 01:34:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:04.074 01:34:17 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:04.074 01:34:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:04.074 01:34:17 -- common/autotest_common.sh@10 -- # set +x 00:13:04.074 01:34:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:04.074 01:34:17 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:13:04.074 01:34:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:04.074 01:34:17 -- common/autotest_common.sh@10 -- # set +x 00:13:04.074 01:34:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:04.074 01:34:17 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:13:04.074 01:34:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:04.074 01:34:17 -- common/autotest_common.sh@10 -- # set +x 00:13:04.074 [2024-07-23 01:34:17.161722] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:04.074 01:34:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:04.074 01:34:17 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:13:04.074 01:34:17 -- common/autotest_common.sh@640 -- # local es=0 00:13:04.074 01:34:17 -- common/autotest_common.sh@642 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:13:04.074 01:34:17 -- common/autotest_common.sh@628 -- # local arg=nvme 00:13:04.074 01:34:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:04.074 01:34:17 -- common/autotest_common.sh@632 -- # type -t nvme 00:13:04.074 01:34:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:04.074 01:34:17 -- common/autotest_common.sh@634 -- # type -P nvme 00:13:04.074 01:34:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:04.074 01:34:17 -- common/autotest_common.sh@634 -- # arg=/usr/sbin/nvme 00:13:04.074 01:34:17 -- common/autotest_common.sh@634 -- # [[ -x /usr/sbin/nvme ]] 00:13:04.074 01:34:17 -- common/autotest_common.sh@643 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:13:04.333 [2024-07-23 01:34:17.184239] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:13:04.333 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:04.333 could not add new controller: failed to write to nvme-fabrics device 00:13:04.333 01:34:17 -- common/autotest_common.sh@643 -- # es=1 00:13:04.333 01:34:17 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:13:04.333 01:34:17 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:13:04.333 01:34:17 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:13:04.333 01:34:17 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:04.333 01:34:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:04.333 01:34:17 -- common/autotest_common.sh@10 -- # set +x 00:13:04.333 01:34:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:04.333 01:34:17 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:04.899 01:34:17 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:13:04.899 01:34:17 -- common/autotest_common.sh@1177 -- # local i=0 00:13:04.899 01:34:17 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:04.899 01:34:17 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:04.899 01:34:17 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:06.799 01:34:19 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:06.799 01:34:19 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:06.799 01:34:19 -- common/autotest_common.sh@1186 -- 
# grep -c SPDKISFASTANDAWESOME 00:13:06.799 01:34:19 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:06.799 01:34:19 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:06.799 01:34:19 -- common/autotest_common.sh@1187 -- # return 0 00:13:06.799 01:34:19 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:07.057 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:07.057 01:34:19 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:07.057 01:34:19 -- common/autotest_common.sh@1198 -- # local i=0 00:13:07.057 01:34:19 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:07.057 01:34:19 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:07.057 01:34:19 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:07.057 01:34:19 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:07.057 01:34:19 -- common/autotest_common.sh@1210 -- # return 0 00:13:07.057 01:34:19 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:07.057 01:34:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:07.057 01:34:19 -- common/autotest_common.sh@10 -- # set +x 00:13:07.057 01:34:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:07.057 01:34:19 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:07.057 01:34:19 -- common/autotest_common.sh@640 -- # local es=0 00:13:07.057 01:34:19 -- common/autotest_common.sh@642 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 
10.0.0.2 -s 4420 00:13:07.057 01:34:19 -- common/autotest_common.sh@628 -- # local arg=nvme 00:13:07.057 01:34:19 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:07.057 01:34:19 -- common/autotest_common.sh@632 -- # type -t nvme 00:13:07.057 01:34:19 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:07.057 01:34:19 -- common/autotest_common.sh@634 -- # type -P nvme 00:13:07.057 01:34:19 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:07.057 01:34:19 -- common/autotest_common.sh@634 -- # arg=/usr/sbin/nvme 00:13:07.057 01:34:19 -- common/autotest_common.sh@634 -- # [[ -x /usr/sbin/nvme ]] 00:13:07.057 01:34:19 -- common/autotest_common.sh@643 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:07.057 [2024-07-23 01:34:19.984849] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:13:07.057 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:07.057 could not add new controller: failed to write to nvme-fabrics device 00:13:07.057 01:34:20 -- common/autotest_common.sh@643 -- # es=1 00:13:07.057 01:34:20 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:13:07.057 01:34:20 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:13:07.057 01:34:20 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:13:07.057 01:34:20 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:13:07.057 01:34:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:07.057 01:34:20 -- common/autotest_common.sh@10 -- # set +x 00:13:07.057 01:34:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:07.057 01:34:20 -- target/rpc.sh@73 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:07.623 01:34:20 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:13:07.623 01:34:20 -- common/autotest_common.sh@1177 -- # local i=0 00:13:07.623 01:34:20 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:07.623 01:34:20 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:07.623 01:34:20 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:10.153 01:34:22 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:10.153 01:34:22 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:10.153 01:34:22 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:10.153 01:34:22 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:10.153 01:34:22 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:10.153 01:34:22 -- common/autotest_common.sh@1187 -- # return 0 00:13:10.153 01:34:22 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:10.153 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:10.153 01:34:22 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:10.153 01:34:22 -- common/autotest_common.sh@1198 -- # local i=0 00:13:10.153 01:34:22 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:10.153 01:34:22 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:10.153 01:34:22 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:10.153 01:34:22 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:10.153 01:34:22 -- common/autotest_common.sh@1210 -- # return 0 00:13:10.153 01:34:22 -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:10.153 01:34:22 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:13:10.153 01:34:22 -- common/autotest_common.sh@10 -- # set +x 00:13:10.153 01:34:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:10.153 01:34:22 -- target/rpc.sh@81 -- # seq 1 5 00:13:10.153 01:34:22 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:10.153 01:34:22 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:10.153 01:34:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:10.153 01:34:22 -- common/autotest_common.sh@10 -- # set +x 00:13:10.153 01:34:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:10.153 01:34:22 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:10.153 01:34:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:10.153 01:34:22 -- common/autotest_common.sh@10 -- # set +x 00:13:10.153 [2024-07-23 01:34:22.799958] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:10.153 01:34:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:10.153 01:34:22 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:10.153 01:34:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:10.153 01:34:22 -- common/autotest_common.sh@10 -- # set +x 00:13:10.153 01:34:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:10.153 01:34:22 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:10.153 01:34:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:10.153 01:34:22 -- common/autotest_common.sh@10 -- # set +x 00:13:10.153 01:34:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:10.153 01:34:22 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 
10.0.0.2 -s 4420 00:13:10.410 01:34:23 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:10.410 01:34:23 -- common/autotest_common.sh@1177 -- # local i=0 00:13:10.410 01:34:23 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:10.410 01:34:23 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:10.410 01:34:23 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:12.936 01:34:25 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:12.936 01:34:25 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:12.936 01:34:25 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:12.936 01:34:25 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:12.936 01:34:25 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:12.936 01:34:25 -- common/autotest_common.sh@1187 -- # return 0 00:13:12.936 01:34:25 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:12.936 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:12.936 01:34:25 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:12.936 01:34:25 -- common/autotest_common.sh@1198 -- # local i=0 00:13:12.936 01:34:25 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:12.936 01:34:25 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:12.936 01:34:25 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:12.936 01:34:25 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:12.936 01:34:25 -- common/autotest_common.sh@1210 -- # return 0 00:13:12.936 01:34:25 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:12.936 01:34:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:12.936 01:34:25 -- common/autotest_common.sh@10 -- # set +x 00:13:12.936 01:34:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
00:13:12.936 01:34:25 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:12.936 01:34:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:12.936 01:34:25 -- common/autotest_common.sh@10 -- # set +x 00:13:12.936 01:34:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:12.936 01:34:25 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:12.936 01:34:25 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:12.936 01:34:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:12.936 01:34:25 -- common/autotest_common.sh@10 -- # set +x 00:13:12.936 01:34:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:12.936 01:34:25 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:12.936 01:34:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:12.936 01:34:25 -- common/autotest_common.sh@10 -- # set +x 00:13:12.936 [2024-07-23 01:34:25.570783] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:12.936 01:34:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:12.936 01:34:25 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:12.936 01:34:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:12.936 01:34:25 -- common/autotest_common.sh@10 -- # set +x 00:13:12.936 01:34:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:12.936 01:34:25 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:12.936 01:34:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:12.936 01:34:25 -- common/autotest_common.sh@10 -- # set +x 00:13:12.936 01:34:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:12.936 01:34:25 -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:13.193 01:34:26 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:13.193 01:34:26 -- common/autotest_common.sh@1177 -- # local i=0 00:13:13.193 01:34:26 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:13.193 01:34:26 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:13.193 01:34:26 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:15.715 01:34:28 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:15.715 01:34:28 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:15.715 01:34:28 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:15.715 01:34:28 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:15.715 01:34:28 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:15.715 01:34:28 -- common/autotest_common.sh@1187 -- # return 0 00:13:15.715 01:34:28 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:15.715 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:15.715 01:34:28 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:15.715 01:34:28 -- common/autotest_common.sh@1198 -- # local i=0 00:13:15.715 01:34:28 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:15.715 01:34:28 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:15.715 01:34:28 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:15.715 01:34:28 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:15.715 01:34:28 -- common/autotest_common.sh@1210 -- # return 0 00:13:15.715 01:34:28 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:15.715 01:34:28 -- common/autotest_common.sh@551 
-- # xtrace_disable 00:13:15.715 01:34:28 -- common/autotest_common.sh@10 -- # set +x 00:13:15.715 01:34:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:15.715 01:34:28 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:15.715 01:34:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:15.715 01:34:28 -- common/autotest_common.sh@10 -- # set +x 00:13:15.715 01:34:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:15.715 01:34:28 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:15.715 01:34:28 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:15.715 01:34:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:15.715 01:34:28 -- common/autotest_common.sh@10 -- # set +x 00:13:15.715 01:34:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:15.715 01:34:28 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:15.715 01:34:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:15.715 01:34:28 -- common/autotest_common.sh@10 -- # set +x 00:13:15.715 [2024-07-23 01:34:28.296196] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:15.715 01:34:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:15.715 01:34:28 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:15.715 01:34:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:15.715 01:34:28 -- common/autotest_common.sh@10 -- # set +x 00:13:15.715 01:34:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:15.715 01:34:28 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:15.715 01:34:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:15.715 01:34:28 -- common/autotest_common.sh@10 -- # set +x 00:13:15.715 01:34:28 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:15.715 01:34:28 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:15.971 01:34:28 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:15.971 01:34:28 -- common/autotest_common.sh@1177 -- # local i=0 00:13:15.971 01:34:28 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:15.971 01:34:28 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:15.971 01:34:28 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:17.866 01:34:30 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:17.866 01:34:30 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:17.866 01:34:30 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:17.866 01:34:30 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:17.866 01:34:30 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:17.866 01:34:30 -- common/autotest_common.sh@1187 -- # return 0 00:13:17.866 01:34:30 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:18.125 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:18.125 01:34:31 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:18.125 01:34:31 -- common/autotest_common.sh@1198 -- # local i=0 00:13:18.125 01:34:31 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:18.125 01:34:31 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:18.125 01:34:31 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:18.125 01:34:31 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:18.125 01:34:31 -- common/autotest_common.sh@1210 -- # return 0 00:13:18.125 01:34:31 -- target/rpc.sh@93 -- # rpc_cmd 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:18.125 01:34:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:18.125 01:34:31 -- common/autotest_common.sh@10 -- # set +x 00:13:18.125 01:34:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:18.125 01:34:31 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:18.125 01:34:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:18.125 01:34:31 -- common/autotest_common.sh@10 -- # set +x 00:13:18.125 01:34:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:18.125 01:34:31 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:18.125 01:34:31 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:18.125 01:34:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:18.125 01:34:31 -- common/autotest_common.sh@10 -- # set +x 00:13:18.125 01:34:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:18.125 01:34:31 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:18.125 01:34:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:18.125 01:34:31 -- common/autotest_common.sh@10 -- # set +x 00:13:18.125 [2024-07-23 01:34:31.075749] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:18.125 01:34:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:18.125 01:34:31 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:18.125 01:34:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:18.125 01:34:31 -- common/autotest_common.sh@10 -- # set +x 00:13:18.125 01:34:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:18.125 01:34:31 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:18.125 01:34:31 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:13:18.125 01:34:31 -- common/autotest_common.sh@10 -- # set +x 00:13:18.125 01:34:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:18.125 01:34:31 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:18.690 01:34:31 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:18.690 01:34:31 -- common/autotest_common.sh@1177 -- # local i=0 00:13:18.690 01:34:31 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:18.690 01:34:31 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:18.690 01:34:31 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:21.217 01:34:33 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:21.217 01:34:33 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:21.217 01:34:33 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:21.217 01:34:33 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:21.217 01:34:33 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:21.217 01:34:33 -- common/autotest_common.sh@1187 -- # return 0 00:13:21.217 01:34:33 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:21.217 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:21.217 01:34:33 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:21.217 01:34:33 -- common/autotest_common.sh@1198 -- # local i=0 00:13:21.217 01:34:33 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:21.217 01:34:33 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:21.217 01:34:33 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:21.217 01:34:33 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:21.217 
01:34:33 -- common/autotest_common.sh@1210 -- # return 0 00:13:21.217 01:34:33 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:21.217 01:34:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:21.217 01:34:33 -- common/autotest_common.sh@10 -- # set +x 00:13:21.217 01:34:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:21.217 01:34:33 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:21.217 01:34:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:21.217 01:34:33 -- common/autotest_common.sh@10 -- # set +x 00:13:21.217 01:34:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:21.217 01:34:33 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:21.217 01:34:33 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:21.217 01:34:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:21.217 01:34:33 -- common/autotest_common.sh@10 -- # set +x 00:13:21.217 01:34:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:21.217 01:34:33 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:21.217 01:34:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:21.217 01:34:33 -- common/autotest_common.sh@10 -- # set +x 00:13:21.217 [2024-07-23 01:34:33.971915] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:21.217 01:34:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:21.217 01:34:33 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:21.217 01:34:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:21.217 01:34:33 -- common/autotest_common.sh@10 -- # set +x 00:13:21.217 01:34:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:21.217 01:34:33 -- target/rpc.sh@85 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:21.217 01:34:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:21.217 01:34:33 -- common/autotest_common.sh@10 -- # set +x 00:13:21.217 01:34:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:21.217 01:34:33 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:21.783 01:34:34 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:21.783 01:34:34 -- common/autotest_common.sh@1177 -- # local i=0 00:13:21.783 01:34:34 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:21.783 01:34:34 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:21.783 01:34:34 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:23.712 01:34:36 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:23.712 01:34:36 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:23.712 01:34:36 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:23.712 01:34:36 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:23.712 01:34:36 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:23.712 01:34:36 -- common/autotest_common.sh@1187 -- # return 0 00:13:23.712 01:34:36 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:23.712 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:23.712 01:34:36 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:23.713 01:34:36 -- common/autotest_common.sh@1198 -- # local i=0 00:13:23.713 01:34:36 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:23.713 01:34:36 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:23.713 01:34:36 -- common/autotest_common.sh@1206 -- # lsblk -l -o 
NAME,SERIAL 00:13:23.713 01:34:36 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:23.713 01:34:36 -- common/autotest_common.sh@1210 -- # return 0 00:13:23.713 01:34:36 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:23.713 01:34:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.713 01:34:36 -- common/autotest_common.sh@10 -- # set +x 00:13:23.713 01:34:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.713 01:34:36 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:23.713 01:34:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.713 01:34:36 -- common/autotest_common.sh@10 -- # set +x 00:13:23.713 01:34:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.713 01:34:36 -- target/rpc.sh@99 -- # seq 1 5 00:13:23.713 01:34:36 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:23.713 01:34:36 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:23.713 01:34:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.713 01:34:36 -- common/autotest_common.sh@10 -- # set +x 00:13:23.713 01:34:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.713 01:34:36 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:23.713 01:34:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.713 01:34:36 -- common/autotest_common.sh@10 -- # set +x 00:13:23.713 [2024-07-23 01:34:36.741811] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:23.713 01:34:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.713 01:34:36 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:23.713 01:34:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.713 01:34:36 -- 
common/autotest_common.sh@10 -- # set +x 00:13:23.713 01:34:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.713 01:34:36 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:23.713 01:34:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.713 01:34:36 -- common/autotest_common.sh@10 -- # set +x 00:13:23.713 01:34:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.713 01:34:36 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:23.713 01:34:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.713 01:34:36 -- common/autotest_common.sh@10 -- # set +x 00:13:23.713 01:34:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.713 01:34:36 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:23.713 01:34:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.713 01:34:36 -- common/autotest_common.sh@10 -- # set +x 00:13:23.713 01:34:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.713 01:34:36 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:23.713 01:34:36 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:23.713 01:34:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.713 01:34:36 -- common/autotest_common.sh@10 -- # set +x 00:13:23.713 01:34:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.713 01:34:36 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:23.713 01:34:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.713 01:34:36 -- common/autotest_common.sh@10 -- # set +x 00:13:23.713 [2024-07-23 01:34:36.789923] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:23.977 01:34:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.977 
01:34:36 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:23.977 01:34:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.977 01:34:36 -- common/autotest_common.sh@10 -- # set +x 00:13:23.977 01:34:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.977 01:34:36 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:23.977 01:34:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.977 01:34:36 -- common/autotest_common.sh@10 -- # set +x 00:13:23.977 01:34:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.977 01:34:36 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:23.977 01:34:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.977 01:34:36 -- common/autotest_common.sh@10 -- # set +x 00:13:23.977 01:34:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.977 01:34:36 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:23.977 01:34:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.977 01:34:36 -- common/autotest_common.sh@10 -- # set +x 00:13:23.977 01:34:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.977 01:34:36 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:23.977 01:34:36 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:23.977 01:34:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.977 01:34:36 -- common/autotest_common.sh@10 -- # set +x 00:13:23.977 01:34:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.977 01:34:36 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:23.977 01:34:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.977 01:34:36 -- common/autotest_common.sh@10 -- # set +x 00:13:23.977 
[2024-07-23 01:34:36.838076] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:23.977 01:34:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.977 01:34:36 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:23.977 01:34:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.977 01:34:36 -- common/autotest_common.sh@10 -- # set +x 00:13:23.977 01:34:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.977 01:34:36 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:23.977 01:34:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.977 01:34:36 -- common/autotest_common.sh@10 -- # set +x 00:13:23.977 01:34:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.977 01:34:36 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:23.977 01:34:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.977 01:34:36 -- common/autotest_common.sh@10 -- # set +x 00:13:23.977 01:34:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.977 01:34:36 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:23.977 01:34:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.977 01:34:36 -- common/autotest_common.sh@10 -- # set +x 00:13:23.977 01:34:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.977 01:34:36 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:23.977 01:34:36 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:23.977 01:34:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.977 01:34:36 -- common/autotest_common.sh@10 -- # set +x 00:13:23.977 01:34:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.977 01:34:36 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:23.977 01:34:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.977 01:34:36 -- common/autotest_common.sh@10 -- # set +x 00:13:23.977 [2024-07-23 01:34:36.886223] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:23.977 01:34:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.977 01:34:36 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:23.977 01:34:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.977 01:34:36 -- common/autotest_common.sh@10 -- # set +x 00:13:23.977 01:34:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.977 01:34:36 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:23.977 01:34:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.977 01:34:36 -- common/autotest_common.sh@10 -- # set +x 00:13:23.977 01:34:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.977 01:34:36 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:23.977 01:34:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.977 01:34:36 -- common/autotest_common.sh@10 -- # set +x 00:13:23.977 01:34:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.977 01:34:36 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:23.977 01:34:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.977 01:34:36 -- common/autotest_common.sh@10 -- # set +x 00:13:23.977 01:34:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.977 01:34:36 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:23.977 01:34:36 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:23.977 01:34:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.977 01:34:36 
-- common/autotest_common.sh@10 -- # set +x 00:13:23.977 01:34:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.977 01:34:36 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:23.977 01:34:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.977 01:34:36 -- common/autotest_common.sh@10 -- # set +x 00:13:23.977 [2024-07-23 01:34:36.934400] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:23.977 01:34:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.977 01:34:36 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:23.977 01:34:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.977 01:34:36 -- common/autotest_common.sh@10 -- # set +x 00:13:23.977 01:34:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.977 01:34:36 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:23.977 01:34:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.977 01:34:36 -- common/autotest_common.sh@10 -- # set +x 00:13:23.977 01:34:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.977 01:34:36 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:23.977 01:34:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.977 01:34:36 -- common/autotest_common.sh@10 -- # set +x 00:13:23.977 01:34:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.977 01:34:36 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:23.977 01:34:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.977 01:34:36 -- common/autotest_common.sh@10 -- # set +x 00:13:23.977 01:34:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.977 01:34:36 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:13:23.977 01:34:36 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.977 01:34:36 -- common/autotest_common.sh@10 -- # set +x 00:13:23.977 01:34:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.977 01:34:36 -- target/rpc.sh@110 -- # stats='{ 00:13:23.977 "tick_rate": 2700000000, 00:13:23.977 "poll_groups": [ 00:13:23.977 { 00:13:23.977 "name": "nvmf_tgt_poll_group_0", 00:13:23.977 "admin_qpairs": 2, 00:13:23.977 "io_qpairs": 84, 00:13:23.977 "current_admin_qpairs": 0, 00:13:23.977 "current_io_qpairs": 0, 00:13:23.977 "pending_bdev_io": 0, 00:13:23.977 "completed_nvme_io": 135, 00:13:23.977 "transports": [ 00:13:23.977 { 00:13:23.977 "trtype": "TCP" 00:13:23.977 } 00:13:23.977 ] 00:13:23.977 }, 00:13:23.977 { 00:13:23.977 "name": "nvmf_tgt_poll_group_1", 00:13:23.977 "admin_qpairs": 2, 00:13:23.977 "io_qpairs": 84, 00:13:23.977 "current_admin_qpairs": 0, 00:13:23.977 "current_io_qpairs": 0, 00:13:23.977 "pending_bdev_io": 0, 00:13:23.977 "completed_nvme_io": 135, 00:13:23.977 "transports": [ 00:13:23.977 { 00:13:23.977 "trtype": "TCP" 00:13:23.978 } 00:13:23.978 ] 00:13:23.978 }, 00:13:23.978 { 00:13:23.978 "name": "nvmf_tgt_poll_group_2", 00:13:23.978 "admin_qpairs": 1, 00:13:23.978 "io_qpairs": 84, 00:13:23.978 "current_admin_qpairs": 0, 00:13:23.978 "current_io_qpairs": 0, 00:13:23.978 "pending_bdev_io": 0, 00:13:23.978 "completed_nvme_io": 234, 00:13:23.978 "transports": [ 00:13:23.978 { 00:13:23.978 "trtype": "TCP" 00:13:23.978 } 00:13:23.978 ] 00:13:23.978 }, 00:13:23.978 { 00:13:23.978 "name": "nvmf_tgt_poll_group_3", 00:13:23.978 "admin_qpairs": 2, 00:13:23.978 "io_qpairs": 84, 00:13:23.978 "current_admin_qpairs": 0, 00:13:23.978 "current_io_qpairs": 0, 00:13:23.978 "pending_bdev_io": 0, 00:13:23.978 "completed_nvme_io": 182, 00:13:23.978 "transports": [ 00:13:23.978 { 00:13:23.978 "trtype": "TCP" 00:13:23.978 } 00:13:23.978 ] 00:13:23.978 } 00:13:23.978 ] 00:13:23.978 }' 00:13:23.978 01:34:36 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 
00:13:23.978 01:34:36 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:23.978 01:34:36 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:23.978 01:34:36 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:23.978 01:34:37 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:23.978 01:34:37 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:23.978 01:34:37 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:23.978 01:34:37 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:23.978 01:34:37 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:23.978 01:34:37 -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:13:23.978 01:34:37 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:23.978 01:34:37 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:23.978 01:34:37 -- target/rpc.sh@123 -- # nvmftestfini 00:13:23.978 01:34:37 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:23.978 01:34:37 -- nvmf/common.sh@116 -- # sync 00:13:24.236 01:34:37 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:24.236 01:34:37 -- nvmf/common.sh@119 -- # set +e 00:13:24.236 01:34:37 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:24.236 01:34:37 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:24.236 rmmod nvme_tcp 00:13:24.236 rmmod nvme_fabrics 00:13:24.236 rmmod nvme_keyring 00:13:24.236 01:34:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:24.236 01:34:37 -- nvmf/common.sh@123 -- # set -e 00:13:24.236 01:34:37 -- nvmf/common.sh@124 -- # return 0 00:13:24.236 01:34:37 -- nvmf/common.sh@477 -- # '[' -n 3722806 ']' 00:13:24.236 01:34:37 -- nvmf/common.sh@478 -- # killprocess 3722806 00:13:24.236 01:34:37 -- common/autotest_common.sh@926 -- # '[' -z 3722806 ']' 00:13:24.236 01:34:37 -- common/autotest_common.sh@930 -- # kill -0 3722806 00:13:24.236 01:34:37 -- common/autotest_common.sh@931 -- # uname 00:13:24.236 01:34:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:24.236 
01:34:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3722806 00:13:24.236 01:34:37 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:24.236 01:34:37 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:24.236 01:34:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3722806' 00:13:24.236 killing process with pid 3722806 00:13:24.236 01:34:37 -- common/autotest_common.sh@945 -- # kill 3722806 00:13:24.236 01:34:37 -- common/autotest_common.sh@950 -- # wait 3722806 00:13:24.496 01:34:37 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:24.496 01:34:37 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:24.496 01:34:37 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:24.496 01:34:37 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:24.496 01:34:37 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:24.496 01:34:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:24.496 01:34:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:24.496 01:34:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:26.403 01:34:39 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:26.403 00:13:26.403 real 0m25.740s 00:13:26.403 user 1m24.330s 00:13:26.403 sys 0m4.143s 00:13:26.403 01:34:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:26.403 01:34:39 -- common/autotest_common.sh@10 -- # set +x 00:13:26.403 ************************************ 00:13:26.403 END TEST nvmf_rpc 00:13:26.403 ************************************ 00:13:26.403 01:34:39 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:26.403 01:34:39 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:26.403 01:34:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:26.403 01:34:39 -- common/autotest_common.sh@10 -- # set +x 00:13:26.403 
************************************ 00:13:26.403 START TEST nvmf_invalid 00:13:26.403 ************************************ 00:13:26.403 01:34:39 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:26.662 * Looking for test storage... 00:13:26.662 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:26.662 01:34:39 -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:26.662 01:34:39 -- nvmf/common.sh@7 -- # uname -s 00:13:26.662 01:34:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:26.662 01:34:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:26.662 01:34:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:26.662 01:34:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:26.662 01:34:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:26.662 01:34:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:26.662 01:34:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:26.662 01:34:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:26.662 01:34:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:26.662 01:34:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:26.662 01:34:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:26.662 01:34:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:26.662 01:34:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:26.662 01:34:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:26.662 01:34:39 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:26.662 01:34:39 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:26.662 01:34:39 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 
00:13:26.662 01:34:39 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:26.662 01:34:39 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:26.662 01:34:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.662 01:34:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.662 01:34:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.662 01:34:39 -- paths/export.sh@5 -- # export PATH 00:13:26.662 01:34:39 -- 
paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.662 01:34:39 -- nvmf/common.sh@46 -- # : 0 00:13:26.662 01:34:39 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:26.662 01:34:39 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:26.662 01:34:39 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:26.662 01:34:39 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:26.662 01:34:39 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:26.662 01:34:39 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:26.662 01:34:39 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:26.662 01:34:39 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:26.662 01:34:39 -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:26.662 01:34:39 -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:26.662 01:34:39 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:26.662 01:34:39 -- target/invalid.sh@14 -- # target=foobar 00:13:26.662 01:34:39 -- target/invalid.sh@16 -- # RANDOM=0 00:13:26.662 01:34:39 -- target/invalid.sh@34 -- # nvmftestinit 00:13:26.662 01:34:39 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:26.662 01:34:39 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:26.662 01:34:39 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:26.662 01:34:39 -- nvmf/common.sh@398 -- # local -g 
is_hw=no 00:13:26.662 01:34:39 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:26.662 01:34:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:26.662 01:34:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:26.662 01:34:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:26.662 01:34:39 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:26.662 01:34:39 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:26.662 01:34:39 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:26.662 01:34:39 -- common/autotest_common.sh@10 -- # set +x 00:13:28.564 01:34:41 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:28.564 01:34:41 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:28.564 01:34:41 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:28.564 01:34:41 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:28.564 01:34:41 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:28.564 01:34:41 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:28.564 01:34:41 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:28.564 01:34:41 -- nvmf/common.sh@294 -- # net_devs=() 00:13:28.564 01:34:41 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:28.564 01:34:41 -- nvmf/common.sh@295 -- # e810=() 00:13:28.564 01:34:41 -- nvmf/common.sh@295 -- # local -ga e810 00:13:28.564 01:34:41 -- nvmf/common.sh@296 -- # x722=() 00:13:28.564 01:34:41 -- nvmf/common.sh@296 -- # local -ga x722 00:13:28.564 01:34:41 -- nvmf/common.sh@297 -- # mlx=() 00:13:28.564 01:34:41 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:28.564 01:34:41 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:28.564 01:34:41 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:28.564 01:34:41 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:28.564 01:34:41 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:28.564 01:34:41 -- nvmf/common.sh@307 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:28.564 01:34:41 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:28.564 01:34:41 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:28.564 01:34:41 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:28.564 01:34:41 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:28.564 01:34:41 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:28.564 01:34:41 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:28.564 01:34:41 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:28.564 01:34:41 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:28.564 01:34:41 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:28.564 01:34:41 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:28.564 01:34:41 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:28.564 01:34:41 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:28.564 01:34:41 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:28.564 01:34:41 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:28.564 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:28.564 01:34:41 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:28.564 01:34:41 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:28.564 01:34:41 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:28.564 01:34:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:28.564 01:34:41 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:28.564 01:34:41 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:28.564 01:34:41 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:28.564 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:28.564 01:34:41 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:28.564 01:34:41 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:28.564 01:34:41 -- 
nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:28.564 01:34:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:28.564 01:34:41 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:28.564 01:34:41 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:28.564 01:34:41 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:28.564 01:34:41 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:28.564 01:34:41 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:28.564 01:34:41 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:28.564 01:34:41 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:28.564 01:34:41 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:28.564 01:34:41 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:28.564 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:28.564 01:34:41 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:28.564 01:34:41 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:28.564 01:34:41 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:28.564 01:34:41 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:28.564 01:34:41 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:28.564 01:34:41 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:28.564 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:28.564 01:34:41 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:28.564 01:34:41 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:28.564 01:34:41 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:28.564 01:34:41 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:28.564 01:34:41 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:28.564 01:34:41 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:28.564 01:34:41 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:28.564 01:34:41 -- nvmf/common.sh@229 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:28.564 01:34:41 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:28.564 01:34:41 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:28.564 01:34:41 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:28.564 01:34:41 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:28.564 01:34:41 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:28.564 01:34:41 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:28.564 01:34:41 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:28.564 01:34:41 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:28.564 01:34:41 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:28.564 01:34:41 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:28.564 01:34:41 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:28.564 01:34:41 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:28.564 01:34:41 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:28.564 01:34:41 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:28.823 01:34:41 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:28.823 01:34:41 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:28.823 01:34:41 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:28.823 01:34:41 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:28.823 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:28.823 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.144 ms 00:13:28.823 00:13:28.823 --- 10.0.0.2 ping statistics --- 00:13:28.823 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:28.823 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:13:28.823 01:34:41 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:28.823 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:28.823 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:13:28.823 00:13:28.823 --- 10.0.0.1 ping statistics --- 00:13:28.823 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:28.823 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:13:28.823 01:34:41 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:28.823 01:34:41 -- nvmf/common.sh@410 -- # return 0 00:13:28.823 01:34:41 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:28.823 01:34:41 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:28.823 01:34:41 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:28.823 01:34:41 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:28.823 01:34:41 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:28.823 01:34:41 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:28.823 01:34:41 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:28.823 01:34:41 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:28.823 01:34:41 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:28.823 01:34:41 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:28.823 01:34:41 -- common/autotest_common.sh@10 -- # set +x 00:13:28.823 01:34:41 -- nvmf/common.sh@469 -- # nvmfpid=3727445 00:13:28.823 01:34:41 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:28.823 01:34:41 -- nvmf/common.sh@470 -- # waitforlisten 3727445 00:13:28.823 01:34:41 -- common/autotest_common.sh@819 
-- # '[' -z 3727445 ']' 00:13:28.823 01:34:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:28.823 01:34:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:28.823 01:34:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:28.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:28.823 01:34:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:28.823 01:34:41 -- common/autotest_common.sh@10 -- # set +x 00:13:28.823 [2024-07-23 01:34:41.788486] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:13:28.823 [2024-07-23 01:34:41.788581] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:28.823 EAL: No free 2048 kB hugepages reported on node 1 00:13:28.823 [2024-07-23 01:34:41.852473] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:29.082 [2024-07-23 01:34:41.941401] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:29.082 [2024-07-23 01:34:41.941550] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:29.082 [2024-07-23 01:34:41.941567] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:29.082 [2024-07-23 01:34:41.941580] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:29.082 [2024-07-23 01:34:41.941660] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:29.082 [2024-07-23 01:34:41.941686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:29.082 [2024-07-23 01:34:41.941748] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:29.082 [2024-07-23 01:34:41.941750] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:29.647 01:34:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:29.647 01:34:42 -- common/autotest_common.sh@852 -- # return 0 00:13:29.647 01:34:42 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:29.647 01:34:42 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:29.647 01:34:42 -- common/autotest_common.sh@10 -- # set +x 00:13:29.905 01:34:42 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:29.905 01:34:42 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:29.905 01:34:42 -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode16076 00:13:29.905 [2024-07-23 01:34:42.985862] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:30.163 01:34:43 -- target/invalid.sh@40 -- # out='request: 00:13:30.163 { 00:13:30.163 "nqn": "nqn.2016-06.io.spdk:cnode16076", 00:13:30.163 "tgt_name": "foobar", 00:13:30.163 "method": "nvmf_create_subsystem", 00:13:30.163 "req_id": 1 00:13:30.163 } 00:13:30.163 Got JSON-RPC error response 00:13:30.163 response: 00:13:30.163 { 00:13:30.163 "code": -32603, 00:13:30.163 "message": "Unable to find target foobar" 00:13:30.163 }' 00:13:30.163 01:34:43 -- target/invalid.sh@41 -- # [[ request: 00:13:30.163 { 00:13:30.163 "nqn": "nqn.2016-06.io.spdk:cnode16076", 00:13:30.163 "tgt_name": "foobar", 00:13:30.163 "method": 
"nvmf_create_subsystem", 00:13:30.163 "req_id": 1 00:13:30.163 } 00:13:30.163 Got JSON-RPC error response 00:13:30.163 response: 00:13:30.163 { 00:13:30.163 "code": -32603, 00:13:30.163 "message": "Unable to find target foobar" 00:13:30.163 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:30.163 01:34:43 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:30.163 01:34:43 -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode11479 00:13:30.163 [2024-07-23 01:34:43.214628] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11479: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:30.163 01:34:43 -- target/invalid.sh@45 -- # out='request: 00:13:30.163 { 00:13:30.163 "nqn": "nqn.2016-06.io.spdk:cnode11479", 00:13:30.163 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:30.163 "method": "nvmf_create_subsystem", 00:13:30.163 "req_id": 1 00:13:30.163 } 00:13:30.163 Got JSON-RPC error response 00:13:30.163 response: 00:13:30.163 { 00:13:30.163 "code": -32602, 00:13:30.163 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:30.163 }' 00:13:30.163 01:34:43 -- target/invalid.sh@46 -- # [[ request: 00:13:30.163 { 00:13:30.163 "nqn": "nqn.2016-06.io.spdk:cnode11479", 00:13:30.163 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:30.163 "method": "nvmf_create_subsystem", 00:13:30.163 "req_id": 1 00:13:30.163 } 00:13:30.163 Got JSON-RPC error response 00:13:30.163 response: 00:13:30.163 { 00:13:30.163 "code": -32602, 00:13:30.163 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:30.163 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:30.163 01:34:43 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:30.163 01:34:43 -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode28065 00:13:30.420 [2024-07-23 
01:34:43.451384] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28065: invalid model number 'SPDK_Controller' 00:13:30.420 01:34:43 -- target/invalid.sh@50 -- # out='request: 00:13:30.420 { 00:13:30.420 "nqn": "nqn.2016-06.io.spdk:cnode28065", 00:13:30.420 "model_number": "SPDK_Controller\u001f", 00:13:30.420 "method": "nvmf_create_subsystem", 00:13:30.420 "req_id": 1 00:13:30.420 } 00:13:30.420 Got JSON-RPC error response 00:13:30.420 response: 00:13:30.420 { 00:13:30.420 "code": -32602, 00:13:30.420 "message": "Invalid MN SPDK_Controller\u001f" 00:13:30.420 }' 00:13:30.420 01:34:43 -- target/invalid.sh@51 -- # [[ request: 00:13:30.420 { 00:13:30.420 "nqn": "nqn.2016-06.io.spdk:cnode28065", 00:13:30.420 "model_number": "SPDK_Controller\u001f", 00:13:30.420 "method": "nvmf_create_subsystem", 00:13:30.420 "req_id": 1 00:13:30.420 } 00:13:30.420 Got JSON-RPC error response 00:13:30.420 response: 00:13:30.420 { 00:13:30.420 "code": -32602, 00:13:30.420 "message": "Invalid MN SPDK_Controller\u001f" 00:13:30.420 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:30.420 01:34:43 -- target/invalid.sh@54 -- # gen_random_s 21 00:13:30.420 01:34:43 -- target/invalid.sh@19 -- # local length=21 ll 00:13:30.420 01:34:43 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:30.420 01:34:43 -- target/invalid.sh@21 -- # local chars 00:13:30.420 01:34:43 -- target/invalid.sh@22 -- # local string 00:13:30.420 01:34:43 -- target/invalid.sh@24 -- # (( ll = 0 )) 
00:13:30.420 01:34:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.420 01:34:43 -- target/invalid.sh@25 -- # printf %x 49 00:13:30.420 01:34:43 -- target/invalid.sh@25 -- # echo -e '\x31' 00:13:30.420 01:34:43 -- target/invalid.sh@25 -- # string+=1 00:13:30.420 01:34:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.420 01:34:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.420 01:34:43 -- target/invalid.sh@25 -- # printf %x 69 00:13:30.420 01:34:43 -- target/invalid.sh@25 -- # echo -e '\x45' 00:13:30.420 01:34:43 -- target/invalid.sh@25 -- # string+=E 00:13:30.420 01:34:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.420 01:34:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.420 01:34:43 -- target/invalid.sh@25 -- # printf %x 72 00:13:30.420 01:34:43 -- target/invalid.sh@25 -- # echo -e '\x48' 00:13:30.420 01:34:43 -- target/invalid.sh@25 -- # string+=H 00:13:30.420 01:34:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.420 01:34:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.420 01:34:43 -- target/invalid.sh@25 -- # printf %x 118 00:13:30.420 01:34:43 -- target/invalid.sh@25 -- # echo -e '\x76' 00:13:30.420 01:34:43 -- target/invalid.sh@25 -- # string+=v 00:13:30.420 01:34:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.420 01:34:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.420 01:34:43 -- target/invalid.sh@25 -- # printf %x 124 00:13:30.420 01:34:43 -- target/invalid.sh@25 -- # echo -e '\x7c' 00:13:30.420 01:34:43 -- target/invalid.sh@25 -- # string+='|' 00:13:30.420 01:34:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.420 01:34:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.420 01:34:43 -- target/invalid.sh@25 -- # printf %x 121 00:13:30.420 01:34:43 -- target/invalid.sh@25 -- # echo -e '\x79' 00:13:30.420 01:34:43 -- target/invalid.sh@25 -- # string+=y 00:13:30.420 01:34:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.420 01:34:43 -- target/invalid.sh@24 -- # (( ll < 
length )) 00:13:30.420 01:34:43 -- target/invalid.sh@25 -- # printf %x 62 00:13:30.420 01:34:43 -- target/invalid.sh@25 -- # echo -e '\x3e' 00:13:30.420 01:34:43 -- target/invalid.sh@25 -- # string+='>' 00:13:30.420 01:34:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.420 01:34:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.420 01:34:43 -- target/invalid.sh@25 -- # printf %x 104 00:13:30.420 01:34:43 -- target/invalid.sh@25 -- # echo -e '\x68' 00:13:30.420 01:34:43 -- target/invalid.sh@25 -- # string+=h 00:13:30.420 01:34:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.420 01:34:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.420 01:34:43 -- target/invalid.sh@25 -- # printf %x 55 00:13:30.420 01:34:43 -- target/invalid.sh@25 -- # echo -e '\x37' 00:13:30.420 01:34:43 -- target/invalid.sh@25 -- # string+=7 00:13:30.420 01:34:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.420 01:34:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.420 01:34:43 -- target/invalid.sh@25 -- # printf %x 96 00:13:30.420 01:34:43 -- target/invalid.sh@25 -- # echo -e '\x60' 00:13:30.420 01:34:43 -- target/invalid.sh@25 -- # string+='`' 00:13:30.420 01:34:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.420 01:34:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.420 01:34:43 -- target/invalid.sh@25 -- # printf %x 96 00:13:30.420 01:34:43 -- target/invalid.sh@25 -- # echo -e '\x60' 00:13:30.420 01:34:43 -- target/invalid.sh@25 -- # string+='`' 00:13:30.420 01:34:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.420 01:34:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.420 01:34:43 -- target/invalid.sh@25 -- # printf %x 115 00:13:30.678 01:34:43 -- target/invalid.sh@25 -- # echo -e '\x73' 00:13:30.678 01:34:43 -- target/invalid.sh@25 -- # string+=s 00:13:30.678 01:34:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.678 01:34:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.678 01:34:43 -- target/invalid.sh@25 -- 
# printf %x 110 00:13:30.678 01:34:43 -- target/invalid.sh@25 -- # echo -e '\x6e' 00:13:30.678 01:34:43 -- target/invalid.sh@25 -- # string+=n 00:13:30.678 01:34:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.678 01:34:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.678 01:34:43 -- target/invalid.sh@25 -- # printf %x 89 00:13:30.678 01:34:43 -- target/invalid.sh@25 -- # echo -e '\x59' 00:13:30.678 01:34:43 -- target/invalid.sh@25 -- # string+=Y 00:13:30.678 01:34:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.678 01:34:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.678 01:34:43 -- target/invalid.sh@25 -- # printf %x 127 00:13:30.678 01:34:43 -- target/invalid.sh@25 -- # echo -e '\x7f' 00:13:30.678 01:34:43 -- target/invalid.sh@25 -- # string+=$'\177' 00:13:30.678 01:34:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.678 01:34:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.678 01:34:43 -- target/invalid.sh@25 -- # printf %x 45 00:13:30.678 01:34:43 -- target/invalid.sh@25 -- # echo -e '\x2d' 00:13:30.678 01:34:43 -- target/invalid.sh@25 -- # string+=- 00:13:30.678 01:34:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.678 01:34:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.678 01:34:43 -- target/invalid.sh@25 -- # printf %x 33 00:13:30.678 01:34:43 -- target/invalid.sh@25 -- # echo -e '\x21' 00:13:30.678 01:34:43 -- target/invalid.sh@25 -- # string+='!' 
00:13:30.678 01:34:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.678 01:34:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.678 01:34:43 -- target/invalid.sh@25 -- # printf %x 45 00:13:30.678 01:34:43 -- target/invalid.sh@25 -- # echo -e '\x2d' 00:13:30.678 01:34:43 -- target/invalid.sh@25 -- # string+=- 00:13:30.678 01:34:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.678 01:34:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.678 01:34:43 -- target/invalid.sh@25 -- # printf %x 68 00:13:30.678 01:34:43 -- target/invalid.sh@25 -- # echo -e '\x44' 00:13:30.678 01:34:43 -- target/invalid.sh@25 -- # string+=D 00:13:30.678 01:34:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.678 01:34:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.678 01:34:43 -- target/invalid.sh@25 -- # printf %x 77 00:13:30.678 01:34:43 -- target/invalid.sh@25 -- # echo -e '\x4d' 00:13:30.678 01:34:43 -- target/invalid.sh@25 -- # string+=M 00:13:30.678 01:34:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.678 01:34:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.678 01:34:43 -- target/invalid.sh@25 -- # printf %x 70 00:13:30.678 01:34:43 -- target/invalid.sh@25 -- # echo -e '\x46' 00:13:30.678 01:34:43 -- target/invalid.sh@25 -- # string+=F 00:13:30.678 01:34:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.678 01:34:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.678 01:34:43 -- target/invalid.sh@28 -- # [[ 1 == \- ]] 00:13:30.678 01:34:43 -- target/invalid.sh@31 -- # echo '1EHv|y>h7``snY-!-DMF' 00:13:30.678 01:34:43 -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '1EHv|y>h7``snY-!-DMF' nqn.2016-06.io.spdk:cnode27365 00:13:30.937 [2024-07-23 01:34:43.780501] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27365: invalid serial number '1EHv|y>h7``snY-!-DMF' 00:13:30.937 01:34:43 -- target/invalid.sh@54 -- # 
out='request: 00:13:30.937 { 00:13:30.937 "nqn": "nqn.2016-06.io.spdk:cnode27365", 00:13:30.937 "serial_number": "1EHv|y>h7``snY\u007f-!-DMF", 00:13:30.937 "method": "nvmf_create_subsystem", 00:13:30.937 "req_id": 1 00:13:30.937 } 00:13:30.937 Got JSON-RPC error response 00:13:30.937 response: 00:13:30.937 { 00:13:30.937 "code": -32602, 00:13:30.937 "message": "Invalid SN 1EHv|y>h7``snY\u007f-!-DMF" 00:13:30.937 }' 00:13:30.937 01:34:43 -- target/invalid.sh@55 -- # [[ request: 00:13:30.937 { 00:13:30.937 "nqn": "nqn.2016-06.io.spdk:cnode27365", 00:13:30.937 "serial_number": "1EHv|y>h7``snY\u007f-!-DMF", 00:13:30.937 "method": "nvmf_create_subsystem", 00:13:30.937 "req_id": 1 00:13:30.937 } 00:13:30.937 Got JSON-RPC error response 00:13:30.937 response: 00:13:30.937 { 00:13:30.937 "code": -32602, 00:13:30.937 "message": "Invalid SN 1EHv|y>h7``snY\u007f-!-DMF" 00:13:30.937 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:30.937 01:34:43 -- target/invalid.sh@58 -- # gen_random_s 41 00:13:30.937 01:34:43 -- target/invalid.sh@19 -- # local length=41 ll 00:13:30.937 01:34:43 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:30.937 01:34:43 -- target/invalid.sh@21 -- # local chars 00:13:30.937 01:34:43 -- target/invalid.sh@22 -- # local string 00:13:30.937 01:34:43 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:30.937 01:34:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.937 01:34:43 -- target/invalid.sh@25 -- # printf %x 72 00:13:30.937 01:34:43 -- target/invalid.sh@25 -- # echo 
-e '\x48' 00:13:30.937 01:34:43 -- target/invalid.sh@25 -- # string+=H 00:13:30.937 01:34:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.937 01:34:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.937 01:34:43 -- target/invalid.sh@25 -- # printf %x 39 00:13:30.937 01:34:43 -- target/invalid.sh@25 -- # echo -e '\x27' 00:13:30.937 01:34:43 -- target/invalid.sh@25 -- # string+=\' 00:13:30.937 01:34:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.937 01:34:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.937 01:34:43 -- target/invalid.sh@25 -- # printf %x 42 00:13:30.937 01:34:43 -- target/invalid.sh@25 -- # echo -e '\x2a' 00:13:30.937 01:34:43 -- target/invalid.sh@25 -- # string+='*' 00:13:30.937 01:34:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.937 01:34:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.937 01:34:43 -- target/invalid.sh@25 -- # printf %x 81 00:13:30.937 01:34:43 -- target/invalid.sh@25 -- # echo -e '\x51' 00:13:30.937 01:34:43 -- target/invalid.sh@25 -- # string+=Q 00:13:30.937 01:34:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.937 01:34:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.937 01:34:43 -- target/invalid.sh@25 -- # printf %x 97 00:13:30.937 01:34:43 -- target/invalid.sh@25 -- # echo -e '\x61' 00:13:30.937 01:34:43 -- target/invalid.sh@25 -- # string+=a 00:13:30.937 01:34:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.937 01:34:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.937 01:34:43 -- target/invalid.sh@25 -- # printf %x 75 00:13:30.937 01:34:43 -- target/invalid.sh@25 -- # echo -e '\x4b' 00:13:30.937 01:34:43 -- target/invalid.sh@25 -- # string+=K 00:13:30.937 01:34:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.937 01:34:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.937 01:34:43 -- target/invalid.sh@25 -- # printf %x 70 00:13:30.937 01:34:43 -- target/invalid.sh@25 -- # echo -e '\x46' 00:13:30.937 01:34:43 -- target/invalid.sh@25 -- # 
string+=F 00:13:30.937 01:34:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.937 01:34:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.937 01:34:43 -- target/invalid.sh@25 -- # printf %x 81 00:13:30.937 01:34:43 -- target/invalid.sh@25 -- # echo -e '\x51' 00:13:30.937 01:34:43 -- target/invalid.sh@25 -- # string+=Q 00:13:30.937 01:34:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.937 01:34:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.937 01:34:43 -- target/invalid.sh@25 -- # printf %x 96 00:13:30.937 01:34:43 -- target/invalid.sh@25 -- # echo -e '\x60' 00:13:30.937 01:34:43 -- target/invalid.sh@25 -- # string+='`' 00:13:30.937 01:34:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.937 01:34:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.937 01:34:43 -- target/invalid.sh@25 -- # printf %x 37 00:13:30.937 01:34:43 -- target/invalid.sh@25 -- # echo -e '\x25' 00:13:30.937 01:34:43 -- target/invalid.sh@25 -- # string+=% 00:13:30.937 01:34:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.937 01:34:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.937 01:34:43 -- target/invalid.sh@25 -- # printf %x 40 00:13:30.937 01:34:43 -- target/invalid.sh@25 -- # echo -e '\x28' 00:13:30.937 01:34:43 -- target/invalid.sh@25 -- # string+='(' 00:13:30.937 01:34:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.937 01:34:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.937 01:34:43 -- target/invalid.sh@25 -- # printf %x 65 00:13:30.937 01:34:43 -- target/invalid.sh@25 -- # echo -e '\x41' 00:13:30.937 01:34:43 -- target/invalid.sh@25 -- # string+=A 00:13:30.937 01:34:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.937 01:34:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.937 01:34:43 -- target/invalid.sh@25 -- # printf %x 107 00:13:30.937 01:34:43 -- target/invalid.sh@25 -- # echo -e '\x6b' 00:13:30.937 01:34:43 -- target/invalid.sh@25 -- # string+=k 00:13:30.937 01:34:43 -- target/invalid.sh@24 -- # 
(( ll++ )) 00:13:30.937 01:34:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.937 01:34:43 -- target/invalid.sh@25 -- # printf %x 83 00:13:30.937 01:34:43 -- target/invalid.sh@25 -- # echo -e '\x53' 00:13:30.937 01:34:43 -- target/invalid.sh@25 -- # string+=S 00:13:30.937 01:34:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.937 01:34:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.937 01:34:43 -- target/invalid.sh@25 -- # printf %x 47 00:13:30.937 01:34:43 -- target/invalid.sh@25 -- # echo -e '\x2f' 00:13:30.937 01:34:43 -- target/invalid.sh@25 -- # string+=/ 00:13:30.937 01:34:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.937 01:34:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.937 01:34:43 -- target/invalid.sh@25 -- # printf %x 106 00:13:30.937 01:34:43 -- target/invalid.sh@25 -- # echo -e '\x6a' 00:13:30.937 01:34:43 -- target/invalid.sh@25 -- # string+=j 00:13:30.937 01:34:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.937 01:34:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.937 01:34:43 -- target/invalid.sh@25 -- # printf %x 127 00:13:30.937 01:34:43 -- target/invalid.sh@25 -- # echo -e '\x7f' 00:13:30.937 01:34:43 -- target/invalid.sh@25 -- # string+=$'\177' 00:13:30.937 01:34:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.937 01:34:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.937 01:34:43 -- target/invalid.sh@25 -- # printf %x 95 00:13:30.937 01:34:43 -- target/invalid.sh@25 -- # echo -e '\x5f' 00:13:30.937 01:34:43 -- target/invalid.sh@25 -- # string+=_ 00:13:30.937 01:34:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.937 01:34:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.937 01:34:43 -- target/invalid.sh@25 -- # printf %x 45 00:13:30.937 01:34:43 -- target/invalid.sh@25 -- # echo -e '\x2d' 00:13:30.937 01:34:43 -- target/invalid.sh@25 -- # string+=- 00:13:30.937 01:34:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.937 01:34:43 -- target/invalid.sh@24 -- 
# (( ll < length )) 00:13:30.937 01:34:43 -- target/invalid.sh@25 -- # printf %x 75 00:13:30.937 01:34:43 -- target/invalid.sh@25 -- # echo -e '\x4b' 00:13:30.937 01:34:43 -- target/invalid.sh@25 -- # string+=K 00:13:30.937 01:34:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.937 01:34:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.937 01:34:43 -- target/invalid.sh@25 -- # printf %x 120 00:13:30.937 01:34:43 -- target/invalid.sh@25 -- # echo -e '\x78' 00:13:30.937 01:34:43 -- target/invalid.sh@25 -- # string+=x 00:13:30.937 01:34:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.937 01:34:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.937 01:34:43 -- target/invalid.sh@25 -- # printf %x 102 00:13:30.937 01:34:43 -- target/invalid.sh@25 -- # echo -e '\x66' 00:13:30.937 01:34:43 -- target/invalid.sh@25 -- # string+=f 00:13:30.937 01:34:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.937 01:34:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.937 01:34:43 -- target/invalid.sh@25 -- # printf %x 33 00:13:30.937 01:34:43 -- target/invalid.sh@25 -- # echo -e '\x21' 00:13:30.937 01:34:43 -- target/invalid.sh@25 -- # string+='!' 
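The `== *\I\n\v\a\l\i\d\ \S\N*` and `== *\I\n\v\a\l\i\d\ \M\N*` tests earlier in the trace look cryptic only because xtrace escapes every character of an unquoted glob; they are plain substring matches. An equivalent, quoted form (the sample `out` value below is abbreviated from the log, not the full JSON-RPC response):

```shell
#!/usr/bin/env bash
# The escaped glob from the trace, rewritten with a quoted pattern.
out='Got JSON-RPC error response: "message": "Invalid SN 1EHv..."'
if [[ $out == *"Invalid SN"* ]]; then
    result=matched
else
    result=no-match
fi
echo "$result"
```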
00:13:30.937 01:34:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.938 01:34:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.938 01:34:43 -- target/invalid.sh@25 -- # printf %x 44 00:13:30.938 01:34:43 -- target/invalid.sh@25 -- # echo -e '\x2c' 00:13:30.938 01:34:43 -- target/invalid.sh@25 -- # string+=, 00:13:30.938 01:34:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.938 01:34:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.938 01:34:43 -- target/invalid.sh@25 -- # printf %x 120 00:13:30.938 01:34:43 -- target/invalid.sh@25 -- # echo -e '\x78' 00:13:30.938 01:34:43 -- target/invalid.sh@25 -- # string+=x 00:13:30.938 01:34:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.938 01:34:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.938 01:34:43 -- target/invalid.sh@25 -- # printf %x 114 00:13:30.938 01:34:43 -- target/invalid.sh@25 -- # echo -e '\x72' 00:13:30.938 01:34:43 -- target/invalid.sh@25 -- # string+=r 00:13:30.938 01:34:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.938 01:34:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.938 01:34:43 -- target/invalid.sh@25 -- # printf %x 121 00:13:30.938 01:34:43 -- target/invalid.sh@25 -- # echo -e '\x79' 00:13:30.938 01:34:43 -- target/invalid.sh@25 -- # string+=y 00:13:30.938 01:34:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.938 01:34:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.938 01:34:43 -- target/invalid.sh@25 -- # printf %x 123 00:13:30.938 01:34:43 -- target/invalid.sh@25 -- # echo -e '\x7b' 00:13:30.938 01:34:43 -- target/invalid.sh@25 -- # string+='{' 00:13:30.938 01:34:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.938 01:34:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.938 01:34:43 -- target/invalid.sh@25 -- # printf %x 95 00:13:30.938 01:34:43 -- target/invalid.sh@25 -- # echo -e '\x5f' 00:13:30.938 01:34:43 -- target/invalid.sh@25 -- # string+=_ 00:13:30.938 01:34:43 -- target/invalid.sh@24 -- # (( ll++ )) 
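The hundreds of `printf %x` / `echo -e` / `string+=` records above are one helper, `gen_random_s`, building a string one random character per loop iteration. A condensed sketch of the same loop (a simplification, not the real helper: it restricts itself to visible ASCII 33-126 so the length is reliable, whereas the helper in the log also emits space and DEL):

```shell
#!/usr/bin/env bash
# Simplified stand-in for invalid.sh's gen_random_s: append `length`
# random visible-ASCII characters to a string, one per loop iteration.
gen_random_s() {
    local length=$1 ll out=
    for (( ll = 0; ll < length; ll++ )); do
        local code=$(( 33 + RANDOM % 94 ))            # visible ASCII only
        out+=$(printf "\\$(printf '%03o' "$code")")   # octal escape -> char
    done
    printf '%s\n' "$out"
}
s=$(gen_random_s 41)    # 41 matches the gen_random_s 41 call in this trace
echo "${#s}"
```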
00:13:30.938 01:34:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.938 01:34:43 -- target/invalid.sh@25 -- # printf %x 84 00:13:30.938 01:34:43 -- target/invalid.sh@25 -- # echo -e '\x54' 00:13:30.938 01:34:43 -- target/invalid.sh@25 -- # string+=T 00:13:30.938 01:34:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.938 01:34:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.938 01:34:43 -- target/invalid.sh@25 -- # printf %x 107 00:13:30.938 01:34:43 -- target/invalid.sh@25 -- # echo -e '\x6b' 00:13:30.938 01:34:43 -- target/invalid.sh@25 -- # string+=k 00:13:30.938 01:34:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.938 01:34:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.938 01:34:43 -- target/invalid.sh@25 -- # printf %x 109 00:13:30.938 01:34:43 -- target/invalid.sh@25 -- # echo -e '\x6d' 00:13:30.938 01:34:43 -- target/invalid.sh@25 -- # string+=m 00:13:30.938 01:34:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.938 01:34:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.938 01:34:43 -- target/invalid.sh@25 -- # printf %x 54 00:13:30.938 01:34:43 -- target/invalid.sh@25 -- # echo -e '\x36' 00:13:30.938 01:34:43 -- target/invalid.sh@25 -- # string+=6 00:13:30.938 01:34:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.938 01:34:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.938 01:34:43 -- target/invalid.sh@25 -- # printf %x 88 00:13:30.938 01:34:43 -- target/invalid.sh@25 -- # echo -e '\x58' 00:13:30.938 01:34:43 -- target/invalid.sh@25 -- # string+=X 00:13:30.938 01:34:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.938 01:34:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.938 01:34:43 -- target/invalid.sh@25 -- # printf %x 71 00:13:30.938 01:34:43 -- target/invalid.sh@25 -- # echo -e '\x47' 00:13:30.938 01:34:43 -- target/invalid.sh@25 -- # string+=G 00:13:30.938 01:34:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.938 01:34:43 -- target/invalid.sh@24 -- # (( ll < length 
)) 00:13:30.938 01:34:43 -- target/invalid.sh@25 -- # printf %x 126 00:13:30.938 01:34:43 -- target/invalid.sh@25 -- # echo -e '\x7e' 00:13:30.938 01:34:43 -- target/invalid.sh@25 -- # string+='~' 00:13:30.938 01:34:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.938 01:34:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.938 01:34:43 -- target/invalid.sh@25 -- # printf %x 103 00:13:30.938 01:34:43 -- target/invalid.sh@25 -- # echo -e '\x67' 00:13:30.938 01:34:43 -- target/invalid.sh@25 -- # string+=g 00:13:30.938 01:34:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.938 01:34:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.938 01:34:43 -- target/invalid.sh@25 -- # printf %x 67 00:13:30.938 01:34:43 -- target/invalid.sh@25 -- # echo -e '\x43' 00:13:30.938 01:34:43 -- target/invalid.sh@25 -- # string+=C 00:13:30.938 01:34:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.938 01:34:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.938 01:34:43 -- target/invalid.sh@25 -- # printf %x 101 00:13:30.938 01:34:43 -- target/invalid.sh@25 -- # echo -e '\x65' 00:13:30.938 01:34:43 -- target/invalid.sh@25 -- # string+=e 00:13:30.938 01:34:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.938 01:34:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.938 01:34:43 -- target/invalid.sh@25 -- # printf %x 74 00:13:30.938 01:34:43 -- target/invalid.sh@25 -- # echo -e '\x4a' 00:13:30.938 01:34:43 -- target/invalid.sh@25 -- # string+=J 00:13:30.938 01:34:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.938 01:34:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.938 01:34:43 -- target/invalid.sh@25 -- # printf %x 35 00:13:30.938 01:34:43 -- target/invalid.sh@25 -- # echo -e '\x23' 00:13:30.938 01:34:43 -- target/invalid.sh@25 -- # string+='#' 00:13:30.938 01:34:43 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:30.938 01:34:43 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:30.938 01:34:43 -- target/invalid.sh@28 -- # [[ H 
== \- ]] 00:13:30.938 01:34:43 -- target/invalid.sh@31 -- # echo 'H'\''*QaKFQ`%(AkS/j_-Kxf!,xry{_Tkm6XG~gCeJ#' 00:13:30.938 01:34:43 -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'H'\''*QaKFQ`%(AkS/j_-Kxf!,xry{_Tkm6XG~gCeJ#' nqn.2016-06.io.spdk:cnode27142 00:13:31.196 [2024-07-23 01:34:44.177846] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27142: invalid model number 'H'*QaKFQ`%(AkS/j_-Kxf!,xry{_Tkm6XG~gCeJ#' 00:13:31.196 01:34:44 -- target/invalid.sh@58 -- # out='request: 00:13:31.196 { 00:13:31.196 "nqn": "nqn.2016-06.io.spdk:cnode27142", 00:13:31.196 "model_number": "H'\''*QaKFQ`%(AkS/j\u007f_-Kxf!,xry{_Tkm6XG~gCeJ#", 00:13:31.196 "method": "nvmf_create_subsystem", 00:13:31.196 "req_id": 1 00:13:31.196 } 00:13:31.196 Got JSON-RPC error response 00:13:31.196 response: 00:13:31.196 { 00:13:31.196 "code": -32602, 00:13:31.196 "message": "Invalid MN H'\''*QaKFQ`%(AkS/j\u007f_-Kxf!,xry{_Tkm6XG~gCeJ#" 00:13:31.196 }' 00:13:31.196 01:34:44 -- target/invalid.sh@59 -- # [[ request: 00:13:31.196 { 00:13:31.196 "nqn": "nqn.2016-06.io.spdk:cnode27142", 00:13:31.196 "model_number": "H'*QaKFQ`%(AkS/j\u007f_-Kxf!,xry{_Tkm6XG~gCeJ#", 00:13:31.196 "method": "nvmf_create_subsystem", 00:13:31.196 "req_id": 1 00:13:31.196 } 00:13:31.196 Got JSON-RPC error response 00:13:31.196 response: 00:13:31.196 { 00:13:31.196 "code": -32602, 00:13:31.196 "message": "Invalid MN H'*QaKFQ`%(AkS/j\u007f_-Kxf!,xry{_Tkm6XG~gCeJ#" 00:13:31.196 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:31.196 01:34:44 -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:13:31.454 [2024-07-23 01:34:44.410698] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:31.454 01:34:44 -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:13:31.711 01:34:44 -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:13:31.711 01:34:44 -- target/invalid.sh@67 -- # echo '' 00:13:31.712 01:34:44 -- target/invalid.sh@67 -- # head -n 1 00:13:31.712 01:34:44 -- target/invalid.sh@67 -- # IP= 00:13:31.712 01:34:44 -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:13:31.969 [2024-07-23 01:34:44.900348] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:13:31.969 01:34:44 -- target/invalid.sh@69 -- # out='request: 00:13:31.969 { 00:13:31.969 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:31.969 "listen_address": { 00:13:31.969 "trtype": "tcp", 00:13:31.969 "traddr": "", 00:13:31.969 "trsvcid": "4421" 00:13:31.969 }, 00:13:31.969 "method": "nvmf_subsystem_remove_listener", 00:13:31.969 "req_id": 1 00:13:31.969 } 00:13:31.969 Got JSON-RPC error response 00:13:31.969 response: 00:13:31.969 { 00:13:31.969 "code": -32602, 00:13:31.969 "message": "Invalid parameters" 00:13:31.969 }' 00:13:31.969 01:34:44 -- target/invalid.sh@70 -- # [[ request: 00:13:31.969 { 00:13:31.969 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:31.969 "listen_address": { 00:13:31.969 "trtype": "tcp", 00:13:31.969 "traddr": "", 00:13:31.969 "trsvcid": "4421" 00:13:31.969 }, 00:13:31.969 "method": "nvmf_subsystem_remove_listener", 00:13:31.969 "req_id": 1 00:13:31.969 } 00:13:31.969 Got JSON-RPC error response 00:13:31.969 response: 00:13:31.969 { 00:13:31.969 "code": -32602, 00:13:31.969 "message": "Invalid parameters" 00:13:31.969 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:13:31.969 01:34:44 -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4151 -i 0 00:13:32.227 [2024-07-23 01:34:45.157180] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem 
nqn.2016-06.io.spdk:cnode4151: invalid cntlid range [0-65519] 00:13:32.227 01:34:45 -- target/invalid.sh@73 -- # out='request: 00:13:32.227 { 00:13:32.227 "nqn": "nqn.2016-06.io.spdk:cnode4151", 00:13:32.227 "min_cntlid": 0, 00:13:32.227 "method": "nvmf_create_subsystem", 00:13:32.227 "req_id": 1 00:13:32.227 } 00:13:32.227 Got JSON-RPC error response 00:13:32.227 response: 00:13:32.227 { 00:13:32.227 "code": -32602, 00:13:32.227 "message": "Invalid cntlid range [0-65519]" 00:13:32.227 }' 00:13:32.227 01:34:45 -- target/invalid.sh@74 -- # [[ request: 00:13:32.227 { 00:13:32.227 "nqn": "nqn.2016-06.io.spdk:cnode4151", 00:13:32.227 "min_cntlid": 0, 00:13:32.227 "method": "nvmf_create_subsystem", 00:13:32.227 "req_id": 1 00:13:32.227 } 00:13:32.227 Got JSON-RPC error response 00:13:32.227 response: 00:13:32.227 { 00:13:32.227 "code": -32602, 00:13:32.227 "message": "Invalid cntlid range [0-65519]" 00:13:32.227 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:32.227 01:34:45 -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode32549 -i 65520 00:13:32.485 [2024-07-23 01:34:45.393980] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32549: invalid cntlid range [65520-65519] 00:13:32.485 01:34:45 -- target/invalid.sh@75 -- # out='request: 00:13:32.485 { 00:13:32.485 "nqn": "nqn.2016-06.io.spdk:cnode32549", 00:13:32.485 "min_cntlid": 65520, 00:13:32.485 "method": "nvmf_create_subsystem", 00:13:32.485 "req_id": 1 00:13:32.485 } 00:13:32.485 Got JSON-RPC error response 00:13:32.485 response: 00:13:32.485 { 00:13:32.485 "code": -32602, 00:13:32.485 "message": "Invalid cntlid range [65520-65519]" 00:13:32.485 }' 00:13:32.485 01:34:45 -- target/invalid.sh@76 -- # [[ request: 00:13:32.485 { 00:13:32.485 "nqn": "nqn.2016-06.io.spdk:cnode32549", 00:13:32.485 "min_cntlid": 65520, 00:13:32.485 "method": "nvmf_create_subsystem", 00:13:32.485 "req_id": 
1 00:13:32.485 } 00:13:32.485 Got JSON-RPC error response 00:13:32.485 response: 00:13:32.485 { 00:13:32.485 "code": -32602, 00:13:32.485 "message": "Invalid cntlid range [65520-65519]" 00:13:32.485 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:32.485 01:34:45 -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode21432 -I 0 00:13:32.743 [2024-07-23 01:34:45.626792] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21432: invalid cntlid range [1-0] 00:13:32.743 01:34:45 -- target/invalid.sh@77 -- # out='request: 00:13:32.743 { 00:13:32.743 "nqn": "nqn.2016-06.io.spdk:cnode21432", 00:13:32.743 "max_cntlid": 0, 00:13:32.743 "method": "nvmf_create_subsystem", 00:13:32.743 "req_id": 1 00:13:32.743 } 00:13:32.743 Got JSON-RPC error response 00:13:32.743 response: 00:13:32.743 { 00:13:32.743 "code": -32602, 00:13:32.743 "message": "Invalid cntlid range [1-0]" 00:13:32.743 }' 00:13:32.743 01:34:45 -- target/invalid.sh@78 -- # [[ request: 00:13:32.743 { 00:13:32.743 "nqn": "nqn.2016-06.io.spdk:cnode21432", 00:13:32.743 "max_cntlid": 0, 00:13:32.743 "method": "nvmf_create_subsystem", 00:13:32.743 "req_id": 1 00:13:32.743 } 00:13:32.743 Got JSON-RPC error response 00:13:32.743 response: 00:13:32.743 { 00:13:32.743 "code": -32602, 00:13:32.743 "message": "Invalid cntlid range [1-0]" 00:13:32.743 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:32.743 01:34:45 -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode15463 -I 65520 00:13:33.001 [2024-07-23 01:34:45.855579] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15463: invalid cntlid range [1-65520] 00:13:33.001 01:34:45 -- target/invalid.sh@79 -- # out='request: 00:13:33.001 { 00:13:33.001 "nqn": "nqn.2016-06.io.spdk:cnode15463", 00:13:33.001 "max_cntlid": 
65520, 00:13:33.001 "method": "nvmf_create_subsystem", 00:13:33.001 "req_id": 1 00:13:33.001 } 00:13:33.001 Got JSON-RPC error response 00:13:33.001 response: 00:13:33.001 { 00:13:33.001 "code": -32602, 00:13:33.001 "message": "Invalid cntlid range [1-65520]" 00:13:33.001 }' 00:13:33.001 01:34:45 -- target/invalid.sh@80 -- # [[ request: 00:13:33.001 { 00:13:33.001 "nqn": "nqn.2016-06.io.spdk:cnode15463", 00:13:33.001 "max_cntlid": 65520, 00:13:33.001 "method": "nvmf_create_subsystem", 00:13:33.001 "req_id": 1 00:13:33.001 } 00:13:33.001 Got JSON-RPC error response 00:13:33.001 response: 00:13:33.001 { 00:13:33.001 "code": -32602, 00:13:33.001 "message": "Invalid cntlid range [1-65520]" 00:13:33.001 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:33.001 01:34:45 -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20496 -i 6 -I 5 00:13:33.001 [2024-07-23 01:34:46.100405] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20496: invalid cntlid range [6-5] 00:13:33.259 01:34:46 -- target/invalid.sh@83 -- # out='request: 00:13:33.259 { 00:13:33.259 "nqn": "nqn.2016-06.io.spdk:cnode20496", 00:13:33.259 "min_cntlid": 6, 00:13:33.259 "max_cntlid": 5, 00:13:33.259 "method": "nvmf_create_subsystem", 00:13:33.259 "req_id": 1 00:13:33.259 } 00:13:33.259 Got JSON-RPC error response 00:13:33.259 response: 00:13:33.259 { 00:13:33.259 "code": -32602, 00:13:33.259 "message": "Invalid cntlid range [6-5]" 00:13:33.259 }' 00:13:33.259 01:34:46 -- target/invalid.sh@84 -- # [[ request: 00:13:33.259 { 00:13:33.259 "nqn": "nqn.2016-06.io.spdk:cnode20496", 00:13:33.259 "min_cntlid": 6, 00:13:33.259 "max_cntlid": 5, 00:13:33.259 "method": "nvmf_create_subsystem", 00:13:33.259 "req_id": 1 00:13:33.259 } 00:13:33.259 Got JSON-RPC error response 00:13:33.259 response: 00:13:33.259 { 00:13:33.259 "code": -32602, 00:13:33.259 "message": "Invalid cntlid range 
[6-5]" 00:13:33.259 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:33.259 01:34:46 -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:13:33.259 01:34:46 -- target/invalid.sh@87 -- # out='request: 00:13:33.259 { 00:13:33.259 "name": "foobar", 00:13:33.259 "method": "nvmf_delete_target", 00:13:33.259 "req_id": 1 00:13:33.259 } 00:13:33.259 Got JSON-RPC error response 00:13:33.259 response: 00:13:33.259 { 00:13:33.259 "code": -32602, 00:13:33.259 "message": "The specified target doesn'\''t exist, cannot delete it." 00:13:33.259 }' 00:13:33.259 01:34:46 -- target/invalid.sh@88 -- # [[ request: 00:13:33.259 { 00:13:33.259 "name": "foobar", 00:13:33.259 "method": "nvmf_delete_target", 00:13:33.259 "req_id": 1 00:13:33.259 } 00:13:33.259 Got JSON-RPC error response 00:13:33.259 response: 00:13:33.259 { 00:13:33.259 "code": -32602, 00:13:33.259 "message": "The specified target doesn't exist, cannot delete it." 
00:13:33.259 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:13:33.259 01:34:46 -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:13:33.259 01:34:46 -- target/invalid.sh@91 -- # nvmftestfini 00:13:33.259 01:34:46 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:33.259 01:34:46 -- nvmf/common.sh@116 -- # sync 00:13:33.259 01:34:46 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:33.259 01:34:46 -- nvmf/common.sh@119 -- # set +e 00:13:33.259 01:34:46 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:33.259 01:34:46 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:33.259 rmmod nvme_tcp 00:13:33.259 rmmod nvme_fabrics 00:13:33.259 rmmod nvme_keyring 00:13:33.259 01:34:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:33.259 01:34:46 -- nvmf/common.sh@123 -- # set -e 00:13:33.259 01:34:46 -- nvmf/common.sh@124 -- # return 0 00:13:33.259 01:34:46 -- nvmf/common.sh@477 -- # '[' -n 3727445 ']' 00:13:33.259 01:34:46 -- nvmf/common.sh@478 -- # killprocess 3727445 00:13:33.259 01:34:46 -- common/autotest_common.sh@926 -- # '[' -z 3727445 ']' 00:13:33.259 01:34:46 -- common/autotest_common.sh@930 -- # kill -0 3727445 00:13:33.259 01:34:46 -- common/autotest_common.sh@931 -- # uname 00:13:33.259 01:34:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:33.259 01:34:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3727445 00:13:33.259 01:34:46 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:33.259 01:34:46 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:33.259 01:34:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3727445' 00:13:33.259 killing process with pid 3727445 00:13:33.259 01:34:46 -- common/autotest_common.sh@945 -- # kill 3727445 00:13:33.260 01:34:46 -- common/autotest_common.sh@950 -- # wait 3727445 00:13:33.518 01:34:46 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 
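The cntlid failures above ([0-65519], [65520-65519], [1-0], [1-65520], [6-5]) all trip the same bounds check. Judging only by the error strings in this log, the rule is min_cntlid >= 1, max_cntlid <= 65519, and min <= max; a sketch of that check (an inference from the log output, not SPDK's actual source):

```shell
#!/usr/bin/env bash
# Bounds check inferred from the "Invalid cntlid range [a-b]" errors:
# defaults are min=1, max=65519; both must stay in [1, 65519] and min<=max.
valid_cntlid_range() {
    local min=$1 max=$2
    (( min >= 1 && max <= 65519 && min <= max ))
}
valid_cntlid_range 1 65519 && echo "1-65519 accepted"
valid_cntlid_range 6 5     || echo "6-5 rejected"
```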
00:13:33.518 01:34:46 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:33.518 01:34:46 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:33.518 01:34:46 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:33.518 01:34:46 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:33.518 01:34:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:33.518 01:34:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:33.518 01:34:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:36.056 01:34:48 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:36.056 00:13:36.056 real 0m9.146s 00:13:36.056 user 0m22.073s 00:13:36.056 sys 0m2.508s 00:13:36.056 01:34:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:36.056 01:34:48 -- common/autotest_common.sh@10 -- # set +x 00:13:36.056 ************************************ 00:13:36.056 END TEST nvmf_invalid 00:13:36.056 ************************************ 00:13:36.056 01:34:48 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:36.056 01:34:48 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:36.056 01:34:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:36.056 01:34:48 -- common/autotest_common.sh@10 -- # set +x 00:13:36.056 ************************************ 00:13:36.056 START TEST nvmf_abort 00:13:36.056 ************************************ 00:13:36.056 01:34:48 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:36.056 * Looking for test storage... 
00:13:36.056 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:36.056 01:34:48 -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:36.056 01:34:48 -- nvmf/common.sh@7 -- # uname -s 00:13:36.056 01:34:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:36.056 01:34:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:36.056 01:34:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:36.056 01:34:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:36.056 01:34:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:36.056 01:34:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:36.056 01:34:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:36.056 01:34:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:36.056 01:34:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:36.056 01:34:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:36.056 01:34:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:36.056 01:34:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:36.056 01:34:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:36.056 01:34:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:36.056 01:34:48 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:36.056 01:34:48 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:36.056 01:34:48 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:36.056 01:34:48 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:36.056 01:34:48 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:36.056 01:34:48 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.056 01:34:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.056 01:34:48 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.056 01:34:48 -- paths/export.sh@5 -- # export PATH 00:13:36.056 01:34:48 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.056 01:34:48 -- nvmf/common.sh@46 -- # : 0 00:13:36.056 01:34:48 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:36.056 01:34:48 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:36.056 01:34:48 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:36.056 01:34:48 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:36.056 01:34:48 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:36.056 01:34:48 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:36.056 01:34:48 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:36.056 01:34:48 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:36.056 01:34:48 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:36.056 01:34:48 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:13:36.056 01:34:48 -- target/abort.sh@14 -- # nvmftestinit 00:13:36.056 01:34:48 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:36.056 01:34:48 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:36.056 01:34:48 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:36.056 01:34:48 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:36.056 01:34:48 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:36.056 01:34:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:36.056 01:34:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:36.056 01:34:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:36.056 01:34:48 -- 
nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:36.056 01:34:48 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:36.056 01:34:48 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:36.056 01:34:48 -- common/autotest_common.sh@10 -- # set +x 00:13:37.958 01:34:50 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:37.958 01:34:50 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:37.958 01:34:50 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:37.958 01:34:50 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:37.958 01:34:50 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:37.958 01:34:50 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:37.958 01:34:50 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:37.958 01:34:50 -- nvmf/common.sh@294 -- # net_devs=() 00:13:37.958 01:34:50 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:37.958 01:34:50 -- nvmf/common.sh@295 -- # e810=() 00:13:37.958 01:34:50 -- nvmf/common.sh@295 -- # local -ga e810 00:13:37.958 01:34:50 -- nvmf/common.sh@296 -- # x722=() 00:13:37.958 01:34:50 -- nvmf/common.sh@296 -- # local -ga x722 00:13:37.958 01:34:50 -- nvmf/common.sh@297 -- # mlx=() 00:13:37.958 01:34:50 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:37.958 01:34:50 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:37.958 01:34:50 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:37.959 01:34:50 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:37.959 01:34:50 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:37.959 01:34:50 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:37.959 01:34:50 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:37.959 01:34:50 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:37.959 01:34:50 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:37.959 01:34:50 -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:37.959 01:34:50 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:37.959 01:34:50 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:37.959 01:34:50 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:37.959 01:34:50 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:37.959 01:34:50 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:37.959 01:34:50 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:37.959 01:34:50 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:37.959 01:34:50 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:37.959 01:34:50 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:37.959 01:34:50 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:37.959 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:37.959 01:34:50 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:37.959 01:34:50 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:37.959 01:34:50 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:37.959 01:34:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:37.959 01:34:50 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:37.959 01:34:50 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:37.959 01:34:50 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:37.959 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:37.959 01:34:50 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:37.959 01:34:50 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:37.959 01:34:50 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:37.959 01:34:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:37.959 01:34:50 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:37.959 01:34:50 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:37.959 01:34:50 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:37.959 01:34:50 -- 
nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:37.959 01:34:50 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:37.959 01:34:50 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:37.959 01:34:50 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:37.959 01:34:50 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:37.959 01:34:50 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:37.959 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:37.959 01:34:50 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:37.959 01:34:50 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:37.959 01:34:50 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:37.959 01:34:50 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:37.959 01:34:50 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:37.959 01:34:50 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:37.959 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:37.959 01:34:50 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:37.959 01:34:50 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:37.959 01:34:50 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:37.959 01:34:50 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:37.959 01:34:50 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:37.959 01:34:50 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:37.959 01:34:50 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:37.959 01:34:50 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:37.959 01:34:50 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:37.959 01:34:50 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:37.959 01:34:50 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:37.959 01:34:50 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:37.959 01:34:50 -- 
nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:37.959 01:34:50 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:37.959 01:34:50 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:37.959 01:34:50 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:37.959 01:34:50 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:37.959 01:34:50 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:37.959 01:34:50 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:37.959 01:34:50 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:37.959 01:34:50 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:37.959 01:34:50 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:37.959 01:34:50 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:37.959 01:34:50 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:37.959 01:34:50 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:37.959 01:34:50 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:37.959 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:37.959 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.119 ms 00:13:37.959 00:13:37.959 --- 10.0.0.2 ping statistics --- 00:13:37.959 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:37.959 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:13:37.959 01:34:50 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:37.959 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:37.959 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:13:37.959 00:13:37.959 --- 10.0.0.1 ping statistics --- 00:13:37.959 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:37.959 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:13:37.959 01:34:50 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:37.959 01:34:50 -- nvmf/common.sh@410 -- # return 0 00:13:37.959 01:34:50 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:37.959 01:34:50 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:37.959 01:34:50 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:37.959 01:34:50 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:37.959 01:34:50 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:37.959 01:34:50 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:37.959 01:34:50 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:37.959 01:34:50 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:13:37.959 01:34:50 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:37.959 01:34:50 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:37.959 01:34:50 -- common/autotest_common.sh@10 -- # set +x 00:13:37.959 01:34:50 -- nvmf/common.sh@469 -- # nvmfpid=3730177 00:13:37.959 01:34:50 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:37.959 01:34:50 -- nvmf/common.sh@470 -- # waitforlisten 3730177 00:13:37.959 01:34:50 -- common/autotest_common.sh@819 -- # '[' -z 3730177 ']' 00:13:37.959 01:34:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:37.959 01:34:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:37.959 01:34:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:37.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:37.959 01:34:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:37.959 01:34:50 -- common/autotest_common.sh@10 -- # set +x 00:13:37.959 [2024-07-23 01:34:50.826241] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:13:37.959 [2024-07-23 01:34:50.826321] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:37.959 EAL: No free 2048 kB hugepages reported on node 1 00:13:37.959 [2024-07-23 01:34:50.901055] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:37.959 [2024-07-23 01:34:50.992638] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:37.959 [2024-07-23 01:34:50.992823] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:37.959 [2024-07-23 01:34:50.992843] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:37.959 [2024-07-23 01:34:50.992857] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:37.959 [2024-07-23 01:34:50.992954] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:37.959 [2024-07-23 01:34:50.993008] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:37.959 [2024-07-23 01:34:50.993012] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:38.892 01:34:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:38.892 01:34:51 -- common/autotest_common.sh@852 -- # return 0 00:13:38.892 01:34:51 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:38.892 01:34:51 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:38.892 01:34:51 -- common/autotest_common.sh@10 -- # set +x 00:13:38.892 01:34:51 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:38.892 01:34:51 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:13:38.892 01:34:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:38.892 01:34:51 -- common/autotest_common.sh@10 -- # set +x 00:13:38.892 [2024-07-23 01:34:51.804001] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:38.892 01:34:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:38.892 01:34:51 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:13:38.892 01:34:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:38.892 01:34:51 -- common/autotest_common.sh@10 -- # set +x 00:13:38.892 Malloc0 00:13:38.892 01:34:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:38.892 01:34:51 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:38.892 01:34:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:38.892 01:34:51 -- common/autotest_common.sh@10 -- # set +x 00:13:38.892 Delay0 00:13:38.892 01:34:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:38.892 01:34:51 -- target/abort.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:38.892 01:34:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:38.892 01:34:51 -- common/autotest_common.sh@10 -- # set +x 00:13:38.892 01:34:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:38.892 01:34:51 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:13:38.892 01:34:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:38.892 01:34:51 -- common/autotest_common.sh@10 -- # set +x 00:13:38.892 01:34:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:38.892 01:34:51 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:38.892 01:34:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:38.892 01:34:51 -- common/autotest_common.sh@10 -- # set +x 00:13:38.892 [2024-07-23 01:34:51.875568] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:38.892 01:34:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:38.892 01:34:51 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:38.892 01:34:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:38.892 01:34:51 -- common/autotest_common.sh@10 -- # set +x 00:13:38.892 01:34:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:38.892 01:34:51 -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:13:38.892 EAL: No free 2048 kB hugepages reported on node 1 00:13:38.892 [2024-07-23 01:34:51.982115] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:13:41.420 Initializing NVMe Controllers 00:13:41.420 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: 
nqn.2016-06.io.spdk:cnode0 00:13:41.420 controller IO queue size 128 less than required 00:13:41.420 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:13:41.420 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:13:41.420 Initialization complete. Launching workers. 00:13:41.420 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 30070 00:13:41.420 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 30131, failed to submit 62 00:13:41.420 success 30070, unsuccess 61, failed 0 00:13:41.420 01:34:54 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:41.420 01:34:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:41.420 01:34:54 -- common/autotest_common.sh@10 -- # set +x 00:13:41.420 01:34:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:41.420 01:34:54 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:13:41.420 01:34:54 -- target/abort.sh@38 -- # nvmftestfini 00:13:41.420 01:34:54 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:41.420 01:34:54 -- nvmf/common.sh@116 -- # sync 00:13:41.420 01:34:54 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:41.420 01:34:54 -- nvmf/common.sh@119 -- # set +e 00:13:41.420 01:34:54 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:41.420 01:34:54 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:41.420 rmmod nvme_tcp 00:13:41.420 rmmod nvme_fabrics 00:13:41.420 rmmod nvme_keyring 00:13:41.420 01:34:54 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:41.420 01:34:54 -- nvmf/common.sh@123 -- # set -e 00:13:41.420 01:34:54 -- nvmf/common.sh@124 -- # return 0 00:13:41.420 01:34:54 -- nvmf/common.sh@477 -- # '[' -n 3730177 ']' 00:13:41.420 01:34:54 -- nvmf/common.sh@478 -- # killprocess 3730177 00:13:41.420 01:34:54 -- common/autotest_common.sh@926 -- # '[' -z 3730177 ']' 00:13:41.420 01:34:54 
-- common/autotest_common.sh@930 -- # kill -0 3730177 00:13:41.420 01:34:54 -- common/autotest_common.sh@931 -- # uname 00:13:41.420 01:34:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:41.420 01:34:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3730177 00:13:41.420 01:34:54 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:13:41.420 01:34:54 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:13:41.420 01:34:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3730177' 00:13:41.420 killing process with pid 3730177 00:13:41.420 01:34:54 -- common/autotest_common.sh@945 -- # kill 3730177 00:13:41.420 01:34:54 -- common/autotest_common.sh@950 -- # wait 3730177 00:13:41.420 01:34:54 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:41.420 01:34:54 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:41.420 01:34:54 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:41.420 01:34:54 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:41.420 01:34:54 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:41.420 01:34:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:41.420 01:34:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:41.420 01:34:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:43.986 01:34:56 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:43.986 00:13:43.986 real 0m7.827s 00:13:43.986 user 0m12.674s 00:13:43.986 sys 0m2.493s 00:13:43.986 01:34:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:43.986 01:34:56 -- common/autotest_common.sh@10 -- # set +x 00:13:43.986 ************************************ 00:13:43.986 END TEST nvmf_abort 00:13:43.986 ************************************ 00:13:43.986 01:34:56 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 
00:13:43.986 01:34:56 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:43.986 01:34:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:43.986 01:34:56 -- common/autotest_common.sh@10 -- # set +x 00:13:43.986 ************************************ 00:13:43.986 START TEST nvmf_ns_hotplug_stress 00:13:43.986 ************************************ 00:13:43.986 01:34:56 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:43.986 * Looking for test storage... 00:13:43.986 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:43.986 01:34:56 -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:43.986 01:34:56 -- nvmf/common.sh@7 -- # uname -s 00:13:43.986 01:34:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:43.986 01:34:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:43.986 01:34:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:43.986 01:34:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:43.986 01:34:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:43.986 01:34:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:43.986 01:34:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:43.986 01:34:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:43.986 01:34:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:43.986 01:34:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:43.986 01:34:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:43.986 01:34:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:43.986 01:34:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:43.986 01:34:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 
00:13:43.986 01:34:56 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:43.986 01:34:56 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:43.986 01:34:56 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:43.986 01:34:56 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:43.986 01:34:56 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:43.986 01:34:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.986 01:34:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.986 01:34:56 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.986 01:34:56 -- paths/export.sh@5 -- # export PATH 00:13:43.986 01:34:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.986 01:34:56 -- nvmf/common.sh@46 -- # : 0 00:13:43.986 01:34:56 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:43.986 01:34:56 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:43.986 01:34:56 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:43.986 01:34:56 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:43.986 01:34:56 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:43.986 01:34:56 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:43.986 01:34:56 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:43.986 01:34:56 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:43.986 01:34:56 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:43.986 01:34:56 -- target/ns_hotplug_stress.sh@22 -- # 
nvmftestinit 00:13:43.986 01:34:56 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:43.986 01:34:56 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:43.986 01:34:56 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:43.986 01:34:56 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:43.986 01:34:56 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:43.986 01:34:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:43.986 01:34:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:43.986 01:34:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:43.986 01:34:56 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:43.986 01:34:56 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:43.986 01:34:56 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:43.986 01:34:56 -- common/autotest_common.sh@10 -- # set +x 00:13:45.889 01:34:58 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:45.889 01:34:58 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:45.889 01:34:58 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:45.889 01:34:58 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:45.889 01:34:58 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:45.889 01:34:58 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:45.889 01:34:58 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:45.889 01:34:58 -- nvmf/common.sh@294 -- # net_devs=() 00:13:45.889 01:34:58 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:45.889 01:34:58 -- nvmf/common.sh@295 -- # e810=() 00:13:45.889 01:34:58 -- nvmf/common.sh@295 -- # local -ga e810 00:13:45.889 01:34:58 -- nvmf/common.sh@296 -- # x722=() 00:13:45.889 01:34:58 -- nvmf/common.sh@296 -- # local -ga x722 00:13:45.889 01:34:58 -- nvmf/common.sh@297 -- # mlx=() 00:13:45.889 01:34:58 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:45.889 01:34:58 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:45.889 01:34:58 -- 
nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:45.889 01:34:58 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:45.889 01:34:58 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:45.889 01:34:58 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:45.889 01:34:58 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:45.889 01:34:58 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:45.889 01:34:58 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:45.889 01:34:58 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:45.889 01:34:58 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:45.889 01:34:58 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:45.889 01:34:58 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:45.889 01:34:58 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:45.889 01:34:58 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:45.889 01:34:58 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:45.889 01:34:58 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:45.889 01:34:58 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:45.889 01:34:58 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:45.889 01:34:58 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:45.889 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:45.889 01:34:58 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:45.889 01:34:58 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:45.889 01:34:58 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:45.889 01:34:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:45.889 01:34:58 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:45.889 01:34:58 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:45.889 01:34:58 -- 
nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:45.889 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:45.889 01:34:58 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:45.889 01:34:58 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:45.889 01:34:58 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:45.889 01:34:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:45.889 01:34:58 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:45.889 01:34:58 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:45.889 01:34:58 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:45.889 01:34:58 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:45.889 01:34:58 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:45.889 01:34:58 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:45.889 01:34:58 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:45.889 01:34:58 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:45.889 01:34:58 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:45.889 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:45.889 01:34:58 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:45.889 01:34:58 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:45.889 01:34:58 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:45.889 01:34:58 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:45.889 01:34:58 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:45.889 01:34:58 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:45.889 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:45.889 01:34:58 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:45.889 01:34:58 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:45.889 01:34:58 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:45.889 01:34:58 -- nvmf/common.sh@404 -- # [[ yes == yes 
]] 00:13:45.889 01:34:58 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:45.889 01:34:58 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:45.889 01:34:58 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:45.889 01:34:58 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:45.889 01:34:58 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:45.889 01:34:58 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:45.889 01:34:58 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:45.889 01:34:58 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:45.889 01:34:58 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:45.889 01:34:58 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:45.889 01:34:58 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:45.889 01:34:58 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:45.889 01:34:58 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:45.889 01:34:58 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:45.889 01:34:58 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:45.889 01:34:58 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:45.889 01:34:58 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:45.889 01:34:58 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:45.889 01:34:58 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:45.889 01:34:58 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:45.889 01:34:58 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:45.889 01:34:58 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:45.889 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:45.889 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.116 ms 00:13:45.889 00:13:45.889 --- 10.0.0.2 ping statistics --- 00:13:45.889 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:45.889 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:13:45.889 01:34:58 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:45.889 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:45.889 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:13:45.889 00:13:45.889 --- 10.0.0.1 ping statistics --- 00:13:45.889 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:45.889 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:13:45.889 01:34:58 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:45.889 01:34:58 -- nvmf/common.sh@410 -- # return 0 00:13:45.889 01:34:58 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:45.889 01:34:58 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:45.889 01:34:58 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:45.889 01:34:58 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:45.889 01:34:58 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:45.889 01:34:58 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:45.889 01:34:58 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:45.889 01:34:58 -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:13:45.889 01:34:58 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:45.889 01:34:58 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:45.889 01:34:58 -- common/autotest_common.sh@10 -- # set +x 00:13:45.890 01:34:58 -- nvmf/common.sh@469 -- # nvmfpid=3732562 00:13:45.890 01:34:58 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:45.890 01:34:58 -- nvmf/common.sh@470 -- # waitforlisten 3732562 00:13:45.890 01:34:58 -- 
common/autotest_common.sh@819 -- # '[' -z 3732562 ']' 00:13:45.890 01:34:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:45.890 01:34:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:45.890 01:34:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:45.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:45.890 01:34:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:45.890 01:34:58 -- common/autotest_common.sh@10 -- # set +x 00:13:45.890 [2024-07-23 01:34:58.779271] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:13:45.890 [2024-07-23 01:34:58.779371] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:45.890 EAL: No free 2048 kB hugepages reported on node 1 00:13:45.890 [2024-07-23 01:34:58.850178] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:45.890 [2024-07-23 01:34:58.941587] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:45.890 [2024-07-23 01:34:58.941776] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:45.890 [2024-07-23 01:34:58.941797] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:45.890 [2024-07-23 01:34:58.941812] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:45.890 [2024-07-23 01:34:58.941916] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:45.890 [2024-07-23 01:34:58.941975] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:45.890 [2024-07-23 01:34:58.941978] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:46.824 01:34:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:46.824 01:34:59 -- common/autotest_common.sh@852 -- # return 0 00:13:46.824 01:34:59 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:46.824 01:34:59 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:46.824 01:34:59 -- common/autotest_common.sh@10 -- # set +x 00:13:46.824 01:34:59 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:46.824 01:34:59 -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:13:46.824 01:34:59 -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:47.082 [2024-07-23 01:34:59.951039] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:47.082 01:34:59 -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:47.339 01:35:00 -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:47.339 [2024-07-23 01:35:00.433721] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:47.597 01:35:00 -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:47.855 01:35:00 -- target/ns_hotplug_stress.sh@32 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:13:47.855 Malloc0 00:13:48.112 01:35:00 -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:48.112 Delay0 00:13:48.112 01:35:01 -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:48.369 01:35:01 -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:13:48.627 NULL1 00:13:48.627 01:35:01 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:48.885 01:35:01 -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3732891 00:13:48.885 01:35:01 -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:13:48.885 01:35:01 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3732891 00:13:48.885 01:35:01 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:48.885 EAL: No free 2048 kB hugepages reported on node 1 00:13:50.258 Read completed with error (sct=0, sc=11) 00:13:50.258 01:35:03 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:50.258 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:50.258 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:50.258 Message suppressed 999 times: Read completed with 
error (sct=0, sc=11) 00:13:50.258 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:50.258 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:50.258 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:50.258 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:50.258 01:35:03 -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:13:50.258 01:35:03 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:13:50.516 true 00:13:50.516 01:35:03 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3732891 00:13:50.516 01:35:03 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:51.449 01:35:04 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:51.449 01:35:04 -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:13:51.449 01:35:04 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:13:51.706 true 00:13:51.706 01:35:04 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3732891 00:13:51.706 01:35:04 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:51.964 01:35:05 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:52.221 01:35:05 -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:13:52.221 01:35:05 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:13:52.480 true 00:13:52.480 
01:35:05 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3732891 00:13:52.480 01:35:05 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:53.413 01:35:06 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:53.413 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:53.413 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:53.413 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:53.670 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:53.670 01:35:06 -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:13:53.670 01:35:06 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:13:53.928 true 00:13:53.928 01:35:06 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3732891 00:13:53.928 01:35:06 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:54.186 01:35:07 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:54.443 01:35:07 -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:13:54.443 01:35:07 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:13:54.702 true 00:13:54.702 01:35:07 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3732891 00:13:54.702 01:35:07 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:55.635 01:35:08 -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:55.635 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:55.893 01:35:08 -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:13:55.893 01:35:08 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:13:56.151 true 00:13:56.151 01:35:09 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3732891 00:13:56.151 01:35:09 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:56.409 01:35:09 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:56.667 01:35:09 -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:13:56.667 01:35:09 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:13:56.667 true 00:13:56.925 01:35:09 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3732891 00:13:56.925 01:35:09 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:57.859 01:35:10 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:57.859 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:58.116 01:35:11 -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:13:58.116 01:35:11 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:13:58.374 true 00:13:58.374 01:35:11 -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 3732891 00:13:58.374 01:35:11 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:58.633 01:35:11 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:58.916 01:35:11 -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:13:58.916 01:35:11 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:13:58.916 true 00:13:58.916 01:35:11 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3732891 00:13:58.916 01:35:11 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:59.850 01:35:12 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:59.850 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:00.107 01:35:13 -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:14:00.107 01:35:13 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:14:00.364 true 00:14:00.364 01:35:13 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3732891 00:14:00.364 01:35:13 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:00.621 01:35:13 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:00.878 01:35:13 -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:14:00.878 01:35:13 -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:14:01.135 true 00:14:01.135 01:35:14 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3732891 00:14:01.135 01:35:14 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:02.067 01:35:14 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:02.324 01:35:15 -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:14:02.324 01:35:15 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:14:02.582 true 00:14:02.582 01:35:15 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3732891 00:14:02.582 01:35:15 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:02.839 01:35:15 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:03.097 01:35:15 -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:14:03.097 01:35:15 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:14:03.354 true 00:14:03.354 01:35:16 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3732891 00:14:03.354 01:35:16 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:03.611 01:35:16 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:03.611 01:35:16 -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:14:03.611 01:35:16 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:14:03.869 true 00:14:03.869 01:35:16 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3732891 00:14:03.869 01:35:16 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:05.240 01:35:18 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:05.240 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:05.241 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:05.498 01:35:18 -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:14:05.498 01:35:18 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:14:05.498 true 00:14:05.498 01:35:18 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3732891 00:14:05.498 01:35:18 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:05.755 01:35:18 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:06.012 01:35:19 -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:14:06.012 01:35:19 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:14:06.270 true 00:14:06.270 01:35:19 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3732891 00:14:06.270 01:35:19 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:14:07.202 01:35:20 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:07.202 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:07.202 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:07.202 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:07.460 01:35:20 -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:14:07.460 01:35:20 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:14:07.733 true 00:14:07.733 01:35:20 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3732891 00:14:07.733 01:35:20 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:07.990 01:35:20 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:08.247 01:35:21 -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:14:08.247 01:35:21 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:14:08.504 true 00:14:08.504 01:35:21 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3732891 00:14:08.504 01:35:21 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:09.435 01:35:22 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:09.693 01:35:22 -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:14:09.693 01:35:22 -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:14:09.950 true 00:14:09.950 01:35:22 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3732891 00:14:09.950 01:35:22 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:10.208 01:35:23 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:10.464 01:35:23 -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:14:10.464 01:35:23 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:14:10.721 true 00:14:10.721 01:35:23 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3732891 00:14:10.721 01:35:23 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:11.652 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:11.652 01:35:24 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:11.652 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:11.652 01:35:24 -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:14:11.652 01:35:24 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:14:11.910 true 00:14:11.910 01:35:24 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3732891 00:14:11.910 01:35:24 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:12.167 01:35:25 -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:12.425 01:35:25 -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:14:12.425 01:35:25 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:14:12.683 true 00:14:12.683 01:35:25 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3732891 00:14:12.683 01:35:25 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:13.615 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:13.615 01:35:26 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:13.615 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:13.872 01:35:26 -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:14:13.872 01:35:26 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:14:13.872 true 00:14:14.160 01:35:26 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3732891 00:14:14.160 01:35:26 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:14.160 01:35:27 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:14.418 01:35:27 -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:14:14.418 01:35:27 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:14:14.675 true 00:14:14.675 01:35:27 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3732891 
00:14:14.675 01:35:27 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:15.610 01:35:28 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:15.610 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:15.868 01:35:28 -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:14:15.868 01:35:28 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:14:16.126 true 00:14:16.126 01:35:29 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3732891 00:14:16.126 01:35:29 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:16.383 01:35:29 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:16.641 01:35:29 -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:14:16.641 01:35:29 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:14:16.899 true 00:14:16.899 01:35:29 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3732891 00:14:16.899 01:35:29 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:17.833 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:14:17.833 01:35:30 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:18.091 01:35:30 -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:14:18.091 
01:35:30 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:14:18.349 true 00:14:18.349 01:35:31 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3732891 00:14:18.349 01:35:31 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:18.607 01:35:31 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:18.865 01:35:31 -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:14:18.865 01:35:31 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:14:18.865 true 00:14:18.865 01:35:31 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3732891 00:14:18.865 01:35:31 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:19.798 Initializing NVMe Controllers 00:14:19.798 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:19.798 Controller IO queue size 128, less than required. 00:14:19.798 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:19.798 Controller IO queue size 128, less than required. 00:14:19.798 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:19.798 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:19.798 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:19.798 Initialization complete. Launching workers. 
00:14:19.798 ======================================================== 00:14:19.798 Latency(us) 00:14:19.798 Device Information : IOPS MiB/s Average min max 00:14:19.798 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 925.83 0.45 77504.91 2135.58 1088483.71 00:14:19.798 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 12222.88 5.97 10472.19 2189.10 364271.97 00:14:19.798 ======================================================== 00:14:19.798 Total : 13148.71 6.42 15192.11 2135.58 1088483.71 00:14:19.798 00:14:19.798 01:35:32 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:20.056 01:35:33 -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:14:20.056 01:35:33 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:14:20.319 true 00:14:20.319 01:35:33 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3732891 00:14:20.319 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3732891) - No such process 00:14:20.319 01:35:33 -- target/ns_hotplug_stress.sh@53 -- # wait 3732891 00:14:20.319 01:35:33 -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:20.578 01:35:33 -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:20.835 01:35:33 -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:14:20.835 01:35:33 -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:14:20.835 01:35:33 -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:14:20.835 01:35:33 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:20.835 01:35:33 -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:14:21.092 null0 00:14:21.092 01:35:34 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:21.092 01:35:34 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:21.092 01:35:34 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:14:21.350 null1 00:14:21.350 01:35:34 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:21.350 01:35:34 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:21.350 01:35:34 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:14:21.606 null2 00:14:21.606 01:35:34 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:21.607 01:35:34 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:21.607 01:35:34 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:14:21.864 null3 00:14:21.864 01:35:34 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:21.864 01:35:34 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:21.864 01:35:34 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:14:22.122 null4 00:14:22.122 01:35:35 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:22.122 01:35:35 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:22.122 01:35:35 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:14:22.379 null5 00:14:22.379 01:35:35 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:22.379 01:35:35 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:22.379 01:35:35 -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:14:22.636 null6 00:14:22.636 01:35:35 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:22.636 01:35:35 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:22.636 01:35:35 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:14:22.636 null7 00:14:22.894 01:35:35 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:14:22.894 01:35:35 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:14:22.895 01:35:35 -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:14:22.895 01:35:35 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:22.895 01:35:35 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:22.895 01:35:35 -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:14:22.895 01:35:35 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:22.895 01:35:35 -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:14:22.895 01:35:35 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:22.895 01:35:35 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:22.895 01:35:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:22.895 01:35:35 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:22.895 01:35:35 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:14:22.895 01:35:35 -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:14:22.895 01:35:35 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:22.895 01:35:35 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:22.895 01:35:35 -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:14:22.895 01:35:35 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:22.895 01:35:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:22.895 01:35:35 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:22.895 01:35:35 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:22.895 01:35:35 -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:14:22.895 01:35:35 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:22.895 01:35:35 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:22.895 01:35:35 -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:14:22.895 01:35:35 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:22.895 01:35:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:22.895 01:35:35 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:22.895 01:35:35 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:14:22.895 01:35:35 -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:14:22.895 01:35:35 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:22.895 01:35:35 -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:14:22.895 01:35:35 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:22.895 01:35:35 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:22.895 01:35:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:22.895 01:35:35 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:22.895 01:35:35 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:22.895 01:35:35 -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:14:22.895 01:35:35 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:22.895 01:35:35 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:22.895 01:35:35 -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:14:22.895 01:35:35 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:22.895 01:35:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:22.895 01:35:35 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:22.895 01:35:35 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:14:22.895 01:35:35 -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:14:22.895 01:35:35 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:22.895 01:35:35 -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:14:22.895 01:35:35 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:22.895 01:35:35 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:22.895 01:35:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:22.895 01:35:35 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:22.895 01:35:35 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:14:22.895 01:35:35 -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:14:22.895 01:35:35 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:22.895 01:35:35 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:22.895 01:35:35 -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:14:22.895 01:35:35 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:22.895 01:35:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:22.895 01:35:35 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:22.895 01:35:35 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:14:22.895 01:35:35 -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:14:22.895 01:35:35 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:14:22.895 01:35:35 -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:14:22.895 01:35:35 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:14:22.895 01:35:35 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:14:22.895 01:35:35 -- target/ns_hotplug_stress.sh@66 -- # wait 3737186 3737187 3737189 3737191 3737193 3737195 3737197 3737199 00:14:22.895 01:35:35 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:22.895 01:35:35 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:23.153 01:35:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:23.153 01:35:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:23.153 01:35:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:23.153 01:35:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:23.153 01:35:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:23.153 01:35:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:23.153 01:35:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:23.153 01:35:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:23.411 01:35:36 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:23.411 01:35:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:23.411 01:35:36 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:23.411 01:35:36 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:23.411 01:35:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:23.411 01:35:36 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:23.411 01:35:36 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:23.411 01:35:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:23.411 01:35:36 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:23.411 01:35:36 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:23.411 01:35:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:23.411 01:35:36 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:23.411 01:35:36 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:23.411 01:35:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:23.411 01:35:36 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:23.411 01:35:36 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:23.411 01:35:36 -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:23.411 01:35:36 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:23.411 01:35:36 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:23.411 01:35:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:23.411 01:35:36 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:23.411 01:35:36 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:23.411 01:35:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:23.411 01:35:36 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:23.669 01:35:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:23.669 01:35:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:23.669 01:35:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:23.669 01:35:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:23.669 01:35:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:23.669 01:35:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:23.669 01:35:36 
-- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:23.669 01:35:36 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:23.927 01:35:36 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:23.927 01:35:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:23.927 01:35:36 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:23.927 01:35:36 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:23.927 01:35:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:23.927 01:35:36 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:23.927 01:35:36 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:23.927 01:35:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:23.927 01:35:36 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:23.927 01:35:36 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:23.927 01:35:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:23.927 01:35:36 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:23.927 01:35:36 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:23.927 01:35:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:23.927 01:35:36 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 
00:14:23.927 01:35:36 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:23.927 01:35:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:23.927 01:35:36 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:23.927 01:35:36 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:23.927 01:35:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:23.927 01:35:36 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:23.927 01:35:36 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:23.927 01:35:36 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:23.927 01:35:36 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:24.185 01:35:37 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:24.185 01:35:37 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:24.185 01:35:37 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:24.185 01:35:37 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:24.185 01:35:37 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:24.185 01:35:37 -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:24.185 01:35:37 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:24.185 01:35:37 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:24.443 01:35:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:24.443 01:35:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:24.443 01:35:37 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:24.443 01:35:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:24.443 01:35:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:24.443 01:35:37 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:24.443 01:35:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:24.443 01:35:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:24.443 01:35:37 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:24.443 01:35:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:24.443 01:35:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:24.443 01:35:37 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:24.443 01:35:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:24.443 01:35:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:24.443 01:35:37 -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:24.443 01:35:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:24.443 01:35:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:24.443 01:35:37 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:24.443 01:35:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:24.443 01:35:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:24.443 01:35:37 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:24.443 01:35:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:24.443 01:35:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:24.443 01:35:37 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:24.701 01:35:37 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:24.701 01:35:37 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:24.701 01:35:37 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:24.701 01:35:37 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:24.701 01:35:37 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:24.701 01:35:37 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:24.701 01:35:37 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:24.701 01:35:37 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:24.959 01:35:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:24.959 01:35:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:24.959 01:35:37 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:24.959 01:35:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:24.959 01:35:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:24.959 01:35:37 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:24.959 01:35:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:24.959 01:35:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:24.959 01:35:37 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:24.959 01:35:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:24.959 01:35:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:24.959 01:35:37 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:24.959 01:35:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:14:24.959 01:35:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:24.959 01:35:37 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:24.959 01:35:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:24.959 01:35:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:24.959 01:35:37 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:24.959 01:35:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:24.959 01:35:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:24.959 01:35:37 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:24.959 01:35:37 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:24.959 01:35:37 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:24.959 01:35:37 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:25.217 01:35:38 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:25.217 01:35:38 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:25.217 01:35:38 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:25.217 01:35:38 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:25.217 01:35:38 -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:25.217 01:35:38 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:25.217 01:35:38 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:25.217 01:35:38 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:25.475 01:35:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.475 01:35:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.475 01:35:38 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:25.475 01:35:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.475 01:35:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.475 01:35:38 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:25.475 01:35:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.475 01:35:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.475 01:35:38 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:25.475 01:35:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.475 01:35:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.475 01:35:38 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 
nqn.2016-06.io.spdk:cnode1 null5 00:14:25.475 01:35:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.475 01:35:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.475 01:35:38 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:25.475 01:35:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.475 01:35:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.475 01:35:38 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:25.475 01:35:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.475 01:35:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.475 01:35:38 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:25.475 01:35:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.475 01:35:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.475 01:35:38 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:25.733 01:35:38 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:25.733 01:35:38 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:25.733 01:35:38 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:25.733 01:35:38 -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:25.733 01:35:38 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:25.733 01:35:38 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:25.733 01:35:38 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:25.733 01:35:38 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:25.991 01:35:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.991 01:35:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.991 01:35:38 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:25.991 01:35:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.991 01:35:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.991 01:35:38 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:25.991 01:35:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.991 01:35:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.991 01:35:38 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:25.991 01:35:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.991 01:35:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.991 01:35:38 
-- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:25.991 01:35:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.991 01:35:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.991 01:35:38 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:25.991 01:35:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.991 01:35:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.991 01:35:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.991 01:35:38 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:25.991 01:35:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.991 01:35:38 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:25.991 01:35:38 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:25.991 01:35:38 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:25.991 01:35:38 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:26.251 01:35:39 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:26.251 01:35:39 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:26.251 01:35:39 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:14:26.251 01:35:39 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:26.251 01:35:39 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:26.251 01:35:39 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:26.251 01:35:39 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:26.251 01:35:39 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:26.509 01:35:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:26.510 01:35:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:26.510 01:35:39 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:26.510 01:35:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:26.510 01:35:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:26.510 01:35:39 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:26.510 01:35:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:26.510 01:35:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:26.510 01:35:39 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:26.510 01:35:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:14:26.510 01:35:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:26.510 01:35:39 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:26.510 01:35:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:26.510 01:35:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:26.510 01:35:39 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:26.510 01:35:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:26.510 01:35:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:26.510 01:35:39 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:26.510 01:35:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:26.510 01:35:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:26.510 01:35:39 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:26.510 01:35:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:26.510 01:35:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:26.510 01:35:39 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:26.768 01:35:39 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:26.768 01:35:39 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:26.768 01:35:39 -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:26.768 01:35:39 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:26.768 01:35:39 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:26.768 01:35:39 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:26.768 01:35:39 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:26.768 01:35:39 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:27.026 01:35:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:27.026 01:35:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:27.026 01:35:39 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:27.026 01:35:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:27.026 01:35:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:27.026 01:35:39 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:27.026 01:35:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:27.026 01:35:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:27.026 01:35:39 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 
nqn.2016-06.io.spdk:cnode1 null0 00:14:27.026 01:35:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:27.027 01:35:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:27.027 01:35:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:27.027 01:35:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:27.027 01:35:39 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:27.027 01:35:39 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:27.027 01:35:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:27.027 01:35:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:27.027 01:35:39 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:27.027 01:35:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:27.027 01:35:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:27.027 01:35:39 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:27.027 01:35:39 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:27.027 01:35:39 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:27.027 01:35:39 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:27.284 01:35:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:27.284 01:35:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:27.285 01:35:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:27.285 01:35:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:27.285 01:35:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:27.285 01:35:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:27.285 01:35:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:27.285 01:35:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:27.543 01:35:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:27.543 01:35:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:27.543 01:35:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:27.543 01:35:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:27.543 01:35:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:27.543 01:35:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:27.543 01:35:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:27.543 01:35:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:27.543 01:35:40 -- target/ns_hotplug_stress.sh@17 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:27.543 01:35:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:27.543 01:35:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:27.543 01:35:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:27.543 01:35:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:27.543 01:35:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:27.543 01:35:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:27.543 01:35:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:27.543 01:35:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:27.543 01:35:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:27.543 01:35:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:27.543 01:35:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:27.543 01:35:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:27.543 01:35:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:27.543 01:35:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:27.543 01:35:40 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:27.801 01:35:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:27.801 01:35:40 -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:27.801 01:35:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:27.801 01:35:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:27.801 01:35:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:27.801 01:35:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:27.801 01:35:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:27.801 01:35:40 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:28.082 01:35:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:28.082 01:35:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:28.082 01:35:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:28.082 01:35:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:28.082 01:35:40 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:28.082 01:35:40 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:28.082 01:35:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:28.082 01:35:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:28.082 01:35:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:28.082 01:35:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:28.082 01:35:41 -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:28.082 01:35:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:28.082 01:35:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:28.082 01:35:41 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:28.082 01:35:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:28.082 01:35:41 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:28.082 01:35:41 -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:28.082 01:35:41 -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:14:28.082 01:35:41 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:28.082 01:35:41 -- nvmf/common.sh@116 -- # sync 00:14:28.082 01:35:41 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:28.082 01:35:41 -- nvmf/common.sh@119 -- # set +e 00:14:28.082 01:35:41 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:28.082 01:35:41 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:28.082 rmmod nvme_tcp 00:14:28.082 rmmod nvme_fabrics 00:14:28.082 rmmod nvme_keyring 00:14:28.082 01:35:41 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:28.082 01:35:41 -- nvmf/common.sh@123 -- # set -e 00:14:28.082 01:35:41 -- nvmf/common.sh@124 -- # return 0 00:14:28.082 01:35:41 -- nvmf/common.sh@477 -- # '[' -n 3732562 ']' 00:14:28.082 01:35:41 -- nvmf/common.sh@478 -- # killprocess 3732562 00:14:28.082 01:35:41 -- common/autotest_common.sh@926 -- # '[' -z 3732562 ']' 00:14:28.082 01:35:41 -- common/autotest_common.sh@930 -- # kill -0 3732562 00:14:28.082 01:35:41 -- common/autotest_common.sh@931 -- # uname 00:14:28.082 01:35:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:28.082 01:35:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3732562 00:14:28.082 01:35:41 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:14:28.082 01:35:41 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:14:28.082 01:35:41 -- common/autotest_common.sh@944 -- # echo 
'killing process with pid 3732562' 00:14:28.082 killing process with pid 3732562 00:14:28.082 01:35:41 -- common/autotest_common.sh@945 -- # kill 3732562 00:14:28.082 01:35:41 -- common/autotest_common.sh@950 -- # wait 3732562 00:14:28.346 01:35:41 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:28.346 01:35:41 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:28.346 01:35:41 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:28.346 01:35:41 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:28.346 01:35:41 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:28.346 01:35:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:28.346 01:35:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:28.346 01:35:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:30.878 01:35:43 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:14:30.878 00:14:30.878 real 0m46.881s 00:14:30.878 user 3m30.626s 00:14:30.878 sys 0m16.119s 00:14:30.878 01:35:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:30.878 01:35:43 -- common/autotest_common.sh@10 -- # set +x 00:14:30.878 ************************************ 00:14:30.878 END TEST nvmf_ns_hotplug_stress 00:14:30.878 ************************************ 00:14:30.878 01:35:43 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:30.878 01:35:43 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:30.878 01:35:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:30.878 01:35:43 -- common/autotest_common.sh@10 -- # set +x 00:14:30.878 ************************************ 00:14:30.878 START TEST nvmf_connect_stress 00:14:30.878 ************************************ 00:14:30.878 01:35:43 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh 
--transport=tcp 00:14:30.878 * Looking for test storage... 00:14:30.878 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:30.878 01:35:43 -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:30.878 01:35:43 -- nvmf/common.sh@7 -- # uname -s 00:14:30.878 01:35:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:30.878 01:35:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:30.878 01:35:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:30.878 01:35:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:30.878 01:35:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:30.878 01:35:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:30.878 01:35:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:30.878 01:35:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:30.878 01:35:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:30.878 01:35:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:30.878 01:35:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:30.878 01:35:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:30.878 01:35:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:30.878 01:35:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:30.878 01:35:43 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:30.878 01:35:43 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:30.878 01:35:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:30.878 01:35:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:30.878 01:35:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:30.878 01:35:43 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.878 01:35:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.878 01:35:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.878 01:35:43 -- paths/export.sh@5 -- # export PATH 00:14:30.878 01:35:43 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.878 01:35:43 -- nvmf/common.sh@46 -- # : 0 00:14:30.878 01:35:43 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:30.879 01:35:43 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:30.879 01:35:43 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:30.879 01:35:43 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:30.879 01:35:43 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:30.879 01:35:43 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:30.879 01:35:43 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:30.879 01:35:43 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:30.879 01:35:43 -- target/connect_stress.sh@12 -- # nvmftestinit 00:14:30.879 01:35:43 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:30.879 01:35:43 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:30.879 01:35:43 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:30.879 01:35:43 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:30.879 01:35:43 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:30.879 01:35:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:30.879 01:35:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:30.879 01:35:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:30.879 01:35:43 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:30.879 01:35:43 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:30.879 01:35:43 -- 
nvmf/common.sh@284 -- # xtrace_disable 00:14:30.879 01:35:43 -- common/autotest_common.sh@10 -- # set +x 00:14:32.782 01:35:45 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:32.782 01:35:45 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:32.782 01:35:45 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:32.782 01:35:45 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:32.782 01:35:45 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:32.782 01:35:45 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:32.782 01:35:45 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:32.782 01:35:45 -- nvmf/common.sh@294 -- # net_devs=() 00:14:32.782 01:35:45 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:32.782 01:35:45 -- nvmf/common.sh@295 -- # e810=() 00:14:32.782 01:35:45 -- nvmf/common.sh@295 -- # local -ga e810 00:14:32.782 01:35:45 -- nvmf/common.sh@296 -- # x722=() 00:14:32.782 01:35:45 -- nvmf/common.sh@296 -- # local -ga x722 00:14:32.782 01:35:45 -- nvmf/common.sh@297 -- # mlx=() 00:14:32.782 01:35:45 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:32.782 01:35:45 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:32.782 01:35:45 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:32.782 01:35:45 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:32.782 01:35:45 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:32.782 01:35:45 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:32.782 01:35:45 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:32.782 01:35:45 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:32.782 01:35:45 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:32.782 01:35:45 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:32.782 01:35:45 -- nvmf/common.sh@316 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:32.782 01:35:45 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:32.782 01:35:45 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:32.782 01:35:45 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:14:32.782 01:35:45 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:14:32.782 01:35:45 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:14:32.782 01:35:45 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:14:32.782 01:35:45 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:32.782 01:35:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:32.782 01:35:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:32.782 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:32.782 01:35:45 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:32.782 01:35:45 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:32.782 01:35:45 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:32.782 01:35:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:32.782 01:35:45 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:32.782 01:35:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:32.782 01:35:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:32.782 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:32.782 01:35:45 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:32.782 01:35:45 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:32.782 01:35:45 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:32.782 01:35:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:32.782 01:35:45 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:32.782 01:35:45 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:32.782 01:35:45 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:14:32.782 01:35:45 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:14:32.782 01:35:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 
00:14:32.782 01:35:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:32.782 01:35:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:32.782 01:35:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:32.782 01:35:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:32.782 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:32.782 01:35:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:32.782 01:35:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:32.782 01:35:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:32.782 01:35:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:32.782 01:35:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:32.782 01:35:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:32.782 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:32.782 01:35:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:32.782 01:35:45 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:32.782 01:35:45 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:32.782 01:35:45 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:32.782 01:35:45 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:14:32.782 01:35:45 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:14:32.782 01:35:45 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:32.782 01:35:45 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:32.782 01:35:45 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:32.782 01:35:45 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:14:32.782 01:35:45 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:32.782 01:35:45 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:32.782 01:35:45 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:14:32.782 01:35:45 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
00:14:32.782 01:35:45 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:32.782 01:35:45 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:14:32.782 01:35:45 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:14:32.782 01:35:45 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:14:32.782 01:35:45 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:32.782 01:35:45 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:32.782 01:35:45 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:32.782 01:35:45 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:14:32.782 01:35:45 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:32.782 01:35:45 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:32.782 01:35:45 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:32.782 01:35:45 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:14:32.782 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:32.782 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:14:32.782 00:14:32.782 --- 10.0.0.2 ping statistics --- 00:14:32.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:32.782 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:14:32.782 01:35:45 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:32.782 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:32.782 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.246 ms 00:14:32.782 00:14:32.782 --- 10.0.0.1 ping statistics --- 00:14:32.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:32.782 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:14:32.782 01:35:45 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:32.782 01:35:45 -- nvmf/common.sh@410 -- # return 0 00:14:32.782 01:35:45 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:32.782 01:35:45 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:32.782 01:35:45 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:32.782 01:35:45 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:32.782 01:35:45 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:32.782 01:35:45 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:32.782 01:35:45 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:32.782 01:35:45 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:14:32.782 01:35:45 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:32.782 01:35:45 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:32.782 01:35:45 -- common/autotest_common.sh@10 -- # set +x 00:14:32.782 01:35:45 -- nvmf/common.sh@469 -- # nvmfpid=3739974 00:14:32.782 01:35:45 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:32.782 01:35:45 -- nvmf/common.sh@470 -- # waitforlisten 3739974 00:14:32.782 01:35:45 -- common/autotest_common.sh@819 -- # '[' -z 3739974 ']' 00:14:32.782 01:35:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:32.782 01:35:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:32.782 01:35:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:32.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:32.782 01:35:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:32.782 01:35:45 -- common/autotest_common.sh@10 -- # set +x 00:14:32.782 [2024-07-23 01:35:45.705682] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:14:32.783 [2024-07-23 01:35:45.705749] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:32.783 EAL: No free 2048 kB hugepages reported on node 1 00:14:32.783 [2024-07-23 01:35:45.773104] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:32.783 [2024-07-23 01:35:45.861496] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:32.783 [2024-07-23 01:35:45.861683] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:32.783 [2024-07-23 01:35:45.861705] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:32.783 [2024-07-23 01:35:45.861721] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:32.783 [2024-07-23 01:35:45.861830] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:32.783 [2024-07-23 01:35:45.861927] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:32.783 [2024-07-23 01:35:45.861930] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:33.717 01:35:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:33.717 01:35:46 -- common/autotest_common.sh@852 -- # return 0 00:14:33.717 01:35:46 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:33.717 01:35:46 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:33.717 01:35:46 -- common/autotest_common.sh@10 -- # set +x 00:14:33.717 01:35:46 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:33.717 01:35:46 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:33.717 01:35:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:33.717 01:35:46 -- common/autotest_common.sh@10 -- # set +x 00:14:33.717 [2024-07-23 01:35:46.648576] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:33.717 01:35:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:33.717 01:35:46 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:33.717 01:35:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:33.717 01:35:46 -- common/autotest_common.sh@10 -- # set +x 00:14:33.717 01:35:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:33.717 01:35:46 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:33.717 01:35:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:33.717 01:35:46 -- common/autotest_common.sh@10 -- # set +x 00:14:33.717 [2024-07-23 01:35:46.677756] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:14:33.717 01:35:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:33.717 01:35:46 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:33.717 01:35:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:33.717 01:35:46 -- common/autotest_common.sh@10 -- # set +x 00:14:33.717 NULL1 00:14:33.717 01:35:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:33.717 01:35:46 -- target/connect_stress.sh@21 -- # PERF_PID=3740084 00:14:33.717 01:35:46 -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:14:33.718 01:35:46 -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:33.718 01:35:46 -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:33.718 01:35:46 -- target/connect_stress.sh@27 -- # seq 1 20 00:14:33.718 01:35:46 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:33.718 01:35:46 -- target/connect_stress.sh@28 -- # cat 00:14:33.718 01:35:46 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:33.718 01:35:46 -- target/connect_stress.sh@28 -- # cat 00:14:33.718 01:35:46 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:33.718 01:35:46 -- target/connect_stress.sh@28 -- # cat 00:14:33.718 01:35:46 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:33.718 01:35:46 -- target/connect_stress.sh@28 -- # cat 00:14:33.718 01:35:46 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:33.718 01:35:46 -- target/connect_stress.sh@28 -- # cat 00:14:33.718 01:35:46 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:33.718 01:35:46 -- target/connect_stress.sh@28 -- # cat 00:14:33.718 01:35:46 -- target/connect_stress.sh@27 -- 
# for i in $(seq 1 20) 00:14:33.718 01:35:46 -- target/connect_stress.sh@28 -- # cat 00:14:33.718 01:35:46 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:33.718 01:35:46 -- target/connect_stress.sh@28 -- # cat 00:14:33.718 01:35:46 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:33.718 01:35:46 -- target/connect_stress.sh@28 -- # cat 00:14:33.718 01:35:46 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:33.718 01:35:46 -- target/connect_stress.sh@28 -- # cat 00:14:33.718 01:35:46 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:33.718 01:35:46 -- target/connect_stress.sh@28 -- # cat 00:14:33.718 EAL: No free 2048 kB hugepages reported on node 1 00:14:33.718 01:35:46 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:33.718 01:35:46 -- target/connect_stress.sh@28 -- # cat 00:14:33.718 01:35:46 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:33.718 01:35:46 -- target/connect_stress.sh@28 -- # cat 00:14:33.718 01:35:46 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:33.718 01:35:46 -- target/connect_stress.sh@28 -- # cat 00:14:33.718 01:35:46 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:33.718 01:35:46 -- target/connect_stress.sh@28 -- # cat 00:14:33.718 01:35:46 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:33.718 01:35:46 -- target/connect_stress.sh@28 -- # cat 00:14:33.718 01:35:46 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:33.718 01:35:46 -- target/connect_stress.sh@28 -- # cat 00:14:33.718 01:35:46 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:33.718 01:35:46 -- target/connect_stress.sh@28 -- # cat 00:14:33.718 01:35:46 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:33.718 01:35:46 -- target/connect_stress.sh@28 -- # cat 00:14:33.718 01:35:46 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:33.718 01:35:46 -- target/connect_stress.sh@28 -- # cat 00:14:33.718 
01:35:46 -- target/connect_stress.sh@34 -- # kill -0 3740084 00:14:33.718 01:35:46 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:33.718 01:35:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:33.718 01:35:46 -- common/autotest_common.sh@10 -- # set +x 00:14:33.976 01:35:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:33.976 01:35:47 -- target/connect_stress.sh@34 -- # kill -0 3740084 00:14:33.976 01:35:47 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:33.976 01:35:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:33.976 01:35:47 -- common/autotest_common.sh@10 -- # set +x 00:14:34.542 01:35:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:34.542 01:35:47 -- target/connect_stress.sh@34 -- # kill -0 3740084 00:14:34.542 01:35:47 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:34.542 01:35:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:34.542 01:35:47 -- common/autotest_common.sh@10 -- # set +x 00:14:34.800 01:35:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:34.800 01:35:47 -- target/connect_stress.sh@34 -- # kill -0 3740084 00:14:34.800 01:35:47 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:34.800 01:35:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:34.800 01:35:47 -- common/autotest_common.sh@10 -- # set +x 00:14:35.060 01:35:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:35.060 01:35:48 -- target/connect_stress.sh@34 -- # kill -0 3740084 00:14:35.060 01:35:48 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:35.060 01:35:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:35.060 01:35:48 -- common/autotest_common.sh@10 -- # set +x 00:14:35.318 01:35:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:35.318 01:35:48 -- target/connect_stress.sh@34 -- # kill -0 3740084 00:14:35.318 01:35:48 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:35.318 01:35:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:35.318 01:35:48 -- 
common/autotest_common.sh@10 -- # set +x 00:14:35.576 01:35:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:35.576 01:35:48 -- target/connect_stress.sh@34 -- # kill -0 3740084 00:14:35.576 01:35:48 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:35.576 01:35:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:35.576 01:35:48 -- common/autotest_common.sh@10 -- # set +x 00:14:36.142 01:35:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:36.142 01:35:48 -- target/connect_stress.sh@34 -- # kill -0 3740084 00:14:36.142 01:35:48 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:36.142 01:35:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:36.142 01:35:48 -- common/autotest_common.sh@10 -- # set +x 00:14:36.399 01:35:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:36.399 01:35:49 -- target/connect_stress.sh@34 -- # kill -0 3740084 00:14:36.399 01:35:49 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:36.399 01:35:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:36.399 01:35:49 -- common/autotest_common.sh@10 -- # set +x 00:14:36.657 01:35:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:36.657 01:35:49 -- target/connect_stress.sh@34 -- # kill -0 3740084 00:14:36.657 01:35:49 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:36.657 01:35:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:36.657 01:35:49 -- common/autotest_common.sh@10 -- # set +x 00:14:36.915 01:35:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:36.915 01:35:49 -- target/connect_stress.sh@34 -- # kill -0 3740084 00:14:36.915 01:35:49 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:36.915 01:35:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:36.915 01:35:49 -- common/autotest_common.sh@10 -- # set +x 00:14:37.481 01:35:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:37.481 01:35:50 -- target/connect_stress.sh@34 -- # kill -0 3740084 00:14:37.481 01:35:50 -- 
target/connect_stress.sh@35 -- # rpc_cmd 00:14:37.481 01:35:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:37.481 01:35:50 -- common/autotest_common.sh@10 -- # set +x 00:14:37.739 01:35:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:37.739 01:35:50 -- target/connect_stress.sh@34 -- # kill -0 3740084 00:14:37.739 01:35:50 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:37.739 01:35:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:37.739 01:35:50 -- common/autotest_common.sh@10 -- # set +x 00:14:37.997 01:35:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:37.997 01:35:50 -- target/connect_stress.sh@34 -- # kill -0 3740084 00:14:37.997 01:35:50 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:37.997 01:35:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:37.997 01:35:50 -- common/autotest_common.sh@10 -- # set +x 00:14:38.255 01:35:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:38.255 01:35:51 -- target/connect_stress.sh@34 -- # kill -0 3740084 00:14:38.255 01:35:51 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:38.255 01:35:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:38.255 01:35:51 -- common/autotest_common.sh@10 -- # set +x 00:14:38.513 01:35:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:38.513 01:35:51 -- target/connect_stress.sh@34 -- # kill -0 3740084 00:14:38.513 01:35:51 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:38.513 01:35:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:38.513 01:35:51 -- common/autotest_common.sh@10 -- # set +x 00:14:39.078 01:35:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:39.078 01:35:51 -- target/connect_stress.sh@34 -- # kill -0 3740084 00:14:39.078 01:35:51 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:39.078 01:35:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:39.078 01:35:51 -- common/autotest_common.sh@10 -- # set +x 00:14:39.336 01:35:52 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:39.336 01:35:52 -- target/connect_stress.sh@34 -- # kill -0 3740084 00:14:39.336 01:35:52 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:39.336 01:35:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:39.336 01:35:52 -- common/autotest_common.sh@10 -- # set +x 00:14:39.595 01:35:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:39.595 01:35:52 -- target/connect_stress.sh@34 -- # kill -0 3740084 00:14:39.595 01:35:52 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:39.595 01:35:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:39.595 01:35:52 -- common/autotest_common.sh@10 -- # set +x 00:14:39.852 01:35:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:39.852 01:35:52 -- target/connect_stress.sh@34 -- # kill -0 3740084 00:14:39.852 01:35:52 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:39.852 01:35:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:39.852 01:35:52 -- common/autotest_common.sh@10 -- # set +x 00:14:40.110 01:35:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:40.110 01:35:53 -- target/connect_stress.sh@34 -- # kill -0 3740084 00:14:40.110 01:35:53 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:40.110 01:35:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:40.110 01:35:53 -- common/autotest_common.sh@10 -- # set +x 00:14:40.676 01:35:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:40.676 01:35:53 -- target/connect_stress.sh@34 -- # kill -0 3740084 00:14:40.676 01:35:53 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:40.676 01:35:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:40.676 01:35:53 -- common/autotest_common.sh@10 -- # set +x 00:14:40.933 01:35:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:40.933 01:35:53 -- target/connect_stress.sh@34 -- # kill -0 3740084 00:14:40.933 01:35:53 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:40.933 01:35:53 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:14:40.933 01:35:53 -- common/autotest_common.sh@10 -- # set +x 00:14:41.191 01:35:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:41.191 01:35:54 -- target/connect_stress.sh@34 -- # kill -0 3740084 00:14:41.191 01:35:54 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:41.191 01:35:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:41.191 01:35:54 -- common/autotest_common.sh@10 -- # set +x 00:14:41.449 01:35:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:41.449 01:35:54 -- target/connect_stress.sh@34 -- # kill -0 3740084 00:14:41.449 01:35:54 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:41.449 01:35:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:41.449 01:35:54 -- common/autotest_common.sh@10 -- # set +x 00:14:41.707 01:35:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:41.707 01:35:54 -- target/connect_stress.sh@34 -- # kill -0 3740084 00:14:41.707 01:35:54 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:41.707 01:35:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:41.707 01:35:54 -- common/autotest_common.sh@10 -- # set +x 00:14:42.271 01:35:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:42.271 01:35:55 -- target/connect_stress.sh@34 -- # kill -0 3740084 00:14:42.271 01:35:55 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:42.271 01:35:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:42.271 01:35:55 -- common/autotest_common.sh@10 -- # set +x 00:14:42.528 01:35:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:42.528 01:35:55 -- target/connect_stress.sh@34 -- # kill -0 3740084 00:14:42.528 01:35:55 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:42.528 01:35:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:42.528 01:35:55 -- common/autotest_common.sh@10 -- # set +x 00:14:42.786 01:35:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:42.786 01:35:55 -- 
target/connect_stress.sh@34 -- # kill -0 3740084 00:14:42.786 01:35:55 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:42.786 01:35:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:42.786 01:35:55 -- common/autotest_common.sh@10 -- # set +x 00:14:43.044 01:35:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:43.044 01:35:56 -- target/connect_stress.sh@34 -- # kill -0 3740084 00:14:43.044 01:35:56 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:43.044 01:35:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:43.044 01:35:56 -- common/autotest_common.sh@10 -- # set +x 00:14:43.302 01:35:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:43.302 01:35:56 -- target/connect_stress.sh@34 -- # kill -0 3740084 00:14:43.302 01:35:56 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:43.302 01:35:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:43.302 01:35:56 -- common/autotest_common.sh@10 -- # set +x 00:14:43.867 01:35:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:43.867 01:35:56 -- target/connect_stress.sh@34 -- # kill -0 3740084 00:14:43.867 01:35:56 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:43.867 01:35:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:43.867 01:35:56 -- common/autotest_common.sh@10 -- # set +x 00:14:43.867 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:44.154 01:35:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:44.154 01:35:57 -- target/connect_stress.sh@34 -- # kill -0 3740084 00:14:44.154 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3740084) - No such process 00:14:44.154 01:35:57 -- target/connect_stress.sh@38 -- # wait 3740084 00:14:44.154 01:35:57 -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:44.154 01:35:57 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 
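[editor's note] The loop that just ended above — connect_stress.sh probing PID 3740084 with `kill -0` on every pass, issuing rpc_cmd batches until the stress process is gone, then `wait`-ing for it — is a standard poll-until-exit pattern. A minimal standalone sketch (the `sleep` workload and 0.2s interval are placeholders, not taken from connect_stress.sh):

```shell
#!/usr/bin/env bash
# Poll a background process with `kill -0` (signal 0 checks existence only,
# sends nothing) until it exits, as connect_stress.sh line 34 does in a loop.
sleep 1 &                 # stand-in for the connect_stress workload
PERF_PID=$!

while kill -0 "$PERF_PID" 2>/dev/null; do
    # The real test fires an rpc_cmd batch here on every iteration.
    sleep 0.2
done

wait "$PERF_PID"          # reap the child and pick up its exit status
echo "process $PERF_PID has exited"
```

Once the PID is gone, `kill -0` fails with "No such process", exactly the message logged above when the stress run finishes.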
00:14:44.154 01:35:57 -- target/connect_stress.sh@43 -- # nvmftestfini 00:14:44.154 01:35:57 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:44.154 01:35:57 -- nvmf/common.sh@116 -- # sync 00:14:44.154 01:35:57 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:44.154 01:35:57 -- nvmf/common.sh@119 -- # set +e 00:14:44.154 01:35:57 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:44.154 01:35:57 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:44.154 rmmod nvme_tcp 00:14:44.154 rmmod nvme_fabrics 00:14:44.154 rmmod nvme_keyring 00:14:44.154 01:35:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:44.154 01:35:57 -- nvmf/common.sh@123 -- # set -e 00:14:44.154 01:35:57 -- nvmf/common.sh@124 -- # return 0 00:14:44.154 01:35:57 -- nvmf/common.sh@477 -- # '[' -n 3739974 ']' 00:14:44.154 01:35:57 -- nvmf/common.sh@478 -- # killprocess 3739974 00:14:44.154 01:35:57 -- common/autotest_common.sh@926 -- # '[' -z 3739974 ']' 00:14:44.154 01:35:57 -- common/autotest_common.sh@930 -- # kill -0 3739974 00:14:44.154 01:35:57 -- common/autotest_common.sh@931 -- # uname 00:14:44.154 01:35:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:44.154 01:35:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3739974 00:14:44.154 01:35:57 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:14:44.154 01:35:57 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:14:44.154 01:35:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3739974' 00:14:44.154 killing process with pid 3739974 00:14:44.154 01:35:57 -- common/autotest_common.sh@945 -- # kill 3739974 00:14:44.154 01:35:57 -- common/autotest_common.sh@950 -- # wait 3739974 00:14:44.417 01:35:57 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:44.417 01:35:57 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:44.417 01:35:57 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:44.417 01:35:57 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s ]]
00:14:44.417 01:35:57 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:14:44.417 01:35:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:14:44.417 01:35:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:14:44.417 01:35:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:14:46.332 01:35:59 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1
00:14:46.332
00:14:46.332 real 0m15.986s
00:14:46.332 user 0m40.497s
00:14:46.332 sys 0m5.887s
00:14:46.332 01:35:59 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:14:46.332 01:35:59 -- common/autotest_common.sh@10 -- # set +x
00:14:46.332 ************************************
00:14:46.332 END TEST nvmf_connect_stress
00:14:46.332 ************************************
00:14:46.332 01:35:59 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp
00:14:46.332 01:35:59 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']'
00:14:46.332 01:35:59 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:14:46.332 01:35:59 -- common/autotest_common.sh@10 -- # set +x
00:14:46.332 ************************************
00:14:46.332 START TEST nvmf_fused_ordering
00:14:46.332 ************************************
00:14:46.332 01:35:59 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp
00:14:46.590 * Looking for test storage...
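[editor's note] The nvmf_tcp_init sequence logged at the start of each test (flush both cvl_0_* interfaces, create the cvl_0_0_ns_spdk namespace, move the target NIC into it, address both sides, then open TCP/4420) splits target and initiator into separate network stacks so NVMe/TCP traffic really crosses the wire. A dry-run sketch of those steps — `run=echo` prints the commands instead of executing them, since the real ones need root and the physical cvl_0_* devices:

```shell
#!/usr/bin/env bash
# Dry-run of the namespace plumbing nvmf_tcp_init performs
# (nvmf/common.sh @243-@263 in the log above).
run=echo                      # set run="" to actually execute (root required)
ns=cvl_0_0_ns_spdk
$run ip -4 addr flush cvl_0_0
$run ip -4 addr flush cvl_0_1
$run ip netns add "$ns"
$run ip link set cvl_0_0 netns "$ns"                           # target NIC
$run ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side
$run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
$run ip link set cvl_0_1 up
$run ip netns exec "$ns" ip link set cvl_0_0 up
$run ip netns exec "$ns" ip link set lo up
$run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
```

The two `ping -c 1` probes in the log then verify that 10.0.0.1 and 10.0.0.2 can reach each other across the namespace boundary before nvmf_tgt is started inside the namespace.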
00:14:46.590 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:46.590 01:35:59 -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:46.591 01:35:59 -- nvmf/common.sh@7 -- # uname -s 00:14:46.591 01:35:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:46.591 01:35:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:46.591 01:35:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:46.591 01:35:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:46.591 01:35:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:46.591 01:35:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:46.591 01:35:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:46.591 01:35:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:46.591 01:35:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:46.591 01:35:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:46.591 01:35:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:46.591 01:35:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:46.591 01:35:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:46.591 01:35:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:46.591 01:35:59 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:46.591 01:35:59 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:46.591 01:35:59 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:46.591 01:35:59 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:46.591 01:35:59 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:46.591 01:35:59 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.591 01:35:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.591 01:35:59 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.591 01:35:59 -- paths/export.sh@5 -- # export PATH 00:14:46.591 01:35:59 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.591 01:35:59 -- nvmf/common.sh@46 -- # : 0 00:14:46.591 01:35:59 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:46.591 01:35:59 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:46.591 01:35:59 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:46.591 01:35:59 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:46.591 01:35:59 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:46.591 01:35:59 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:46.591 01:35:59 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:46.591 01:35:59 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:46.591 01:35:59 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:14:46.591 01:35:59 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:46.591 01:35:59 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:46.591 01:35:59 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:46.591 01:35:59 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:46.591 01:35:59 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:46.591 01:35:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:46.591 01:35:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:46.591 01:35:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:46.591 01:35:59 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:46.591 01:35:59 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:46.591 01:35:59 -- 
nvmf/common.sh@284 -- # xtrace_disable 00:14:46.591 01:35:59 -- common/autotest_common.sh@10 -- # set +x 00:14:48.492 01:36:01 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:48.492 01:36:01 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:48.492 01:36:01 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:48.492 01:36:01 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:48.492 01:36:01 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:48.492 01:36:01 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:48.492 01:36:01 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:48.492 01:36:01 -- nvmf/common.sh@294 -- # net_devs=() 00:14:48.492 01:36:01 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:48.492 01:36:01 -- nvmf/common.sh@295 -- # e810=() 00:14:48.492 01:36:01 -- nvmf/common.sh@295 -- # local -ga e810 00:14:48.492 01:36:01 -- nvmf/common.sh@296 -- # x722=() 00:14:48.492 01:36:01 -- nvmf/common.sh@296 -- # local -ga x722 00:14:48.492 01:36:01 -- nvmf/common.sh@297 -- # mlx=() 00:14:48.493 01:36:01 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:48.493 01:36:01 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:48.493 01:36:01 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:48.493 01:36:01 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:48.493 01:36:01 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:48.493 01:36:01 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:48.493 01:36:01 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:48.493 01:36:01 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:48.493 01:36:01 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:48.493 01:36:01 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:48.493 01:36:01 -- nvmf/common.sh@316 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:48.493 01:36:01 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:48.493 01:36:01 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:48.493 01:36:01 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:14:48.493 01:36:01 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:14:48.493 01:36:01 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:14:48.493 01:36:01 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:14:48.493 01:36:01 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:48.493 01:36:01 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:48.493 01:36:01 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:48.493 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:48.493 01:36:01 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:48.493 01:36:01 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:48.493 01:36:01 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:48.493 01:36:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:48.493 01:36:01 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:48.493 01:36:01 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:48.493 01:36:01 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:48.493 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:48.493 01:36:01 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:48.493 01:36:01 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:48.493 01:36:01 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:48.493 01:36:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:48.493 01:36:01 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:48.493 01:36:01 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:48.493 01:36:01 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:14:48.493 01:36:01 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:14:48.493 01:36:01 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 
00:14:48.493 01:36:01 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:48.493 01:36:01 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:48.493 01:36:01 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:48.493 01:36:01 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:48.493 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:48.493 01:36:01 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:48.493 01:36:01 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:48.493 01:36:01 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:48.493 01:36:01 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:48.493 01:36:01 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:48.493 01:36:01 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:48.493 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:48.493 01:36:01 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:48.493 01:36:01 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:48.493 01:36:01 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:48.493 01:36:01 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:48.493 01:36:01 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:14:48.493 01:36:01 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:14:48.493 01:36:01 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:48.493 01:36:01 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:48.493 01:36:01 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:48.493 01:36:01 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:14:48.493 01:36:01 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:48.493 01:36:01 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:48.493 01:36:01 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:14:48.493 01:36:01 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
00:14:48.493 01:36:01 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:48.493 01:36:01 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:14:48.493 01:36:01 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:14:48.493 01:36:01 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:14:48.493 01:36:01 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:48.493 01:36:01 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:48.493 01:36:01 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:48.493 01:36:01 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:14:48.493 01:36:01 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:48.493 01:36:01 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:48.493 01:36:01 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:48.493 01:36:01 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:14:48.493 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:48.493 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:14:48.493 00:14:48.493 --- 10.0.0.2 ping statistics --- 00:14:48.493 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:48.493 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:14:48.493 01:36:01 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:48.493 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:48.493 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:14:48.493 00:14:48.493 --- 10.0.0.1 ping statistics --- 00:14:48.493 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:48.493 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:14:48.493 01:36:01 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:48.493 01:36:01 -- nvmf/common.sh@410 -- # return 0 00:14:48.493 01:36:01 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:48.493 01:36:01 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:48.493 01:36:01 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:48.493 01:36:01 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:48.493 01:36:01 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:48.493 01:36:01 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:48.493 01:36:01 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:48.493 01:36:01 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:48.493 01:36:01 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:48.493 01:36:01 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:48.493 01:36:01 -- common/autotest_common.sh@10 -- # set +x 00:14:48.493 01:36:01 -- nvmf/common.sh@469 -- # nvmfpid=3743408 00:14:48.493 01:36:01 -- nvmf/common.sh@470 -- # waitforlisten 3743408 00:14:48.493 01:36:01 -- common/autotest_common.sh@819 -- # '[' -z 3743408 ']' 00:14:48.493 01:36:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:48.493 01:36:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:48.493 01:36:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:48.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
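The interface and namespace plumbing replayed in the xtrace above (the `nvmf_tcp_init` steps from nvmf/common.sh) can be summarized as a standalone sketch. The interface names (cvl_0_0/cvl_0_1), the 10.0.0.0/24 addresses, and port 4420 are copied from the log; the `run` wrapper is a hypothetical helper that only echoes each command, since actually executing them requires root and the physical E810 NICs this CI node has.

```shell
# Echo-only sketch of the target-namespace setup traced in the log above.
# run() prints each step instead of executing it (root + real NICs required).
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk          # network namespace holding the target side
TGT_IF=cvl_0_0              # interface moved into the namespace (target)
INI_IF=cvl_0_1              # interface left in the root namespace (initiator)

run ip -4 addr flush "$TGT_IF"
run ip -4 addr flush "$INI_IF"
run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
# Verify both directions before starting the target, as the log does.
run ping -c 1 10.0.0.2
run ip netns exec "$NS" ping -c 1 10.0.0.1
```

Moving the target NIC into its own namespace lets target and initiator share one host while still traversing a real TCP path between the two E810 ports.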
00:14:48.493 01:36:01 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:48.493 01:36:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:48.493 01:36:01 -- common/autotest_common.sh@10 -- # set +x 00:14:48.751 [2024-07-23 01:36:01.595796] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:14:48.751 [2024-07-23 01:36:01.595892] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:48.751 EAL: No free 2048 kB hugepages reported on node 1 00:14:48.751 [2024-07-23 01:36:01.668497] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:48.751 [2024-07-23 01:36:01.757955] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:48.751 [2024-07-23 01:36:01.758139] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:48.751 [2024-07-23 01:36:01.758159] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:48.751 [2024-07-23 01:36:01.758174] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
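The `nvmf_tgt` launch line above can be annotated for readability. The flag meanings below follow SPDK's common application options and are a hedged reading of the command, not harness output; `launch` is a hypothetical echo-only wrapper.

```shell
# Annotated dry-run of the nvmf_tgt launch traced above.
#   -i 0      shared-memory instance ID (matches "spdk0" file prefix in the EAL args)
#   -e 0xFFFF tracepoint group mask (matches the "Tracepoint Group Mask 0xFFFF" notice)
#   -m 0x2    reactor core mask: core 1 only (matches "Reactor started on core 1")
launch() { echo "+ $*"; }
launch ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x2
```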
00:14:48.752 [2024-07-23 01:36:01.758214] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:49.683 01:36:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:49.683 01:36:02 -- common/autotest_common.sh@852 -- # return 0 00:14:49.683 01:36:02 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:49.683 01:36:02 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:49.683 01:36:02 -- common/autotest_common.sh@10 -- # set +x 00:14:49.683 01:36:02 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:49.683 01:36:02 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:49.683 01:36:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:49.683 01:36:02 -- common/autotest_common.sh@10 -- # set +x 00:14:49.683 [2024-07-23 01:36:02.557916] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:49.683 01:36:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:49.683 01:36:02 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:49.683 01:36:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:49.683 01:36:02 -- common/autotest_common.sh@10 -- # set +x 00:14:49.683 01:36:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:49.683 01:36:02 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:49.683 01:36:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:49.683 01:36:02 -- common/autotest_common.sh@10 -- # set +x 00:14:49.683 [2024-07-23 01:36:02.574086] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:49.683 01:36:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:49.683 01:36:02 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:49.683 01:36:02 
-- common/autotest_common.sh@551 -- # xtrace_disable 00:14:49.683 01:36:02 -- common/autotest_common.sh@10 -- # set +x 00:14:49.683 NULL1 00:14:49.683 01:36:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:49.683 01:36:02 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:49.683 01:36:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:49.683 01:36:02 -- common/autotest_common.sh@10 -- # set +x 00:14:49.683 01:36:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:49.683 01:36:02 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:49.683 01:36:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:49.683 01:36:02 -- common/autotest_common.sh@10 -- # set +x 00:14:49.683 01:36:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:49.683 01:36:02 -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:49.683 [2024-07-23 01:36:02.617959] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:14:49.683 [2024-07-23 01:36:02.618006] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3743587 ] 00:14:49.683 EAL: No free 2048 kB hugepages reported on node 1 00:14:50.248 Attached to nqn.2016-06.io.spdk:cnode1 00:14:50.248 Namespace ID: 1 size: 1GB 00:14:50.248 fused_ordering(0) 
[fused_ordering(1) through fused_ordering(714) elided: identical sequential per-iteration lines; timestamps advance 00:14:50.248 -> 00:14:50.813 -> 00:14:51.380 -> 00:14:52.314] 
00:14:52.315 fused_ordering(715) 
00:14:52.315 fused_ordering(716) 00:14:52.315 fused_ordering(717) 00:14:52.315 fused_ordering(718) 00:14:52.315 fused_ordering(719) 00:14:52.315 fused_ordering(720) 00:14:52.315 fused_ordering(721) 00:14:52.315 fused_ordering(722) 00:14:52.315 fused_ordering(723) 00:14:52.315 fused_ordering(724) 00:14:52.315 fused_ordering(725) 00:14:52.315 fused_ordering(726) 00:14:52.315 fused_ordering(727) 00:14:52.315 fused_ordering(728) 00:14:52.315 fused_ordering(729) 00:14:52.315 fused_ordering(730) 00:14:52.315 fused_ordering(731) 00:14:52.315 fused_ordering(732) 00:14:52.315 fused_ordering(733) 00:14:52.315 fused_ordering(734) 00:14:52.315 fused_ordering(735) 00:14:52.315 fused_ordering(736) 00:14:52.315 fused_ordering(737) 00:14:52.315 fused_ordering(738) 00:14:52.315 fused_ordering(739) 00:14:52.315 fused_ordering(740) 00:14:52.315 fused_ordering(741) 00:14:52.315 fused_ordering(742) 00:14:52.315 fused_ordering(743) 00:14:52.315 fused_ordering(744) 00:14:52.315 fused_ordering(745) 00:14:52.315 fused_ordering(746) 00:14:52.315 fused_ordering(747) 00:14:52.315 fused_ordering(748) 00:14:52.315 fused_ordering(749) 00:14:52.315 fused_ordering(750) 00:14:52.315 fused_ordering(751) 00:14:52.315 fused_ordering(752) 00:14:52.315 fused_ordering(753) 00:14:52.315 fused_ordering(754) 00:14:52.315 fused_ordering(755) 00:14:52.315 fused_ordering(756) 00:14:52.315 fused_ordering(757) 00:14:52.315 fused_ordering(758) 00:14:52.315 fused_ordering(759) 00:14:52.315 fused_ordering(760) 00:14:52.315 fused_ordering(761) 00:14:52.315 fused_ordering(762) 00:14:52.315 fused_ordering(763) 00:14:52.315 fused_ordering(764) 00:14:52.315 fused_ordering(765) 00:14:52.315 fused_ordering(766) 00:14:52.315 fused_ordering(767) 00:14:52.315 fused_ordering(768) 00:14:52.315 fused_ordering(769) 00:14:52.315 fused_ordering(770) 00:14:52.315 fused_ordering(771) 00:14:52.315 fused_ordering(772) 00:14:52.315 fused_ordering(773) 00:14:52.315 fused_ordering(774) 00:14:52.315 fused_ordering(775) 00:14:52.315 
fused_ordering(776) 00:14:52.315 fused_ordering(777) 00:14:52.315 fused_ordering(778) 00:14:52.315 fused_ordering(779) 00:14:52.315 fused_ordering(780) 00:14:52.315 fused_ordering(781) 00:14:52.315 fused_ordering(782) 00:14:52.315 fused_ordering(783) 00:14:52.315 fused_ordering(784) 00:14:52.315 fused_ordering(785) 00:14:52.315 fused_ordering(786) 00:14:52.315 fused_ordering(787) 00:14:52.315 fused_ordering(788) 00:14:52.315 fused_ordering(789) 00:14:52.315 fused_ordering(790) 00:14:52.315 fused_ordering(791) 00:14:52.315 fused_ordering(792) 00:14:52.315 fused_ordering(793) 00:14:52.315 fused_ordering(794) 00:14:52.315 fused_ordering(795) 00:14:52.315 fused_ordering(796) 00:14:52.315 fused_ordering(797) 00:14:52.315 fused_ordering(798) 00:14:52.315 fused_ordering(799) 00:14:52.315 fused_ordering(800) 00:14:52.315 fused_ordering(801) 00:14:52.315 fused_ordering(802) 00:14:52.315 fused_ordering(803) 00:14:52.315 fused_ordering(804) 00:14:52.315 fused_ordering(805) 00:14:52.315 fused_ordering(806) 00:14:52.315 fused_ordering(807) 00:14:52.315 fused_ordering(808) 00:14:52.315 fused_ordering(809) 00:14:52.315 fused_ordering(810) 00:14:52.315 fused_ordering(811) 00:14:52.315 fused_ordering(812) 00:14:52.315 fused_ordering(813) 00:14:52.315 fused_ordering(814) 00:14:52.315 fused_ordering(815) 00:14:52.315 fused_ordering(816) 00:14:52.315 fused_ordering(817) 00:14:52.315 fused_ordering(818) 00:14:52.315 fused_ordering(819) 00:14:52.315 fused_ordering(820) 00:14:52.881 fused_ordering(821) 00:14:52.881 fused_ordering(822) 00:14:52.881 fused_ordering(823) 00:14:52.881 fused_ordering(824) 00:14:52.881 fused_ordering(825) 00:14:52.881 fused_ordering(826) 00:14:52.881 fused_ordering(827) 00:14:52.881 fused_ordering(828) 00:14:52.881 fused_ordering(829) 00:14:52.881 fused_ordering(830) 00:14:52.881 fused_ordering(831) 00:14:52.881 fused_ordering(832) 00:14:52.881 fused_ordering(833) 00:14:52.881 fused_ordering(834) 00:14:52.881 fused_ordering(835) 00:14:52.881 fused_ordering(836) 
00:14:52.881 fused_ordering(837) 00:14:52.881 fused_ordering(838) 00:14:52.881 fused_ordering(839) 00:14:52.881 fused_ordering(840) 00:14:52.881 fused_ordering(841) 00:14:52.881 fused_ordering(842) 00:14:52.881 fused_ordering(843) 00:14:52.881 fused_ordering(844) 00:14:52.881 fused_ordering(845) 00:14:52.881 fused_ordering(846) 00:14:52.881 fused_ordering(847) 00:14:52.881 fused_ordering(848) 00:14:52.881 fused_ordering(849) 00:14:52.881 fused_ordering(850) 00:14:52.881 fused_ordering(851) 00:14:52.881 fused_ordering(852) 00:14:52.881 fused_ordering(853) 00:14:52.881 fused_ordering(854) 00:14:52.881 fused_ordering(855) 00:14:52.881 fused_ordering(856) 00:14:52.881 fused_ordering(857) 00:14:52.881 fused_ordering(858) 00:14:52.881 fused_ordering(859) 00:14:52.881 fused_ordering(860) 00:14:52.881 fused_ordering(861) 00:14:52.881 fused_ordering(862) 00:14:52.881 fused_ordering(863) 00:14:52.881 fused_ordering(864) 00:14:52.881 fused_ordering(865) 00:14:52.881 fused_ordering(866) 00:14:52.881 fused_ordering(867) 00:14:52.881 fused_ordering(868) 00:14:52.881 fused_ordering(869) 00:14:52.881 fused_ordering(870) 00:14:52.881 fused_ordering(871) 00:14:52.881 fused_ordering(872) 00:14:52.882 fused_ordering(873) 00:14:52.882 fused_ordering(874) 00:14:52.882 fused_ordering(875) 00:14:52.882 fused_ordering(876) 00:14:52.882 fused_ordering(877) 00:14:52.882 fused_ordering(878) 00:14:52.882 fused_ordering(879) 00:14:52.882 fused_ordering(880) 00:14:52.882 fused_ordering(881) 00:14:52.882 fused_ordering(882) 00:14:52.882 fused_ordering(883) 00:14:52.882 fused_ordering(884) 00:14:52.882 fused_ordering(885) 00:14:52.882 fused_ordering(886) 00:14:52.882 fused_ordering(887) 00:14:52.882 fused_ordering(888) 00:14:52.882 fused_ordering(889) 00:14:52.882 fused_ordering(890) 00:14:52.882 fused_ordering(891) 00:14:52.882 fused_ordering(892) 00:14:52.882 fused_ordering(893) 00:14:52.882 fused_ordering(894) 00:14:52.882 fused_ordering(895) 00:14:52.882 fused_ordering(896) 00:14:52.882 
fused_ordering(897) 00:14:52.882 fused_ordering(898) 00:14:52.882 fused_ordering(899) 00:14:52.882 fused_ordering(900) 00:14:52.882 fused_ordering(901) 00:14:52.882 fused_ordering(902) 00:14:52.882 fused_ordering(903) 00:14:52.882 fused_ordering(904) 00:14:52.882 fused_ordering(905) 00:14:52.882 fused_ordering(906) 00:14:52.882 fused_ordering(907) 00:14:52.882 fused_ordering(908) 00:14:52.882 fused_ordering(909) 00:14:52.882 fused_ordering(910) 00:14:52.882 fused_ordering(911) 00:14:52.882 fused_ordering(912) 00:14:52.882 fused_ordering(913) 00:14:52.882 fused_ordering(914) 00:14:52.882 fused_ordering(915) 00:14:52.882 fused_ordering(916) 00:14:52.882 fused_ordering(917) 00:14:52.882 fused_ordering(918) 00:14:52.882 fused_ordering(919) 00:14:52.882 fused_ordering(920) 00:14:52.882 fused_ordering(921) 00:14:52.882 fused_ordering(922) 00:14:52.882 fused_ordering(923) 00:14:52.882 fused_ordering(924) 00:14:52.882 fused_ordering(925) 00:14:52.882 fused_ordering(926) 00:14:52.882 fused_ordering(927) 00:14:52.882 fused_ordering(928) 00:14:52.882 fused_ordering(929) 00:14:52.882 fused_ordering(930) 00:14:52.882 fused_ordering(931) 00:14:52.882 fused_ordering(932) 00:14:52.882 fused_ordering(933) 00:14:52.882 fused_ordering(934) 00:14:52.882 fused_ordering(935) 00:14:52.882 fused_ordering(936) 00:14:52.882 fused_ordering(937) 00:14:52.882 fused_ordering(938) 00:14:52.882 fused_ordering(939) 00:14:52.882 fused_ordering(940) 00:14:52.882 fused_ordering(941) 00:14:52.882 fused_ordering(942) 00:14:52.882 fused_ordering(943) 00:14:52.882 fused_ordering(944) 00:14:52.882 fused_ordering(945) 00:14:52.882 fused_ordering(946) 00:14:52.882 fused_ordering(947) 00:14:52.882 fused_ordering(948) 00:14:52.882 fused_ordering(949) 00:14:52.882 fused_ordering(950) 00:14:52.882 fused_ordering(951) 00:14:52.882 fused_ordering(952) 00:14:52.882 fused_ordering(953) 00:14:52.882 fused_ordering(954) 00:14:52.882 fused_ordering(955) 00:14:52.882 fused_ordering(956) 00:14:52.882 fused_ordering(957) 
00:14:52.882 fused_ordering(958) 00:14:52.882 fused_ordering(959) 00:14:52.882 fused_ordering(960) 00:14:52.882 fused_ordering(961) 00:14:52.882 fused_ordering(962) 00:14:52.882 fused_ordering(963) 00:14:52.882 fused_ordering(964) 00:14:52.882 fused_ordering(965) 00:14:52.882 fused_ordering(966) 00:14:52.882 fused_ordering(967) 00:14:52.882 fused_ordering(968) 00:14:52.882 fused_ordering(969) 00:14:52.882 fused_ordering(970) 00:14:52.882 fused_ordering(971) 00:14:52.882 fused_ordering(972) 00:14:52.882 fused_ordering(973) 00:14:52.882 fused_ordering(974) 00:14:52.882 fused_ordering(975) 00:14:52.882 fused_ordering(976) 00:14:52.882 fused_ordering(977) 00:14:52.882 fused_ordering(978) 00:14:52.882 fused_ordering(979) 00:14:52.882 fused_ordering(980) 00:14:52.882 fused_ordering(981) 00:14:52.882 fused_ordering(982) 00:14:52.882 fused_ordering(983) 00:14:52.882 fused_ordering(984) 00:14:52.882 fused_ordering(985) 00:14:52.882 fused_ordering(986) 00:14:52.882 fused_ordering(987) 00:14:52.882 fused_ordering(988) 00:14:52.882 fused_ordering(989) 00:14:52.882 fused_ordering(990) 00:14:52.882 fused_ordering(991) 00:14:52.882 fused_ordering(992) 00:14:52.882 fused_ordering(993) 00:14:52.882 fused_ordering(994) 00:14:52.882 fused_ordering(995) 00:14:52.882 fused_ordering(996) 00:14:52.882 fused_ordering(997) 00:14:52.882 fused_ordering(998) 00:14:52.882 fused_ordering(999) 00:14:52.882 fused_ordering(1000) 00:14:52.882 fused_ordering(1001) 00:14:52.882 fused_ordering(1002) 00:14:52.882 fused_ordering(1003) 00:14:52.882 fused_ordering(1004) 00:14:52.882 fused_ordering(1005) 00:14:52.882 fused_ordering(1006) 00:14:52.882 fused_ordering(1007) 00:14:52.882 fused_ordering(1008) 00:14:52.882 fused_ordering(1009) 00:14:52.882 fused_ordering(1010) 00:14:52.882 fused_ordering(1011) 00:14:52.882 fused_ordering(1012) 00:14:52.882 fused_ordering(1013) 00:14:52.882 fused_ordering(1014) 00:14:52.882 fused_ordering(1015) 00:14:52.882 fused_ordering(1016) 00:14:52.882 fused_ordering(1017) 
00:14:52.882 fused_ordering(1018) 00:14:52.882 fused_ordering(1019) 00:14:52.882 fused_ordering(1020) 00:14:52.882 fused_ordering(1021) 00:14:52.882 fused_ordering(1022) 00:14:52.882 fused_ordering(1023) 00:14:52.882 01:36:05 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:14:52.882 01:36:05 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:14:52.882 01:36:05 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:52.882 01:36:05 -- nvmf/common.sh@116 -- # sync 00:14:52.882 01:36:05 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:52.882 01:36:05 -- nvmf/common.sh@119 -- # set +e 00:14:52.882 01:36:05 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:52.882 01:36:05 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:52.882 rmmod nvme_tcp 00:14:52.882 rmmod nvme_fabrics 00:14:52.882 rmmod nvme_keyring 00:14:53.141 01:36:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:53.141 01:36:05 -- nvmf/common.sh@123 -- # set -e 00:14:53.141 01:36:05 -- nvmf/common.sh@124 -- # return 0 00:14:53.141 01:36:05 -- nvmf/common.sh@477 -- # '[' -n 3743408 ']' 00:14:53.141 01:36:05 -- nvmf/common.sh@478 -- # killprocess 3743408 00:14:53.141 01:36:05 -- common/autotest_common.sh@926 -- # '[' -z 3743408 ']' 00:14:53.141 01:36:05 -- common/autotest_common.sh@930 -- # kill -0 3743408 00:14:53.141 01:36:05 -- common/autotest_common.sh@931 -- # uname 00:14:53.141 01:36:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:53.141 01:36:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3743408 00:14:53.141 01:36:06 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:14:53.141 01:36:06 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:14:53.141 01:36:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3743408' 00:14:53.141 killing process with pid 3743408 00:14:53.141 01:36:06 -- common/autotest_common.sh@945 -- # kill 3743408 00:14:53.141 01:36:06 -- common/autotest_common.sh@950 -- 
# wait 3743408 00:14:53.401 01:36:06 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:53.401 01:36:06 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:53.401 01:36:06 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:53.401 01:36:06 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:53.401 01:36:06 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:53.401 01:36:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:53.401 01:36:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:53.401 01:36:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:55.305 01:36:08 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:14:55.305 00:14:55.305 real 0m8.872s 00:14:55.305 user 0m6.594s 00:14:55.305 sys 0m4.081s 00:14:55.305 01:36:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:55.305 01:36:08 -- common/autotest_common.sh@10 -- # set +x 00:14:55.305 ************************************ 00:14:55.305 END TEST nvmf_fused_ordering 00:14:55.305 ************************************ 00:14:55.305 01:36:08 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:55.305 01:36:08 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:55.305 01:36:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:55.305 01:36:08 -- common/autotest_common.sh@10 -- # set +x 00:14:55.305 ************************************ 00:14:55.305 START TEST nvmf_delete_subsystem 00:14:55.305 ************************************ 00:14:55.305 01:36:08 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:55.305 * Looking for test storage... 
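The nvmftestfini teardown traced above (nvmf/common.sh@120-122) unloads the NVMe/TCP kernel modules inside a `for i in {1..20}` retry loop. A minimal sketch of that sequence, assuming root and loaded modules; the exact loop body in common.sh may differ:

```shell
# Sketch of the module teardown traced above (requires root).
# The harness retries because nvme-tcp can report "module in use"
# until in-flight NVMe/TCP connections drain.
for i in {1..20}; do
    modprobe -v -r nvme-tcp && break   # rmmod output shows nvme_tcp, nvme_fabrics, nvme_keyring going away
    sleep 1
done
modprobe -v -r nvme-fabrics            # final pass in case a dependency was left behind
```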
00:14:55.305 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:55.305 01:36:08 -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:55.305 01:36:08 -- nvmf/common.sh@7 -- # uname -s 00:14:55.305 01:36:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:55.305 01:36:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:55.305 01:36:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:55.305 01:36:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:55.305 01:36:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:55.305 01:36:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:55.305 01:36:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:55.305 01:36:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:55.305 01:36:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:55.305 01:36:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:55.305 01:36:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:55.305 01:36:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:55.305 01:36:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:55.305 01:36:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:55.305 01:36:08 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:55.305 01:36:08 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:55.305 01:36:08 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:55.305 01:36:08 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:55.305 01:36:08 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:55.305 01:36:08 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.305 01:36:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.305 01:36:08 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.305 01:36:08 -- paths/export.sh@5 -- # export PATH 00:14:55.305 01:36:08 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.305 01:36:08 -- nvmf/common.sh@46 -- # : 0 00:14:55.305 01:36:08 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:55.305 01:36:08 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:55.305 01:36:08 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:55.305 01:36:08 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:55.305 01:36:08 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:55.305 01:36:08 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:55.305 01:36:08 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:55.305 01:36:08 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:55.305 01:36:08 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:14:55.305 01:36:08 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:55.305 01:36:08 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:55.305 01:36:08 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:55.305 01:36:08 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:55.305 01:36:08 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:55.305 01:36:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:55.305 01:36:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:55.305 01:36:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:55.305 01:36:08 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:55.305 01:36:08 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:55.305 01:36:08 
-- nvmf/common.sh@284 -- # xtrace_disable 00:14:55.305 01:36:08 -- common/autotest_common.sh@10 -- # set +x 00:14:57.208 01:36:10 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:57.208 01:36:10 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:57.208 01:36:10 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:57.208 01:36:10 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:57.208 01:36:10 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:57.208 01:36:10 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:57.208 01:36:10 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:57.208 01:36:10 -- nvmf/common.sh@294 -- # net_devs=() 00:14:57.208 01:36:10 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:57.208 01:36:10 -- nvmf/common.sh@295 -- # e810=() 00:14:57.208 01:36:10 -- nvmf/common.sh@295 -- # local -ga e810 00:14:57.208 01:36:10 -- nvmf/common.sh@296 -- # x722=() 00:14:57.208 01:36:10 -- nvmf/common.sh@296 -- # local -ga x722 00:14:57.208 01:36:10 -- nvmf/common.sh@297 -- # mlx=() 00:14:57.208 01:36:10 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:57.208 01:36:10 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:57.208 01:36:10 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:57.208 01:36:10 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:57.208 01:36:10 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:57.208 01:36:10 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:57.208 01:36:10 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:57.208 01:36:10 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:57.208 01:36:10 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:57.208 01:36:10 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:57.208 01:36:10 -- nvmf/common.sh@316 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:57.208 01:36:10 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:57.208 01:36:10 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:57.208 01:36:10 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:14:57.208 01:36:10 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:14:57.208 01:36:10 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:14:57.208 01:36:10 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:14:57.208 01:36:10 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:57.208 01:36:10 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:57.208 01:36:10 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:57.208 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:57.208 01:36:10 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:57.208 01:36:10 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:57.208 01:36:10 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:57.208 01:36:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:57.208 01:36:10 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:57.208 01:36:10 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:57.208 01:36:10 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:57.208 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:57.208 01:36:10 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:57.208 01:36:10 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:57.208 01:36:10 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:57.208 01:36:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:57.208 01:36:10 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:57.208 01:36:10 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:57.208 01:36:10 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:14:57.208 01:36:10 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:14:57.208 01:36:10 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 
00:14:57.208 01:36:10 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:57.208 01:36:10 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:57.208 01:36:10 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:57.208 01:36:10 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:57.208 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:57.208 01:36:10 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:57.208 01:36:10 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:57.208 01:36:10 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:57.208 01:36:10 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:57.208 01:36:10 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:57.208 01:36:10 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:57.208 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:57.208 01:36:10 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:57.208 01:36:10 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:57.208 01:36:10 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:57.208 01:36:10 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:57.208 01:36:10 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:14:57.208 01:36:10 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:14:57.208 01:36:10 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:57.208 01:36:10 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:57.208 01:36:10 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:57.208 01:36:10 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:14:57.208 01:36:10 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:57.208 01:36:10 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:57.208 01:36:10 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:14:57.208 01:36:10 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
00:14:57.208 01:36:10 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:57.208 01:36:10 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:14:57.208 01:36:10 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:14:57.208 01:36:10 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:14:57.467 01:36:10 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:57.467 01:36:10 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:57.467 01:36:10 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:57.467 01:36:10 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:14:57.467 01:36:10 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:57.467 01:36:10 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:57.467 01:36:10 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:57.467 01:36:10 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:14:57.467 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:57.467 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.136 ms 00:14:57.467 00:14:57.467 --- 10.0.0.2 ping statistics --- 00:14:57.467 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:57.467 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:14:57.467 01:36:10 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:57.467 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:57.467 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:14:57.467 00:14:57.467 --- 10.0.0.1 ping statistics --- 00:14:57.467 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:57.467 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:14:57.467 01:36:10 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:57.467 01:36:10 -- nvmf/common.sh@410 -- # return 0 00:14:57.467 01:36:10 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:57.467 01:36:10 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:57.467 01:36:10 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:57.467 01:36:10 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:57.467 01:36:10 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:57.467 01:36:10 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:57.467 01:36:10 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:57.467 01:36:10 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:14:57.467 01:36:10 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:57.467 01:36:10 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:57.467 01:36:10 -- common/autotest_common.sh@10 -- # set +x 00:14:57.467 01:36:10 -- nvmf/common.sh@469 -- # nvmfpid=3746444 00:14:57.467 01:36:10 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:14:57.467 01:36:10 -- nvmf/common.sh@470 -- # waitforlisten 3746444 00:14:57.467 01:36:10 -- common/autotest_common.sh@819 -- # '[' -z 3746444 ']' 00:14:57.467 01:36:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:57.467 01:36:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:57.467 01:36:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
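The test network prepared in the nvmf_tcp_init trace above can be summarized as the following command sequence. Interface names (cvl_0_0/cvl_0_1), the namespace name, and the 10.0.0.0/24 addresses are taken directly from this log; the commands require root:

```shell
# Sketch of the NVMe/TCP test topology built by nvmf_tcp_init above:
# target interface moves into an isolated namespace, initiator stays in the default one.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # admit NVMe/TCP traffic
ping -c 1 10.0.0.2                                               # verify target reachability
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                 # and the reverse path
```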
00:14:57.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:57.467 01:36:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:57.467 01:36:10 -- common/autotest_common.sh@10 -- # set +x 00:14:57.467 [2024-07-23 01:36:10.502470] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:14:57.467 [2024-07-23 01:36:10.502558] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:57.467 EAL: No free 2048 kB hugepages reported on node 1 00:14:57.726 [2024-07-23 01:36:10.566650] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:57.726 [2024-07-23 01:36:10.651534] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:57.726 [2024-07-23 01:36:10.651712] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:57.726 [2024-07-23 01:36:10.651732] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:57.726 [2024-07-23 01:36:10.651744] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:57.726 [2024-07-23 01:36:10.651807] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:57.726 [2024-07-23 01:36:10.651812] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:58.660 01:36:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:58.660 01:36:11 -- common/autotest_common.sh@852 -- # return 0 00:14:58.660 01:36:11 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:58.660 01:36:11 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:58.660 01:36:11 -- common/autotest_common.sh@10 -- # set +x 00:14:58.660 01:36:11 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:58.660 01:36:11 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:58.660 01:36:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:58.660 01:36:11 -- common/autotest_common.sh@10 -- # set +x 00:14:58.660 [2024-07-23 01:36:11.467691] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:58.660 01:36:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:58.660 01:36:11 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:58.660 01:36:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:58.660 01:36:11 -- common/autotest_common.sh@10 -- # set +x 00:14:58.660 01:36:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:58.660 01:36:11 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:58.660 01:36:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:58.660 01:36:11 -- common/autotest_common.sh@10 -- # set +x 00:14:58.660 [2024-07-23 01:36:11.483831] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:58.660 01:36:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
00:14:58.660 01:36:11 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:58.660 01:36:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:58.660 01:36:11 -- common/autotest_common.sh@10 -- # set +x 00:14:58.660 NULL1 00:14:58.660 01:36:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:58.660 01:36:11 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:58.660 01:36:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:58.660 01:36:11 -- common/autotest_common.sh@10 -- # set +x 00:14:58.660 Delay0 00:14:58.660 01:36:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:58.660 01:36:11 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:58.660 01:36:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:58.660 01:36:11 -- common/autotest_common.sh@10 -- # set +x 00:14:58.660 01:36:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:58.660 01:36:11 -- target/delete_subsystem.sh@28 -- # perf_pid=3746527 00:14:58.660 01:36:11 -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:58.660 01:36:11 -- target/delete_subsystem.sh@30 -- # sleep 2 00:14:58.660 EAL: No free 2048 kB hugepages reported on node 1 00:14:58.660 [2024-07-23 01:36:11.558625] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
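The target configuration traced above reduces to a handful of RPCs against the running `nvmf_tgt`. A hedged sketch using the in-tree `rpc.py` client (the client path is assumed from the workspace layout; the test script itself issues these via its `rpc_cmd` wrapper):

```shell
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# TCP transport with C2H success optimization (-o) and 8192-byte in-capsule data
$RPC nvmf_create_transport -t tcp -o -u 8192
# Subsystem allowing any host (-a), capped at 10 namespaces (-m 10)
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# Null bdev wrapped in a delay bdev (1s average latencies) so that
# nvmf_delete_subsystem later races against slow in-flight I/O
$RPC bdev_null_create NULL1 1000 512
$RPC bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
```

The delay bdev is the point of the test: with ~1s per I/O and `spdk_nvme_perf` running at queue depth 128, deleting the subsystem mid-run is guaranteed to abort in-flight commands, which is what produces the `sc=8` completion errors that follow.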
00:15:00.558 01:36:13 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:00.558 01:36:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:00.558 01:36:13 -- common/autotest_common.sh@10 -- # set +x 00:15:00.558 Write completed with error (sct=0, sc=8) 00:15:00.558 Write completed with error (sct=0, sc=8) 00:15:00.558 Write completed with error (sct=0, sc=8) 00:15:00.558 Write completed with error (sct=0, sc=8) 00:15:00.558 starting I/O failed: -6 00:15:00.558 Read completed with error (sct=0, sc=8) 00:15:00.558 Read completed with error (sct=0, sc=8) 00:15:00.558 Read completed with error (sct=0, sc=8) 00:15:00.558 Write completed with error (sct=0, sc=8) 00:15:00.558 starting I/O failed: -6 00:15:00.558 Read completed with error (sct=0, sc=8) 00:15:00.558 Read completed with error (sct=0, sc=8) 00:15:00.558 Write completed with error (sct=0, sc=8) 00:15:00.558 Read completed with error (sct=0, sc=8) 00:15:00.558 starting I/O failed: -6 00:15:00.558 Write completed with error (sct=0, sc=8) 00:15:00.558 Read completed with error (sct=0, sc=8) 00:15:00.558 Read completed with error (sct=0, sc=8) 00:15:00.558 Write completed with error (sct=0, sc=8) 00:15:00.558 starting I/O failed: -6 00:15:00.558 Read completed with error (sct=0, sc=8) 00:15:00.558 Read completed with error (sct=0, sc=8) 00:15:00.558 Read completed with error (sct=0, sc=8) 00:15:00.558 Read completed with error (sct=0, sc=8) 00:15:00.558 starting I/O failed: -6 00:15:00.558 Read completed with error (sct=0, sc=8) 00:15:00.558 Read completed with error (sct=0, sc=8) 00:15:00.558 Read completed with error (sct=0, sc=8) 00:15:00.558 Read completed with error (sct=0, sc=8) 00:15:00.558 starting I/O failed: -6 00:15:00.558 Read completed with error (sct=0, sc=8) 00:15:00.558 Read completed with error (sct=0, sc=8) 00:15:00.558 Read completed with error (sct=0, sc=8) 00:15:00.558 Read completed with error (sct=0, sc=8) 00:15:00.558 starting I/O 
failed: -6 00:15:00.558 Write completed with error (sct=0, sc=8) 00:15:00.558 Write completed with error (sct=0, sc=8) 00:15:00.558 Read completed with error (sct=0, sc=8) 00:15:00.558 Read completed with error (sct=0, sc=8) 00:15:00.558 starting I/O failed: -6 00:15:00.558 Write completed with error (sct=0, sc=8) 00:15:00.558 Read completed with error (sct=0, sc=8) 00:15:00.558 Write completed with error (sct=0, sc=8) 00:15:00.558 Read completed with error (sct=0, sc=8) 00:15:00.558 starting I/O failed: -6 00:15:00.558 Read completed with error (sct=0, sc=8) 00:15:00.558 Read completed with error (sct=0, sc=8) 00:15:00.558 Read completed with error (sct=0, sc=8) 00:15:00.558 [2024-07-23 01:36:13.609125] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f780c00bf20 is same with the state(5) to be set 00:15:00.558 Read completed with error (sct=0, sc=8) 00:15:00.558 Read completed with error (sct=0, sc=8) 00:15:00.558 Read completed with error (sct=0, sc=8) 00:15:00.558 Read completed with error (sct=0, sc=8) 00:15:00.558 Read completed with error (sct=0, sc=8) 00:15:00.558 Read completed with error (sct=0, sc=8) 00:15:00.558 Write completed with error (sct=0, sc=8) 00:15:00.558 Read completed with error (sct=0, sc=8) 00:15:00.558 Read completed with error (sct=0, sc=8) 00:15:00.558 Read completed with error (sct=0, sc=8) 00:15:00.558 Write completed with error (sct=0, sc=8) 00:15:00.558 Read completed with error (sct=0, sc=8) 00:15:00.558 Read completed with error (sct=0, sc=8) 00:15:00.558 Read completed with error (sct=0, sc=8) 00:15:00.558 Read completed with error (sct=0, sc=8) 00:15:00.558 Read completed with error (sct=0, sc=8) 00:15:00.558 Read completed with error (sct=0, sc=8) 00:15:00.558 Write completed with error (sct=0, sc=8) 00:15:00.558 Read completed with error (sct=0, sc=8) 00:15:00.558 Read completed with error (sct=0, sc=8) 00:15:00.558 Read completed with error (sct=0, sc=8) 00:15:00.558 Write completed with 
error (sct=0, sc=8) 00:15:00.558 Write completed with error (sct=0, sc=8) 00:15:00.558 Read completed with error (sct=0, sc=8) 00:15:00.558 Read completed with error (sct=0, sc=8) 00:15:00.558 Read completed with error (sct=0, sc=8) 00:15:00.558 Read completed with error (sct=0, sc=8) 00:15:00.558 Read completed with error (sct=0, sc=8) 00:15:00.558 Write completed with error (sct=0, sc=8) 00:15:00.558 Read completed with error (sct=0, sc=8) 00:15:00.558 Write completed with error (sct=0, sc=8) 00:15:00.558 Read completed with error (sct=0, sc=8) 00:15:00.558 Write completed with error (sct=0, sc=8) 00:15:00.558 Read completed with error (sct=0, sc=8) 00:15:00.558 Read completed with error (sct=0, sc=8) 00:15:00.558 Write completed with error (sct=0, sc=8) 00:15:00.558 Read completed with error (sct=0, sc=8) 00:15:00.558 Write completed with error (sct=0, sc=8) 00:15:00.558 Read completed with error (sct=0, sc=8) 00:15:00.558 Read completed with error (sct=0, sc=8) 00:15:00.558 Read completed with error (sct=0, sc=8) 00:15:00.558 Write completed with error (sct=0, sc=8) 00:15:00.558 Write completed with error (sct=0, sc=8) 00:15:00.558 Read completed with error (sct=0, sc=8) 00:15:00.558 Read completed with error (sct=0, sc=8) 00:15:00.558 Write completed with error (sct=0, sc=8) 00:15:00.558 Read completed with error (sct=0, sc=8) 00:15:00.558 Write completed with error (sct=0, sc=8) 00:15:00.558 Write completed with error (sct=0, sc=8) 00:15:00.558 [2024-07-23 01:36:13.609789] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f780c000c00 is same with the state(5) to be set 00:15:00.558 Read completed with error (sct=0, sc=8) 00:15:00.558 Read completed with error (sct=0, sc=8) 00:15:00.558 Read completed with error (sct=0, sc=8) 00:15:00.558 starting I/O failed: -6 00:15:00.558 Read completed with error (sct=0, sc=8) 00:15:00.558 Read completed with error (sct=0, sc=8) 00:15:00.558 Read completed with error (sct=0, sc=8) 
00:15:00.558 Write completed with error (sct=0, sc=8) 00:15:00.558 starting I/O failed: -6 00:15:00.558 Write completed with error (sct=0, sc=8) 00:15:00.558 Read completed with error (sct=0, sc=8) 00:15:00.558 Read completed with error (sct=0, sc=8) 00:15:00.558 Read completed with error (sct=0, sc=8) 00:15:00.558 starting I/O failed: -6 00:15:00.558 Read completed with error (sct=0, sc=8) 00:15:00.558 Read completed with error (sct=0, sc=8) 00:15:00.558 Read completed with error (sct=0, sc=8) 00:15:00.558 Read completed with error (sct=0, sc=8) 00:15:00.558 starting I/O failed: -6 00:15:00.559 Write completed with error (sct=0, sc=8) 00:15:00.559 Write completed with error (sct=0, sc=8) 00:15:00.559 Read completed with error (sct=0, sc=8) 00:15:00.559 Read completed with error (sct=0, sc=8) 00:15:00.559 starting I/O failed: -6 00:15:00.559 Read completed with error (sct=0, sc=8) 00:15:00.559 Write completed with error (sct=0, sc=8) 00:15:00.559 Write completed with error (sct=0, sc=8) 00:15:00.559 Read completed with error (sct=0, sc=8) 00:15:00.559 starting I/O failed: -6 00:15:00.559 Read completed with error (sct=0, sc=8) 00:15:00.559 Write completed with error (sct=0, sc=8) 00:15:00.559 Write completed with error (sct=0, sc=8) 00:15:00.559 Read completed with error (sct=0, sc=8) 00:15:00.559 starting I/O failed: -6 00:15:00.559 Read completed with error (sct=0, sc=8) 00:15:00.559 Read completed with error (sct=0, sc=8) 00:15:00.559 Read completed with error (sct=0, sc=8) 00:15:00.559 Read completed with error (sct=0, sc=8) 00:15:00.559 starting I/O failed: -6 00:15:00.559 Write completed with error (sct=0, sc=8) 00:15:00.559 Read completed with error (sct=0, sc=8) 00:15:00.559 Write completed with error (sct=0, sc=8) 00:15:00.559 Read completed with error (sct=0, sc=8) 00:15:00.559 starting I/O failed: -6 00:15:00.559 Read completed with error (sct=0, sc=8) 00:15:00.559 Write completed with error (sct=0, sc=8) 00:15:00.559 Read completed with error (sct=0, 
sc=8) 00:15:00.559 Write completed with error (sct=0, sc=8) 00:15:00.559 starting I/O failed: -6 00:15:00.559 Read completed with error (sct=0, sc=8) 00:15:00.559 Write completed with error (sct=0, sc=8) 00:15:00.559 Read completed with error (sct=0, sc=8) 00:15:00.559 Write completed with error (sct=0, sc=8) 00:15:00.559 starting I/O failed: -6 00:15:00.559 Read completed with error (sct=0, sc=8) 00:15:00.559 Read completed with error (sct=0, sc=8) 00:15:00.559 Read completed with error (sct=0, sc=8) 00:15:00.559 Read completed with error (sct=0, sc=8) 00:15:00.559 starting I/O failed: -6 00:15:00.559 Read completed with error (sct=0, sc=8) 00:15:00.559 Write completed with error (sct=0, sc=8) 00:15:00.559 Read completed with error (sct=0, sc=8) 00:15:00.559 Read completed with error (sct=0, sc=8) 00:15:00.559 starting I/O failed: -6 00:15:00.559 Read completed with error (sct=0, sc=8) 00:15:00.559 [2024-07-23 01:36:13.610398] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2438820 is same with the state(5) to be set 00:15:00.559 Read completed with error (sct=0, sc=8) 00:15:00.559 Write completed with error (sct=0, sc=8) 00:15:00.559 Read completed with error (sct=0, sc=8) 00:15:00.559 Read completed with error (sct=0, sc=8) 00:15:00.559 Read completed with error (sct=0, sc=8) 00:15:00.559 Read completed with error (sct=0, sc=8) 00:15:00.559 Write completed with error (sct=0, sc=8) 00:15:00.559 Read completed with error (sct=0, sc=8) 00:15:00.559 Write completed with error (sct=0, sc=8) 00:15:00.559 Read completed with error (sct=0, sc=8) 00:15:00.559 Read completed with error (sct=0, sc=8) 00:15:00.559 Write completed with error (sct=0, sc=8) 00:15:00.559 Read completed with error (sct=0, sc=8) 00:15:00.559 Write completed with error (sct=0, sc=8) 00:15:00.559 Read completed with error (sct=0, sc=8) 00:15:00.559 Read completed with error (sct=0, sc=8) 00:15:00.559 Read completed with error (sct=0, sc=8) 00:15:00.559 Read 
completed with error (sct=0, sc=8) 00:15:00.559 Read completed with error (sct=0, sc=8) 00:15:00.559 Read completed with error (sct=0, sc=8) 00:15:00.559 Read completed with error (sct=0, sc=8) 00:15:00.559 Read completed with error (sct=0, sc=8) 00:15:00.559 Read completed with error (sct=0, sc=8) 00:15:00.559 Read completed with error (sct=0, sc=8) 00:15:00.559 Write completed with error (sct=0, sc=8) 00:15:00.559 Write completed with error (sct=0, sc=8) 00:15:00.559 Read completed with error (sct=0, sc=8) 00:15:00.559 Read completed with error (sct=0, sc=8) 00:15:00.559 Read completed with error (sct=0, sc=8) 00:15:00.559 Read completed with error (sct=0, sc=8) 00:15:00.559 Read completed with error (sct=0, sc=8) 00:15:00.559 Read completed with error (sct=0, sc=8) 00:15:00.559 Write completed with error (sct=0, sc=8) 00:15:00.559 Read completed with error (sct=0, sc=8) 00:15:00.559 Read completed with error (sct=0, sc=8) 00:15:00.559 Read completed with error (sct=0, sc=8) 00:15:00.559 Write completed with error (sct=0, sc=8) 00:15:00.559 Write completed with error (sct=0, sc=8) 00:15:00.559 Read completed with error (sct=0, sc=8) 00:15:00.559 Write completed with error (sct=0, sc=8) 00:15:00.559 Read completed with error (sct=0, sc=8) 00:15:00.559 Read completed with error (sct=0, sc=8) 00:15:00.559 Read completed with error (sct=0, sc=8) 00:15:00.559 Read completed with error (sct=0, sc=8) 00:15:00.559 Read completed with error (sct=0, sc=8) 00:15:00.559 Write completed with error (sct=0, sc=8) 00:15:00.559 Read completed with error (sct=0, sc=8) 00:15:00.559 Write completed with error (sct=0, sc=8) 00:15:00.559 [2024-07-23 01:36:13.610702] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f780c00c480 is same with the state(5) to be set 00:15:01.489 [2024-07-23 01:36:14.575166] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241ed70 is same with the state(5) to be set 00:15:01.764 Write 
completed with error (sct=0, sc=8) 00:15:01.764 Read completed with error (sct=0, sc=8) 00:15:01.764 Read completed with error (sct=0, sc=8) 00:15:01.764 Read completed with error (sct=0, sc=8) 00:15:01.764 Read completed with error (sct=0, sc=8) 00:15:01.764 Read completed with error (sct=0, sc=8) 00:15:01.764 Write completed with error (sct=0, sc=8) 00:15:01.764 Read completed with error (sct=0, sc=8) 00:15:01.764 Read completed with error (sct=0, sc=8) 00:15:01.764 Write completed with error (sct=0, sc=8) 00:15:01.764 Read completed with error (sct=0, sc=8) 00:15:01.764 Read completed with error (sct=0, sc=8) 00:15:01.764 Write completed with error (sct=0, sc=8) 00:15:01.764 Read completed with error (sct=0, sc=8) 00:15:01.764 Read completed with error (sct=0, sc=8) 00:15:01.764 Read completed with error (sct=0, sc=8) 00:15:01.764 Read completed with error (sct=0, sc=8) 00:15:01.764 Write completed with error (sct=0, sc=8) 00:15:01.764 Read completed with error (sct=0, sc=8) 00:15:01.764 Read completed with error (sct=0, sc=8) 00:15:01.764 Write completed with error (sct=0, sc=8) 00:15:01.764 Read completed with error (sct=0, sc=8) 00:15:01.765 Write completed with error (sct=0, sc=8) 00:15:01.765 Write completed with error (sct=0, sc=8) 00:15:01.765 Read completed with error (sct=0, sc=8) 00:15:01.765 Write completed with error (sct=0, sc=8) 00:15:01.765 Write completed with error (sct=0, sc=8) 00:15:01.765 Write completed with error (sct=0, sc=8) 00:15:01.765 Write completed with error (sct=0, sc=8) 00:15:01.765 Write completed with error (sct=0, sc=8) 00:15:01.765 Read completed with error (sct=0, sc=8) 00:15:01.765 Read completed with error (sct=0, sc=8) 00:15:01.765 Read completed with error (sct=0, sc=8) 00:15:01.765 Write completed with error (sct=0, sc=8) 00:15:01.765 Read completed with error (sct=0, sc=8) 00:15:01.765 Read completed with error (sct=0, sc=8) 00:15:01.765 Read completed with error (sct=0, sc=8) 00:15:01.765 Read completed with error 
(sct=0, sc=8) 00:15:01.765 [2024-07-23 01:36:14.612817] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24383f0 is same with the state(5) to be set 00:15:01.765 Read completed with error (sct=0, sc=8) 00:15:01.765 Read completed with error (sct=0, sc=8) 00:15:01.765 Write completed with error (sct=0, sc=8) 00:15:01.765 Read completed with error (sct=0, sc=8) 00:15:01.765 Read completed with error (sct=0, sc=8) 00:15:01.765 Read completed with error (sct=0, sc=8) 00:15:01.765 Write completed with error (sct=0, sc=8) 00:15:01.765 Write completed with error (sct=0, sc=8) 00:15:01.765 Read completed with error (sct=0, sc=8) 00:15:01.765 Read completed with error (sct=0, sc=8) 00:15:01.765 Read completed with error (sct=0, sc=8) 00:15:01.765 Read completed with error (sct=0, sc=8) 00:15:01.765 Read completed with error (sct=0, sc=8) 00:15:01.765 Write completed with error (sct=0, sc=8) 00:15:01.765 Read completed with error (sct=0, sc=8) 00:15:01.765 Read completed with error (sct=0, sc=8) 00:15:01.765 Read completed with error (sct=0, sc=8) 00:15:01.765 Read completed with error (sct=0, sc=8) 00:15:01.765 Write completed with error (sct=0, sc=8) 00:15:01.765 Write completed with error (sct=0, sc=8) 00:15:01.765 Read completed with error (sct=0, sc=8) 00:15:01.765 Read completed with error (sct=0, sc=8) 00:15:01.765 Read completed with error (sct=0, sc=8) 00:15:01.765 Read completed with error (sct=0, sc=8) 00:15:01.765 Read completed with error (sct=0, sc=8) 00:15:01.765 Read completed with error (sct=0, sc=8) 00:15:01.765 Read completed with error (sct=0, sc=8) 00:15:01.765 Read completed with error (sct=0, sc=8) 00:15:01.765 Write completed with error (sct=0, sc=8) 00:15:01.765 Write completed with error (sct=0, sc=8) 00:15:01.765 Write completed with error (sct=0, sc=8) 00:15:01.765 Write completed with error (sct=0, sc=8) 00:15:01.765 Read completed with error (sct=0, sc=8) 00:15:01.765 Read completed with error (sct=0, sc=8) 
00:15:01.765 Read completed with error (sct=0, sc=8) 00:15:01.765 Write completed with error (sct=0, sc=8) 00:15:01.765 Read completed with error (sct=0, sc=8) 00:15:01.765 Read completed with error (sct=0, sc=8) 00:15:01.765 [2024-07-23 01:36:14.613067] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2438570 is same with the state(5) to be set 00:15:01.765 Read completed with error (sct=0, sc=8) 00:15:01.765 Read completed with error (sct=0, sc=8) 00:15:01.765 Read completed with error (sct=0, sc=8) 00:15:01.765 Write completed with error (sct=0, sc=8) 00:15:01.765 Read completed with error (sct=0, sc=8) 00:15:01.765 Read completed with error (sct=0, sc=8) 00:15:01.765 Write completed with error (sct=0, sc=8) 00:15:01.765 Read completed with error (sct=0, sc=8) 00:15:01.765 Read completed with error (sct=0, sc=8) 00:15:01.765 Write completed with error (sct=0, sc=8) 00:15:01.765 Write completed with error (sct=0, sc=8) 00:15:01.765 Read completed with error (sct=0, sc=8) 00:15:01.765 Write completed with error (sct=0, sc=8) 00:15:01.765 Read completed with error (sct=0, sc=8) 00:15:01.765 Read completed with error (sct=0, sc=8) 00:15:01.765 Read completed with error (sct=0, sc=8) 00:15:01.765 Read completed with error (sct=0, sc=8) 00:15:01.765 Read completed with error (sct=0, sc=8) 00:15:01.765 Read completed with error (sct=0, sc=8) 00:15:01.765 Write completed with error (sct=0, sc=8) 00:15:01.765 Write completed with error (sct=0, sc=8) 00:15:01.765 Write completed with error (sct=0, sc=8) 00:15:01.765 Write completed with error (sct=0, sc=8) 00:15:01.765 Read completed with error (sct=0, sc=8) 00:15:01.765 Write completed with error (sct=0, sc=8) 00:15:01.765 Write completed with error (sct=0, sc=8) 00:15:01.765 Write completed with error (sct=0, sc=8) 00:15:01.765 Read completed with error (sct=0, sc=8) 00:15:01.765 Read completed with error (sct=0, sc=8) 00:15:01.765 Read completed with error (sct=0, sc=8) 00:15:01.765 
Read completed with error (sct=0, sc=8) 00:15:01.765 Read completed with error (sct=0, sc=8) 00:15:01.765 Write completed with error (sct=0, sc=8) 00:15:01.765 Read completed with error (sct=0, sc=8) 00:15:01.765 Read completed with error (sct=0, sc=8) 00:15:01.765 Write completed with error (sct=0, sc=8) 00:15:01.765 Write completed with error (sct=0, sc=8) 00:15:01.765 Read completed with error (sct=0, sc=8) 00:15:01.765 Read completed with error (sct=0, sc=8) 00:15:01.765 [2024-07-23 01:36:14.613312] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2420230 is same with the state(5) to be set 00:15:01.765 Read completed with error (sct=0, sc=8) 00:15:01.765 Read completed with error (sct=0, sc=8) 00:15:01.765 Read completed with error (sct=0, sc=8) 00:15:01.765 Read completed with error (sct=0, sc=8) 00:15:01.765 Read completed with error (sct=0, sc=8) 00:15:01.765 Read completed with error (sct=0, sc=8) 00:15:01.765 Read completed with error (sct=0, sc=8) 00:15:01.765 Read completed with error (sct=0, sc=8) 00:15:01.765 Write completed with error (sct=0, sc=8) 00:15:01.765 Write completed with error (sct=0, sc=8) 00:15:01.765 Read completed with error (sct=0, sc=8) 00:15:01.765 Write completed with error (sct=0, sc=8) 00:15:01.765 Write completed with error (sct=0, sc=8) 00:15:01.765 Write completed with error (sct=0, sc=8) 00:15:01.765 Write completed with error (sct=0, sc=8) 00:15:01.765 Read completed with error (sct=0, sc=8) 00:15:01.765 Read completed with error (sct=0, sc=8) 00:15:01.765 Read completed with error (sct=0, sc=8) 00:15:01.765 Read completed with error (sct=0, sc=8) 00:15:01.765 Read completed with error (sct=0, sc=8) 00:15:01.765 Read completed with error (sct=0, sc=8) 00:15:01.765 Write completed with error (sct=0, sc=8) 00:15:01.765 [2024-07-23 01:36:14.613510] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f780c00c1d0 is same with the state(5) to be set 00:15:01.765 
[2024-07-23 01:36:14.614484] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241ed70 (9): Bad file descriptor 00:15:01.765 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:15:01.765 01:36:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:01.765 01:36:14 -- target/delete_subsystem.sh@34 -- # delay=0 00:15:01.765 01:36:14 -- target/delete_subsystem.sh@35 -- # kill -0 3746527 00:15:01.765 01:36:14 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:15:01.765 Initializing NVMe Controllers 00:15:01.765 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:01.765 Controller IO queue size 128, less than required. 00:15:01.765 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:01.765 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:15:01.765 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:15:01.765 Initialization complete. Launching workers. 
00:15:01.765 ======================================================== 00:15:01.765 Latency(us) 00:15:01.765 Device Information : IOPS MiB/s Average min max 00:15:01.765 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 186.04 0.09 994743.00 1700.18 2003697.61 00:15:01.765 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 154.79 0.08 897385.51 668.21 2003701.90 00:15:01.765 ======================================================== 00:15:01.765 Total : 340.83 0.17 950528.25 668.21 2003701.90 00:15:01.765 00:15:02.058 01:36:15 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:15:02.058 01:36:15 -- target/delete_subsystem.sh@35 -- # kill -0 3746527 00:15:02.058 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3746527) - No such process 00:15:02.058 01:36:15 -- target/delete_subsystem.sh@45 -- # NOT wait 3746527 00:15:02.058 01:36:15 -- common/autotest_common.sh@640 -- # local es=0 00:15:02.058 01:36:15 -- common/autotest_common.sh@642 -- # valid_exec_arg wait 3746527 00:15:02.058 01:36:15 -- common/autotest_common.sh@628 -- # local arg=wait 00:15:02.058 01:36:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:02.058 01:36:15 -- common/autotest_common.sh@632 -- # type -t wait 00:15:02.058 01:36:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:02.058 01:36:15 -- common/autotest_common.sh@643 -- # wait 3746527 00:15:02.058 01:36:15 -- common/autotest_common.sh@643 -- # es=1 00:15:02.058 01:36:15 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:15:02.058 01:36:15 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:15:02.058 01:36:15 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:15:02.058 01:36:15 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:02.058 01:36:15 -- common/autotest_common.sh@551 -- # xtrace_disable 
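The `NOT wait 3746527` sequence above checks that waiting on the dead perf process fails as expected. The real helper in `autotest_common.sh` also validates the argument and tracks the exit status (`es=1`), but the core inversion idea can be sketched in a few lines (a simplified stand-in, not the in-tree implementation):

```shell
# Minimal sketch of an expected-failure wrapper: run a command that is
# supposed to fail, and invert its status so the test proceeds only if it did.
NOT() {
    if "$@"; then
        return 1    # command unexpectedly succeeded
    fi
    return 0        # command failed, as expected
}

NOT false && echo "ok"    # prints "ok": false fails, so NOT false succeeds
```

In the log, `wait 3746527` fails because perf already exited (`No such process`), so `NOT wait` returns success and the test moves on to recreating the subsystem.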
00:15:02.058 01:36:15 -- common/autotest_common.sh@10 -- # set +x 00:15:02.058 01:36:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:02.058 01:36:15 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:02.058 01:36:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:02.058 01:36:15 -- common/autotest_common.sh@10 -- # set +x 00:15:02.058 [2024-07-23 01:36:15.137264] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:02.058 01:36:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:02.058 01:36:15 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:02.058 01:36:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:02.058 01:36:15 -- common/autotest_common.sh@10 -- # set +x 00:15:02.324 01:36:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:02.324 01:36:15 -- target/delete_subsystem.sh@54 -- # perf_pid=3747019 00:15:02.324 01:36:15 -- target/delete_subsystem.sh@56 -- # delay=0 00:15:02.324 01:36:15 -- target/delete_subsystem.sh@57 -- # kill -0 3747019 00:15:02.324 01:36:15 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:02.324 01:36:15 -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:15:02.324 EAL: No free 2048 kB hugepages reported on node 1 00:15:02.324 [2024-07-23 01:36:15.200110] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:15:02.582 01:36:15 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:02.582 01:36:15 -- target/delete_subsystem.sh@57 -- # kill -0 3747019 00:15:02.582 01:36:15 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:03.146 01:36:16 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:03.146 01:36:16 -- target/delete_subsystem.sh@57 -- # kill -0 3747019 00:15:03.146 01:36:16 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:03.710 01:36:16 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:03.710 01:36:16 -- target/delete_subsystem.sh@57 -- # kill -0 3747019 00:15:03.710 01:36:16 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:04.273 01:36:17 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:04.273 01:36:17 -- target/delete_subsystem.sh@57 -- # kill -0 3747019 00:15:04.273 01:36:17 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:04.837 01:36:17 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:04.837 01:36:17 -- target/delete_subsystem.sh@57 -- # kill -0 3747019 00:15:04.837 01:36:17 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:05.094 01:36:18 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:05.094 01:36:18 -- target/delete_subsystem.sh@57 -- # kill -0 3747019 00:15:05.094 01:36:18 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:05.659 Initializing NVMe Controllers 00:15:05.659 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:05.659 Controller IO queue size 128, less than required. 00:15:05.659 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:05.659 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:15:05.659 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:15:05.659 Initialization complete. Launching workers. 
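The repeating `kill -0` / `sleep 0.5` lines above are the harness polling until `spdk_nvme_perf` exits, bounded by roughly 10 s (20 iterations of 0.5 s, per the `(( delay++ > 20 ))` guard). A self-contained version of that wait loop, with a short `sleep` standing in for the perf child:

```shell
# Poll a PID with `kill -0` (signal 0 = existence check, nothing delivered)
# until the process exits, giving up after ~10s, mirroring the loop in
# delete_subsystem.sh.
wait_for_exit() {
    local pid=$1 delay=0
    while kill -0 "$pid" 2>/dev/null; do
        (( delay++ > 20 )) && return 1   # budget exhausted: process still alive
        sleep 0.5
    done
    return 0                             # process is gone
}

sleep 1 &                # stand-in for the spdk_nvme_perf child in the log
wait_for_exit $!
echo "exit status: $?"   # prints "exit status: 0"
```

`kill -0` starts failing as soon as the shell reaps the child, which is why the loop in the log ends with the `kill: (3747019) - No such process` message once perf finishes its 3-second run.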
00:15:05.659 ======================================================== 00:15:05.659 Latency(us) 00:15:05.659 Device Information : IOPS MiB/s Average min max 00:15:05.659 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004546.34 1000211.22 1042424.03 00:15:05.659 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004038.86 1000216.57 1011521.46 00:15:05.659 ======================================================== 00:15:05.659 Total : 256.00 0.12 1004292.60 1000211.22 1042424.03 00:15:05.659 00:15:05.659 01:36:18 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:05.659 01:36:18 -- target/delete_subsystem.sh@57 -- # kill -0 3747019 00:15:05.659 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3747019) - No such process 00:15:05.659 01:36:18 -- target/delete_subsystem.sh@67 -- # wait 3747019 00:15:05.659 01:36:18 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:15:05.659 01:36:18 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:15:05.659 01:36:18 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:05.659 01:36:18 -- nvmf/common.sh@116 -- # sync 00:15:05.659 01:36:18 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:05.659 01:36:18 -- nvmf/common.sh@119 -- # set +e 00:15:05.659 01:36:18 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:05.659 01:36:18 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:05.659 rmmod nvme_tcp 00:15:05.659 rmmod nvme_fabrics 00:15:05.659 rmmod nvme_keyring 00:15:05.659 01:36:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:05.659 01:36:18 -- nvmf/common.sh@123 -- # set -e 00:15:05.659 01:36:18 -- nvmf/common.sh@124 -- # return 0 00:15:05.659 01:36:18 -- nvmf/common.sh@477 -- # '[' -n 3746444 ']' 00:15:05.659 01:36:18 -- nvmf/common.sh@478 -- # killprocess 3746444 00:15:05.659 01:36:18 -- common/autotest_common.sh@926 -- # '[' -z 3746444 ']' 00:15:05.659 01:36:18 
-- common/autotest_common.sh@930 -- # kill -0 3746444 00:15:05.659 01:36:18 -- common/autotest_common.sh@931 -- # uname 00:15:05.659 01:36:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:05.659 01:36:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3746444 00:15:05.659 01:36:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:05.659 01:36:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:05.659 01:36:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3746444' 00:15:05.659 killing process with pid 3746444 00:15:05.659 01:36:18 -- common/autotest_common.sh@945 -- # kill 3746444 00:15:05.659 01:36:18 -- common/autotest_common.sh@950 -- # wait 3746444 00:15:05.917 01:36:18 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:05.917 01:36:18 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:05.917 01:36:18 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:05.917 01:36:18 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:05.917 01:36:18 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:05.917 01:36:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:05.917 01:36:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:05.917 01:36:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:08.453 01:36:21 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:15:08.453 00:15:08.453 real 0m12.687s 00:15:08.453 user 0m28.931s 00:15:08.453 sys 0m2.967s 00:15:08.453 01:36:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:08.453 01:36:21 -- common/autotest_common.sh@10 -- # set +x 00:15:08.453 ************************************ 00:15:08.453 END TEST nvmf_delete_subsystem 00:15:08.453 ************************************ 00:15:08.453 01:36:21 -- nvmf/nvmf.sh@36 -- # [[ 1 -eq 1 ]] 00:15:08.453 01:36:21 -- nvmf/nvmf.sh@37 -- # run_test nvmf_nvme_cli 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:08.453 01:36:21 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:08.453 01:36:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:08.453 01:36:21 -- common/autotest_common.sh@10 -- # set +x 00:15:08.453 ************************************ 00:15:08.453 START TEST nvmf_nvme_cli 00:15:08.453 ************************************ 00:15:08.453 01:36:21 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:08.453 * Looking for test storage... 00:15:08.453 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:08.453 01:36:21 -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:08.453 01:36:21 -- nvmf/common.sh@7 -- # uname -s 00:15:08.453 01:36:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:08.453 01:36:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:08.453 01:36:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:08.453 01:36:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:08.453 01:36:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:08.453 01:36:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:08.453 01:36:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:08.453 01:36:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:08.453 01:36:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:08.453 01:36:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:08.453 01:36:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:08.453 01:36:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:08.453 01:36:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:08.453 
01:36:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:08.453 01:36:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:08.453 01:36:21 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:08.453 01:36:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:08.453 01:36:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:08.453 01:36:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:08.453 01:36:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:08.453 01:36:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:08.453 01:36:21 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:08.453 01:36:21 -- paths/export.sh@5 -- # export PATH 00:15:08.453 01:36:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:08.453 01:36:21 -- nvmf/common.sh@46 -- # : 0 00:15:08.453 01:36:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:08.453 01:36:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:08.453 01:36:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:08.454 01:36:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:08.454 01:36:21 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:08.454 01:36:21 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:08.454 01:36:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:08.454 01:36:21 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:08.454 01:36:21 -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:08.454 01:36:21 -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:08.454 01:36:21 -- target/nvme_cli.sh@14 
-- # devs=() 00:15:08.454 01:36:21 -- target/nvme_cli.sh@16 -- # nvmftestinit 00:15:08.454 01:36:21 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:08.454 01:36:21 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:08.454 01:36:21 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:08.454 01:36:21 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:08.454 01:36:21 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:08.454 01:36:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:08.454 01:36:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:08.454 01:36:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:08.454 01:36:21 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:08.454 01:36:21 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:08.454 01:36:21 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:08.454 01:36:21 -- common/autotest_common.sh@10 -- # set +x 00:15:10.358 01:36:23 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:10.358 01:36:23 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:10.358 01:36:23 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:10.358 01:36:23 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:10.359 01:36:23 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:10.359 01:36:23 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:10.359 01:36:23 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:10.359 01:36:23 -- nvmf/common.sh@294 -- # net_devs=() 00:15:10.359 01:36:23 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:10.359 01:36:23 -- nvmf/common.sh@295 -- # e810=() 00:15:10.359 01:36:23 -- nvmf/common.sh@295 -- # local -ga e810 00:15:10.359 01:36:23 -- nvmf/common.sh@296 -- # x722=() 00:15:10.359 01:36:23 -- nvmf/common.sh@296 -- # local -ga x722 00:15:10.359 01:36:23 -- nvmf/common.sh@297 -- # mlx=() 00:15:10.359 01:36:23 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:10.359 01:36:23 -- nvmf/common.sh@300 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:10.359 01:36:23 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:10.359 01:36:23 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:10.359 01:36:23 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:10.359 01:36:23 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:10.359 01:36:23 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:10.359 01:36:23 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:10.359 01:36:23 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:10.359 01:36:23 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:10.359 01:36:23 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:10.359 01:36:23 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:10.359 01:36:23 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:10.359 01:36:23 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:15:10.359 01:36:23 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:15:10.359 01:36:23 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:15:10.359 01:36:23 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:15:10.359 01:36:23 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:10.359 01:36:23 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:10.359 01:36:23 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:10.359 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:10.359 01:36:23 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:10.359 01:36:23 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:10.359 01:36:23 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:10.359 01:36:23 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:10.359 01:36:23 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:10.359 01:36:23 -- 
nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:10.359 01:36:23 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:10.359 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:10.359 01:36:23 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:10.359 01:36:23 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:10.359 01:36:23 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:10.359 01:36:23 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:10.359 01:36:23 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:10.359 01:36:23 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:10.359 01:36:23 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:15:10.359 01:36:23 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:15:10.359 01:36:23 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:10.359 01:36:23 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:10.359 01:36:23 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:10.359 01:36:23 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:10.359 01:36:23 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:10.359 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:10.359 01:36:23 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:10.359 01:36:23 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:10.359 01:36:23 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:10.359 01:36:23 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:10.359 01:36:23 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:10.359 01:36:23 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:10.359 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:10.359 01:36:23 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:10.359 01:36:23 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:10.359 01:36:23 -- nvmf/common.sh@402 
-- # is_hw=yes 00:15:10.359 01:36:23 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:10.359 01:36:23 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:15:10.359 01:36:23 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:15:10.359 01:36:23 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:10.359 01:36:23 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:10.359 01:36:23 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:10.359 01:36:23 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:15:10.359 01:36:23 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:10.359 01:36:23 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:10.359 01:36:23 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:15:10.359 01:36:23 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:10.359 01:36:23 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:10.359 01:36:23 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:15:10.359 01:36:23 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:15:10.359 01:36:23 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:15:10.359 01:36:23 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:10.359 01:36:23 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:10.359 01:36:23 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:10.359 01:36:23 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:15:10.359 01:36:23 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:10.359 01:36:23 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:10.359 01:36:23 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:10.359 01:36:23 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:15:10.359 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:10.359 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.131 ms 00:15:10.359 00:15:10.359 --- 10.0.0.2 ping statistics --- 00:15:10.359 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:10.359 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:15:10.359 01:36:23 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:10.359 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:10.359 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.087 ms 00:15:10.359 00:15:10.359 --- 10.0.0.1 ping statistics --- 00:15:10.359 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:10.359 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:15:10.359 01:36:23 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:10.359 01:36:23 -- nvmf/common.sh@410 -- # return 0 00:15:10.359 01:36:23 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:10.359 01:36:23 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:10.359 01:36:23 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:10.359 01:36:23 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:10.359 01:36:23 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:10.359 01:36:23 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:10.359 01:36:23 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:10.359 01:36:23 -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:15:10.359 01:36:23 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:10.359 01:36:23 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:10.359 01:36:23 -- common/autotest_common.sh@10 -- # set +x 00:15:10.359 01:36:23 -- nvmf/common.sh@469 -- # nvmfpid=3749382 00:15:10.359 01:36:23 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:10.359 01:36:23 -- nvmf/common.sh@470 -- # waitforlisten 3749382 00:15:10.359 01:36:23 -- common/autotest_common.sh@819 
-- # '[' -z 3749382 ']' 00:15:10.359 01:36:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:10.359 01:36:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:10.359 01:36:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:10.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:10.359 01:36:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:10.359 01:36:23 -- common/autotest_common.sh@10 -- # set +x 00:15:10.359 [2024-07-23 01:36:23.204183] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:15:10.359 [2024-07-23 01:36:23.204278] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:10.359 EAL: No free 2048 kB hugepages reported on node 1 00:15:10.359 [2024-07-23 01:36:23.267275] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:10.359 [2024-07-23 01:36:23.356434] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:10.359 [2024-07-23 01:36:23.356629] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:10.359 [2024-07-23 01:36:23.356650] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:10.359 [2024-07-23 01:36:23.356664] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:10.359 [2024-07-23 01:36:23.356733] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:10.359 [2024-07-23 01:36:23.356785] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:10.359 [2024-07-23 01:36:23.356901] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:10.359 [2024-07-23 01:36:23.356904] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:11.291 01:36:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:11.291 01:36:24 -- common/autotest_common.sh@852 -- # return 0 00:15:11.291 01:36:24 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:11.291 01:36:24 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:11.291 01:36:24 -- common/autotest_common.sh@10 -- # set +x 00:15:11.291 01:36:24 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:11.291 01:36:24 -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:11.291 01:36:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:11.291 01:36:24 -- common/autotest_common.sh@10 -- # set +x 00:15:11.291 [2024-07-23 01:36:24.196313] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:11.291 01:36:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:11.291 01:36:24 -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:11.291 01:36:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:11.291 01:36:24 -- common/autotest_common.sh@10 -- # set +x 00:15:11.291 Malloc0 00:15:11.291 01:36:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:11.291 01:36:24 -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:11.291 01:36:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:11.291 01:36:24 -- common/autotest_common.sh@10 -- # set +x 00:15:11.291 Malloc1 00:15:11.291 01:36:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
00:15:11.291 01:36:24 -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:15:11.291 01:36:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:11.291 01:36:24 -- common/autotest_common.sh@10 -- # set +x 00:15:11.291 01:36:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:11.291 01:36:24 -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:11.291 01:36:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:11.291 01:36:24 -- common/autotest_common.sh@10 -- # set +x 00:15:11.291 01:36:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:11.291 01:36:24 -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:11.291 01:36:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:11.291 01:36:24 -- common/autotest_common.sh@10 -- # set +x 00:15:11.291 01:36:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:11.291 01:36:24 -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:11.291 01:36:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:11.291 01:36:24 -- common/autotest_common.sh@10 -- # set +x 00:15:11.291 [2024-07-23 01:36:24.282449] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:11.291 01:36:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:11.291 01:36:24 -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:11.291 01:36:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:11.291 01:36:24 -- common/autotest_common.sh@10 -- # set +x 00:15:11.291 01:36:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:11.291 01:36:24 -- target/nvme_cli.sh@30 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:15:11.549 00:15:11.549 Discovery Log Number of Records 2, Generation counter 2 00:15:11.549 =====Discovery Log Entry 0====== 00:15:11.549 trtype: tcp 00:15:11.549 adrfam: ipv4 00:15:11.549 subtype: current discovery subsystem 00:15:11.549 treq: not required 00:15:11.549 portid: 0 00:15:11.549 trsvcid: 4420 00:15:11.549 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:15:11.549 traddr: 10.0.0.2 00:15:11.549 eflags: explicit discovery connections, duplicate discovery information 00:15:11.549 sectype: none 00:15:11.549 =====Discovery Log Entry 1====== 00:15:11.549 trtype: tcp 00:15:11.549 adrfam: ipv4 00:15:11.549 subtype: nvme subsystem 00:15:11.549 treq: not required 00:15:11.549 portid: 0 00:15:11.549 trsvcid: 4420 00:15:11.549 subnqn: nqn.2016-06.io.spdk:cnode1 00:15:11.549 traddr: 10.0.0.2 00:15:11.549 eflags: none 00:15:11.549 sectype: none 00:15:11.549 01:36:24 -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:15:11.549 01:36:24 -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:15:11.549 01:36:24 -- nvmf/common.sh@510 -- # local dev _ 00:15:11.549 01:36:24 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:11.549 01:36:24 -- nvmf/common.sh@509 -- # nvme list 00:15:11.549 01:36:24 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:15:11.549 01:36:24 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:11.549 01:36:24 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:15:11.549 01:36:24 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:11.549 01:36:24 -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:15:11.549 01:36:24 -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:12.115 01:36:25 -- 
target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:12.115 01:36:25 -- common/autotest_common.sh@1177 -- # local i=0 00:15:12.115 01:36:25 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:15:12.115 01:36:25 -- common/autotest_common.sh@1179 -- # [[ -n 2 ]] 00:15:12.115 01:36:25 -- common/autotest_common.sh@1180 -- # nvme_device_counter=2 00:15:12.116 01:36:25 -- common/autotest_common.sh@1184 -- # sleep 2 00:15:14.644 01:36:27 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:15:14.644 01:36:27 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:15:14.644 01:36:27 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:15:14.644 01:36:27 -- common/autotest_common.sh@1186 -- # nvme_devices=2 00:15:14.644 01:36:27 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:15:14.644 01:36:27 -- common/autotest_common.sh@1187 -- # return 0 00:15:14.644 01:36:27 -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:15:14.644 01:36:27 -- nvmf/common.sh@510 -- # local dev _ 00:15:14.644 01:36:27 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:14.644 01:36:27 -- nvmf/common.sh@509 -- # nvme list 00:15:14.644 01:36:27 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:15:14.644 01:36:27 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:14.644 01:36:27 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:15:14.644 01:36:27 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:14.644 01:36:27 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:14.644 01:36:27 -- nvmf/common.sh@514 -- # echo /dev/nvme0n2 00:15:14.644 01:36:27 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:14.644 01:36:27 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:14.644 01:36:27 -- nvmf/common.sh@514 -- # echo /dev/nvme0n1 00:15:14.644 01:36:27 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:14.644 01:36:27 -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 
00:15:14.644 /dev/nvme0n1 ]] 00:15:14.644 01:36:27 -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:15:14.644 01:36:27 -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:15:14.644 01:36:27 -- nvmf/common.sh@510 -- # local dev _ 00:15:14.644 01:36:27 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:14.644 01:36:27 -- nvmf/common.sh@509 -- # nvme list 00:15:14.644 01:36:27 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:15:14.644 01:36:27 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:14.644 01:36:27 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:15:14.644 01:36:27 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:14.644 01:36:27 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:14.644 01:36:27 -- nvmf/common.sh@514 -- # echo /dev/nvme0n2 00:15:14.644 01:36:27 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:14.644 01:36:27 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:14.644 01:36:27 -- nvmf/common.sh@514 -- # echo /dev/nvme0n1 00:15:14.644 01:36:27 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:14.644 01:36:27 -- target/nvme_cli.sh@59 -- # nvme_num=2 00:15:14.644 01:36:27 -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:14.644 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:14.644 01:36:27 -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:14.644 01:36:27 -- common/autotest_common.sh@1198 -- # local i=0 00:15:14.644 01:36:27 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:15:14.644 01:36:27 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:14.644 01:36:27 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:15:14.644 01:36:27 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:14.644 01:36:27 -- common/autotest_common.sh@1210 -- # return 0 00:15:14.644 01:36:27 -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 
00:15:14.644 01:36:27 -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:14.644 01:36:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:14.644 01:36:27 -- common/autotest_common.sh@10 -- # set +x 00:15:14.644 01:36:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:14.644 01:36:27 -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:15:14.644 01:36:27 -- target/nvme_cli.sh@70 -- # nvmftestfini 00:15:14.644 01:36:27 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:14.644 01:36:27 -- nvmf/common.sh@116 -- # sync 00:15:14.644 01:36:27 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:14.644 01:36:27 -- nvmf/common.sh@119 -- # set +e 00:15:14.644 01:36:27 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:14.644 01:36:27 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:14.644 rmmod nvme_tcp 00:15:14.644 rmmod nvme_fabrics 00:15:14.644 rmmod nvme_keyring 00:15:14.644 01:36:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:14.644 01:36:27 -- nvmf/common.sh@123 -- # set -e 00:15:14.644 01:36:27 -- nvmf/common.sh@124 -- # return 0 00:15:14.644 01:36:27 -- nvmf/common.sh@477 -- # '[' -n 3749382 ']' 00:15:14.644 01:36:27 -- nvmf/common.sh@478 -- # killprocess 3749382 00:15:14.644 01:36:27 -- common/autotest_common.sh@926 -- # '[' -z 3749382 ']' 00:15:14.644 01:36:27 -- common/autotest_common.sh@930 -- # kill -0 3749382 00:15:14.644 01:36:27 -- common/autotest_common.sh@931 -- # uname 00:15:14.644 01:36:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:14.644 01:36:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3749382 00:15:14.644 01:36:27 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:14.644 01:36:27 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:14.644 01:36:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3749382' 00:15:14.644 killing process with pid 3749382 00:15:14.644 01:36:27 -- 
common/autotest_common.sh@945 -- # kill 3749382 00:15:14.644 01:36:27 -- common/autotest_common.sh@950 -- # wait 3749382 00:15:14.644 01:36:27 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:14.644 01:36:27 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:14.644 01:36:27 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:14.644 01:36:27 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:14.644 01:36:27 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:14.644 01:36:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:14.644 01:36:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:14.644 01:36:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:17.180 01:36:29 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:15:17.180 00:15:17.180 real 0m8.642s 00:15:17.180 user 0m17.680s 00:15:17.180 sys 0m2.114s 00:15:17.180 01:36:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:17.180 01:36:29 -- common/autotest_common.sh@10 -- # set +x 00:15:17.180 ************************************ 00:15:17.180 END TEST nvmf_nvme_cli 00:15:17.180 ************************************ 00:15:17.180 01:36:29 -- nvmf/nvmf.sh@39 -- # [[ 1 -eq 1 ]] 00:15:17.180 01:36:29 -- nvmf/nvmf.sh@40 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:17.180 01:36:29 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:17.180 01:36:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:17.180 01:36:29 -- common/autotest_common.sh@10 -- # set +x 00:15:17.180 ************************************ 00:15:17.180 START TEST nvmf_vfio_user 00:15:17.180 ************************************ 00:15:17.180 01:36:29 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:17.180 * Looking for test storage... 
00:15:17.180 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:17.180 01:36:29 -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:17.180 01:36:29 -- nvmf/common.sh@7 -- # uname -s 00:15:17.180 01:36:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:17.180 01:36:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:17.180 01:36:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:17.180 01:36:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:17.180 01:36:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:17.180 01:36:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:17.180 01:36:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:17.180 01:36:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:17.180 01:36:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:17.180 01:36:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:17.180 01:36:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:17.180 01:36:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:17.180 01:36:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:17.180 01:36:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:17.180 01:36:29 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:17.180 01:36:29 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:17.180 01:36:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:17.180 01:36:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:17.180 01:36:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:17.180 01:36:29 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.180 01:36:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.180 01:36:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.180 01:36:29 -- paths/export.sh@5 -- # export PATH 00:15:17.180 01:36:29 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.180 01:36:29 -- nvmf/common.sh@46 -- # : 0 00:15:17.180 01:36:29 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:17.180 01:36:29 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:17.180 01:36:29 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:17.180 01:36:29 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:17.180 01:36:29 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:17.180 01:36:29 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:17.180 01:36:29 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:17.180 01:36:29 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:17.180 01:36:29 -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:17.180 01:36:29 -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:17.180 01:36:29 -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:15:17.180 01:36:29 -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:17.180 01:36:29 -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:17.180 01:36:29 -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:17.180 01:36:29 -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:15:17.180 01:36:29 -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:15:17.180 01:36:29 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:15:17.180 01:36:29 -- target/nvmf_vfio_user.sh@52 -- # local 
transport_args= 00:15:17.180 01:36:29 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3750327 00:15:17.180 01:36:29 -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:15:17.180 01:36:29 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3750327' 00:15:17.180 Process pid: 3750327 00:15:17.180 01:36:29 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:17.180 01:36:29 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3750327 00:15:17.180 01:36:29 -- common/autotest_common.sh@819 -- # '[' -z 3750327 ']' 00:15:17.180 01:36:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:17.180 01:36:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:17.180 01:36:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:17.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:17.180 01:36:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:17.180 01:36:29 -- common/autotest_common.sh@10 -- # set +x 00:15:17.180 [2024-07-23 01:36:29.816011] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:15:17.180 [2024-07-23 01:36:29.816103] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:17.180 EAL: No free 2048 kB hugepages reported on node 1 00:15:17.180 [2024-07-23 01:36:29.874142] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:17.180 [2024-07-23 01:36:29.956773] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:17.180 [2024-07-23 01:36:29.956933] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:17.180 [2024-07-23 01:36:29.956950] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:17.180 [2024-07-23 01:36:29.956963] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:17.180 [2024-07-23 01:36:29.957025] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:17.180 [2024-07-23 01:36:29.957081] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:17.180 [2024-07-23 01:36:29.957147] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:17.180 [2024-07-23 01:36:29.957149] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:17.746 01:36:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:17.746 01:36:30 -- common/autotest_common.sh@852 -- # return 0 00:15:17.746 01:36:30 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:18.679 01:36:31 -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:15:18.936 01:36:31 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:18.936 01:36:31 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:18.936 01:36:31 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 
$NUM_DEVICES) 00:15:18.936 01:36:31 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:18.936 01:36:31 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:19.194 Malloc1 00:15:19.194 01:36:32 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:19.452 01:36:32 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:19.710 01:36:32 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:19.979 01:36:32 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:19.979 01:36:32 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:19.979 01:36:32 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:20.274 Malloc2 00:15:20.274 01:36:33 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:20.532 01:36:33 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:20.789 01:36:33 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:21.049 01:36:33 -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:15:21.049 01:36:33 -- 
target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:15:21.049 01:36:33 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:21.049 01:36:33 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:21.049 01:36:33 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:15:21.049 01:36:33 -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:21.049 [2024-07-23 01:36:33.941437] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:15:21.049 [2024-07-23 01:36:33.941481] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3750768 ] 00:15:21.049 EAL: No free 2048 kB hugepages reported on node 1 00:15:21.049 [2024-07-23 01:36:33.976798] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:15:21.049 [2024-07-23 01:36:33.985162] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:21.049 [2024-07-23 01:36:33.985190] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f2498c7c000 00:15:21.049 [2024-07-23 01:36:33.986158] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:21.049 [2024-07-23 01:36:33.987152] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:21.049 [2024-07-23 01:36:33.988155] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:21.049 [2024-07-23 01:36:33.989167] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:21.049 [2024-07-23 01:36:33.990168] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:21.049 [2024-07-23 01:36:33.991179] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:21.049 [2024-07-23 01:36:33.992183] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:21.049 [2024-07-23 01:36:33.993185] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:21.049 [2024-07-23 01:36:33.994192] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:21.049 [2024-07-23 01:36:33.994212] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f2497a30000 00:15:21.049 [2024-07-23 01:36:33.995325] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:21.049 [2024-07-23 01:36:34.011275] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:15:21.049 [2024-07-23 01:36:34.011309] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:15:21.049 [2024-07-23 01:36:34.016332] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:21.049 
[2024-07-23 01:36:34.016386] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:21.049 [2024-07-23 01:36:34.016474] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:15:21.049 [2024-07-23 01:36:34.016504] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:15:21.049 [2024-07-23 01:36:34.016514] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:15:21.049 [2024-07-23 01:36:34.017324] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:15:21.049 [2024-07-23 01:36:34.017342] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:15:21.049 [2024-07-23 01:36:34.017355] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:15:21.049 [2024-07-23 01:36:34.018331] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:21.049 [2024-07-23 01:36:34.018348] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:15:21.049 [2024-07-23 01:36:34.018362] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:15:21.049 [2024-07-23 01:36:34.019333] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:15:21.049 [2024-07-23 01:36:34.019350] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:21.049 [2024-07-23 01:36:34.020336] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:15:21.049 [2024-07-23 01:36:34.020355] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:15:21.049 [2024-07-23 01:36:34.020364] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:15:21.049 [2024-07-23 01:36:34.020375] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:21.049 [2024-07-23 01:36:34.020488] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:15:21.049 [2024-07-23 01:36:34.020497] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:21.049 [2024-07-23 01:36:34.020506] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:15:21.049 [2024-07-23 01:36:34.021359] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:15:21.049 [2024-07-23 01:36:34.022354] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:15:21.049 [2024-07-23 01:36:34.023362] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:21.049 [2024-07-23 01:36:34.024417] 
nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:21.049 [2024-07-23 01:36:34.025373] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:15:21.049 [2024-07-23 01:36:34.025390] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:21.049 [2024-07-23 01:36:34.025399] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:15:21.049 [2024-07-23 01:36:34.025422] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:15:21.049 [2024-07-23 01:36:34.025435] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:15:21.049 [2024-07-23 01:36:34.025456] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:21.049 [2024-07-23 01:36:34.025466] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:21.049 [2024-07-23 01:36:34.025485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:21.049 [2024-07-23 01:36:34.025561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:21.049 [2024-07-23 01:36:34.025577] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:15:21.049 [2024-07-23 01:36:34.025585] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:15:21.049 [2024-07-23 01:36:34.025592] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:15:21.049 [2024-07-23 01:36:34.025600] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:21.049 [2024-07-23 01:36:34.025608] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:15:21.049 [2024-07-23 01:36:34.025638] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:15:21.049 [2024-07-23 01:36:34.025647] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:15:21.049 [2024-07-23 01:36:34.025664] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:15:21.049 [2024-07-23 01:36:34.025681] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:21.049 [2024-07-23 01:36:34.025704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:21.049 [2024-07-23 01:36:34.025725] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:21.049 [2024-07-23 01:36:34.025738] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:21.050 [2024-07-23 01:36:34.025750] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:21.050 [2024-07-23 
01:36:34.025762] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:21.050 [2024-07-23 01:36:34.025770] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:15:21.050 [2024-07-23 01:36:34.025785] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:21.050 [2024-07-23 01:36:34.025799] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:21.050 [2024-07-23 01:36:34.025811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:21.050 [2024-07-23 01:36:34.025821] nvme_ctrlr.c:2878:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:15:21.050 [2024-07-23 01:36:34.025829] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:21.050 [2024-07-23 01:36:34.025840] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:15:21.050 [2024-07-23 01:36:34.025854] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:15:21.050 [2024-07-23 01:36:34.025868] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:21.050 [2024-07-23 01:36:34.025880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 
dnr:0 00:15:21.050 [2024-07-23 01:36:34.025957] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:15:21.050 [2024-07-23 01:36:34.025972] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:15:21.050 [2024-07-23 01:36:34.025985] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:21.050 [2024-07-23 01:36:34.025993] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:21.050 [2024-07-23 01:36:34.026003] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:21.050 [2024-07-23 01:36:34.026021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:21.050 [2024-07-23 01:36:34.026043] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:15:21.050 [2024-07-23 01:36:34.026060] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:15:21.050 [2024-07-23 01:36:34.026074] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:15:21.050 [2024-07-23 01:36:34.026086] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:21.050 [2024-07-23 01:36:34.026097] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:21.050 [2024-07-23 01:36:34.026107] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 
0x2000002fb000 PRP2 0x0 00:15:21.050 [2024-07-23 01:36:34.026125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:21.050 [2024-07-23 01:36:34.026146] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:21.050 [2024-07-23 01:36:34.026160] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:21.050 [2024-07-23 01:36:34.026172] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:21.050 [2024-07-23 01:36:34.026180] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:21.050 [2024-07-23 01:36:34.026190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:21.050 [2024-07-23 01:36:34.026203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:21.050 [2024-07-23 01:36:34.026217] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:21.050 [2024-07-23 01:36:34.026227] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:15:21.050 [2024-07-23 01:36:34.026241] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:15:21.050 [2024-07-23 01:36:34.026250] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 
ms) 00:15:21.050 [2024-07-23 01:36:34.026259] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:15:21.050 [2024-07-23 01:36:34.026267] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:15:21.050 [2024-07-23 01:36:34.026275] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:15:21.050 [2024-07-23 01:36:34.026283] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:15:21.050 [2024-07-23 01:36:34.026306] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:21.050 [2024-07-23 01:36:34.026324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:21.050 [2024-07-23 01:36:34.026342] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:21.050 [2024-07-23 01:36:34.026354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:21.050 [2024-07-23 01:36:34.026369] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:21.050 [2024-07-23 01:36:34.026384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:21.050 [2024-07-23 01:36:34.026400] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:21.050 [2024-07-23 01:36:34.026411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:21.050 [2024-07-23 01:36:34.026427] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:21.050 [2024-07-23 01:36:34.026440] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:21.050 [2024-07-23 01:36:34.026446] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:21.050 [2024-07-23 01:36:34.026452] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:21.050 [2024-07-23 01:36:34.026462] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:21.050 [2024-07-23 01:36:34.026473] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:21.050 [2024-07-23 01:36:34.026481] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:21.050 [2024-07-23 01:36:34.026490] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:21.050 [2024-07-23 01:36:34.026501] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:21.050 [2024-07-23 01:36:34.026509] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:21.050 [2024-07-23 01:36:34.026517] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:21.050 [2024-07-23 01:36:34.026529] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:21.050 [2024-07-23 01:36:34.026537] 
nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:21.050 [2024-07-23 01:36:34.026546] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:21.050 [2024-07-23 01:36:34.026557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:21.050 [2024-07-23 01:36:34.026577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:21.050 [2024-07-23 01:36:34.026592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:21.050 [2024-07-23 01:36:34.026628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:21.050 ===================================================== 00:15:21.050 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:21.050 ===================================================== 00:15:21.050 Controller Capabilities/Features 00:15:21.050 ================================ 00:15:21.050 Vendor ID: 4e58 00:15:21.050 Subsystem Vendor ID: 4e58 00:15:21.050 Serial Number: SPDK1 00:15:21.050 Model Number: SPDK bdev Controller 00:15:21.050 Firmware Version: 24.01.1 00:15:21.050 Recommended Arb Burst: 6 00:15:21.050 IEEE OUI Identifier: 8d 6b 50 00:15:21.050 Multi-path I/O 00:15:21.050 May have multiple subsystem ports: Yes 00:15:21.050 May have multiple controllers: Yes 00:15:21.050 Associated with SR-IOV VF: No 00:15:21.050 Max Data Transfer Size: 131072 00:15:21.050 Max Number of Namespaces: 32 00:15:21.050 Max Number of I/O Queues: 127 00:15:21.050 NVMe Specification Version (VS): 1.3 00:15:21.050 NVMe Specification Version (Identify): 1.3 00:15:21.050 Maximum Queue Entries: 256 00:15:21.050 
Contiguous Queues Required: Yes 00:15:21.050 Arbitration Mechanisms Supported 00:15:21.051 Weighted Round Robin: Not Supported 00:15:21.051 Vendor Specific: Not Supported 00:15:21.051 Reset Timeout: 15000 ms 00:15:21.051 Doorbell Stride: 4 bytes 00:15:21.051 NVM Subsystem Reset: Not Supported 00:15:21.051 Command Sets Supported 00:15:21.051 NVM Command Set: Supported 00:15:21.051 Boot Partition: Not Supported 00:15:21.051 Memory Page Size Minimum: 4096 bytes 00:15:21.051 Memory Page Size Maximum: 4096 bytes 00:15:21.051 Persistent Memory Region: Not Supported 00:15:21.051 Optional Asynchronous Events Supported 00:15:21.051 Namespace Attribute Notices: Supported 00:15:21.051 Firmware Activation Notices: Not Supported 00:15:21.051 ANA Change Notices: Not Supported 00:15:21.051 PLE Aggregate Log Change Notices: Not Supported 00:15:21.051 LBA Status Info Alert Notices: Not Supported 00:15:21.051 EGE Aggregate Log Change Notices: Not Supported 00:15:21.051 Normal NVM Subsystem Shutdown event: Not Supported 00:15:21.051 Zone Descriptor Change Notices: Not Supported 00:15:21.051 Discovery Log Change Notices: Not Supported 00:15:21.051 Controller Attributes 00:15:21.051 128-bit Host Identifier: Supported 00:15:21.051 Non-Operational Permissive Mode: Not Supported 00:15:21.051 NVM Sets: Not Supported 00:15:21.051 Read Recovery Levels: Not Supported 00:15:21.051 Endurance Groups: Not Supported 00:15:21.051 Predictable Latency Mode: Not Supported 00:15:21.051 Traffic Based Keep ALive: Not Supported 00:15:21.051 Namespace Granularity: Not Supported 00:15:21.051 SQ Associations: Not Supported 00:15:21.051 UUID List: Not Supported 00:15:21.051 Multi-Domain Subsystem: Not Supported 00:15:21.051 Fixed Capacity Management: Not Supported 00:15:21.051 Variable Capacity Management: Not Supported 00:15:21.051 Delete Endurance Group: Not Supported 00:15:21.051 Delete NVM Set: Not Supported 00:15:21.051 Extended LBA Formats Supported: Not Supported 00:15:21.051 Flexible Data Placement 
Supported: Not Supported 00:15:21.051 00:15:21.051 Controller Memory Buffer Support 00:15:21.051 ================================ 00:15:21.051 Supported: No 00:15:21.051 00:15:21.051 Persistent Memory Region Support 00:15:21.051 ================================ 00:15:21.051 Supported: No 00:15:21.051 00:15:21.051 Admin Command Set Attributes 00:15:21.051 ============================ 00:15:21.051 Security Send/Receive: Not Supported 00:15:21.051 Format NVM: Not Supported 00:15:21.051 Firmware Activate/Download: Not Supported 00:15:21.051 Namespace Management: Not Supported 00:15:21.051 Device Self-Test: Not Supported 00:15:21.051 Directives: Not Supported 00:15:21.051 NVMe-MI: Not Supported 00:15:21.051 Virtualization Management: Not Supported 00:15:21.051 Doorbell Buffer Config: Not Supported 00:15:21.051 Get LBA Status Capability: Not Supported 00:15:21.051 Command & Feature Lockdown Capability: Not Supported 00:15:21.051 Abort Command Limit: 4 00:15:21.051 Async Event Request Limit: 4 00:15:21.051 Number of Firmware Slots: N/A 00:15:21.051 Firmware Slot 1 Read-Only: N/A 00:15:21.051 Firmware Activation Without Reset: N/A 00:15:21.051 Multiple Update Detection Support: N/A 00:15:21.051 Firmware Update Granularity: No Information Provided 00:15:21.051 Per-Namespace SMART Log: No 00:15:21.051 Asymmetric Namespace Access Log Page: Not Supported 00:15:21.051 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:15:21.051 Command Effects Log Page: Supported 00:15:21.051 Get Log Page Extended Data: Supported 00:15:21.051 Telemetry Log Pages: Not Supported 00:15:21.051 Persistent Event Log Pages: Not Supported 00:15:21.051 Supported Log Pages Log Page: May Support 00:15:21.051 Commands Supported & Effects Log Page: Not Supported 00:15:21.051 Feature Identifiers & Effects Log Page:May Support 00:15:21.051 NVMe-MI Commands & Effects Log Page: May Support 00:15:21.051 Data Area 4 for Telemetry Log: Not Supported 00:15:21.051 Error Log Page Entries Supported: 128 00:15:21.051 Keep 
Alive: Supported 00:15:21.051 Keep Alive Granularity: 10000 ms 00:15:21.051 00:15:21.051 NVM Command Set Attributes 00:15:21.051 ========================== 00:15:21.051 Submission Queue Entry Size 00:15:21.051 Max: 64 00:15:21.051 Min: 64 00:15:21.051 Completion Queue Entry Size 00:15:21.051 Max: 16 00:15:21.051 Min: 16 00:15:21.051 Number of Namespaces: 32 00:15:21.051 Compare Command: Supported 00:15:21.051 Write Uncorrectable Command: Not Supported 00:15:21.051 Dataset Management Command: Supported 00:15:21.051 Write Zeroes Command: Supported 00:15:21.051 Set Features Save Field: Not Supported 00:15:21.051 Reservations: Not Supported 00:15:21.051 Timestamp: Not Supported 00:15:21.051 Copy: Supported 00:15:21.051 Volatile Write Cache: Present 00:15:21.051 Atomic Write Unit (Normal): 1 00:15:21.051 Atomic Write Unit (PFail): 1 00:15:21.051 Atomic Compare & Write Unit: 1 00:15:21.051 Fused Compare & Write: Supported 00:15:21.051 Scatter-Gather List 00:15:21.051 SGL Command Set: Supported (Dword aligned) 00:15:21.051 SGL Keyed: Not Supported 00:15:21.051 SGL Bit Bucket Descriptor: Not Supported 00:15:21.051 SGL Metadata Pointer: Not Supported 00:15:21.051 Oversized SGL: Not Supported 00:15:21.051 SGL Metadata Address: Not Supported 00:15:21.051 SGL Offset: Not Supported 00:15:21.051 Transport SGL Data Block: Not Supported 00:15:21.051 Replay Protected Memory Block: Not Supported 00:15:21.051 00:15:21.051 Firmware Slot Information 00:15:21.051 ========================= 00:15:21.051 Active slot: 1 00:15:21.051 Slot 1 Firmware Revision: 24.01.1 00:15:21.051 00:15:21.051 00:15:21.051 Commands Supported and Effects 00:15:21.051 ============================== 00:15:21.051 Admin Commands 00:15:21.051 -------------- 00:15:21.051 Get Log Page (02h): Supported 00:15:21.051 Identify (06h): Supported 00:15:21.051 Abort (08h): Supported 00:15:21.051 Set Features (09h): Supported 00:15:21.051 Get Features (0Ah): Supported 00:15:21.051 Asynchronous Event Request (0Ch): Supported 
00:15:21.051 Keep Alive (18h): Supported 00:15:21.051 I/O Commands 00:15:21.051 ------------ 00:15:21.051 Flush (00h): Supported LBA-Change 00:15:21.051 Write (01h): Supported LBA-Change 00:15:21.051 Read (02h): Supported 00:15:21.051 Compare (05h): Supported 00:15:21.051 Write Zeroes (08h): Supported LBA-Change 00:15:21.051 Dataset Management (09h): Supported LBA-Change 00:15:21.051 Copy (19h): Supported LBA-Change 00:15:21.051 Unknown (79h): Supported LBA-Change 00:15:21.051 Unknown (7Ah): Supported 00:15:21.051 00:15:21.051 Error Log 00:15:21.051 ========= 00:15:21.051 00:15:21.051 Arbitration 00:15:21.051 =========== 00:15:21.051 Arbitration Burst: 1 00:15:21.051 00:15:21.051 Power Management 00:15:21.051 ================ 00:15:21.051 Number of Power States: 1 00:15:21.051 Current Power State: Power State #0 00:15:21.051 Power State #0: 00:15:21.051 Max Power: 0.00 W 00:15:21.051 Non-Operational State: Operational 00:15:21.051 Entry Latency: Not Reported 00:15:21.051 Exit Latency: Not Reported 00:15:21.051 Relative Read Throughput: 0 00:15:21.051 Relative Read Latency: 0 00:15:21.051 Relative Write Throughput: 0 00:15:21.051 Relative Write Latency: 0 00:15:21.051 Idle Power: Not Reported 00:15:21.051 Active Power: Not Reported 00:15:21.051 Non-Operational Permissive Mode: Not Supported 00:15:21.051 00:15:21.051 Health Information 00:15:21.051 ================== 00:15:21.051 Critical Warnings: 00:15:21.051 Available Spare Space: OK 00:15:21.051 Temperature: OK 00:15:21.051 Device Reliability: OK 00:15:21.051 Read Only: No 00:15:21.051 Volatile Memory Backup: OK 00:15:21.051 Current Temperature: 0 Kelvin[2024-07-23 01:36:34.026756] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:21.051 [2024-07-23 01:36:34.026773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:21.051 [2024-07-23 01:36:34.026808] 
nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:15:21.051 [2024-07-23 01:36:34.026826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:21.051 [2024-07-23 01:36:34.026837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:21.051 [2024-07-23 01:36:34.026847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:21.052 [2024-07-23 01:36:34.026857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:21.052 [2024-07-23 01:36:34.030625] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:21.052 [2024-07-23 01:36:34.030647] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:15:21.052 [2024-07-23 01:36:34.031476] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:15:21.052 [2024-07-23 01:36:34.031490] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:15:21.052 [2024-07-23 01:36:34.032435] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:15:21.052 [2024-07-23 01:36:34.032457] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:15:21.052 [2024-07-23 01:36:34.032509] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:15:21.052 
[2024-07-23 01:36:34.034481] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:21.052 (-273 Celsius) 00:15:21.052 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:21.052 Available Spare: 0% 00:15:21.052 Available Spare Threshold: 0% 00:15:21.052 Life Percentage Used: 0% 00:15:21.052 Data Units Read: 0 00:15:21.052 Data Units Written: 0 00:15:21.052 Host Read Commands: 0 00:15:21.052 Host Write Commands: 0 00:15:21.052 Controller Busy Time: 0 minutes 00:15:21.052 Power Cycles: 0 00:15:21.052 Power On Hours: 0 hours 00:15:21.052 Unsafe Shutdowns: 0 00:15:21.052 Unrecoverable Media Errors: 0 00:15:21.052 Lifetime Error Log Entries: 0 00:15:21.052 Warning Temperature Time: 0 minutes 00:15:21.052 Critical Temperature Time: 0 minutes 00:15:21.052 00:15:21.052 Number of Queues 00:15:21.052 ================ 00:15:21.052 Number of I/O Submission Queues: 127 00:15:21.052 Number of I/O Completion Queues: 127 00:15:21.052 00:15:21.052 Active Namespaces 00:15:21.052 ================= 00:15:21.052 Namespace ID:1 00:15:21.052 Error Recovery Timeout: Unlimited 00:15:21.052 Command Set Identifier: NVM (00h) 00:15:21.052 Deallocate: Supported 00:15:21.052 Deallocated/Unwritten Error: Not Supported 00:15:21.052 Deallocated Read Value: Unknown 00:15:21.052 Deallocate in Write Zeroes: Not Supported 00:15:21.052 Deallocated Guard Field: 0xFFFF 00:15:21.052 Flush: Supported 00:15:21.052 Reservation: Supported 00:15:21.052 Namespace Sharing Capabilities: Multiple Controllers 00:15:21.052 Size (in LBAs): 131072 (0GiB) 00:15:21.052 Capacity (in LBAs): 131072 (0GiB) 00:15:21.052 Utilization (in LBAs): 131072 (0GiB) 00:15:21.052 NGUID: EED5DED3800D4FBBBE7B3BF18BA4733B 00:15:21.052 UUID: eed5ded3-800d-4fbb-be7b-3bf18ba4733b 00:15:21.052 Thin Provisioning: Not Supported 00:15:21.052 Per-NS Atomic Units: Yes 00:15:21.052 Atomic Boundary Size (Normal): 0 00:15:21.052 Atomic Boundary Size (PFail): 0 
00:15:21.052 Atomic Boundary Offset: 0 00:15:21.052 Maximum Single Source Range Length: 65535 00:15:21.052 Maximum Copy Length: 65535 00:15:21.052 Maximum Source Range Count: 1 00:15:21.052 NGUID/EUI64 Never Reused: No 00:15:21.052 Namespace Write Protected: No 00:15:21.052 Number of LBA Formats: 1 00:15:21.052 Current LBA Format: LBA Format #00 00:15:21.052 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:21.052 00:15:21.052 01:36:34 -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:21.052 EAL: No free 2048 kB hugepages reported on node 1 00:15:26.316 Initializing NVMe Controllers 00:15:26.316 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:26.316 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:26.316 Initialization complete. Launching workers. 
00:15:26.316 ======================================================== 00:15:26.316 Latency(us) 00:15:26.316 Device Information : IOPS MiB/s Average min max 00:15:26.316 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 37029.72 144.65 3456.06 1141.64 7378.13 00:15:26.316 ======================================================== 00:15:26.316 Total : 37029.72 144.65 3456.06 1141.64 7378.13 00:15:26.316 00:15:26.316 01:36:39 -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:26.316 EAL: No free 2048 kB hugepages reported on node 1 00:15:31.577 Initializing NVMe Controllers 00:15:31.577 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:31.577 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:31.577 Initialization complete. Launching workers. 
00:15:31.577 ======================================================== 00:15:31.577 Latency(us) 00:15:31.577 Device Information : IOPS MiB/s Average min max 00:15:31.577 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16055.91 62.72 7977.40 6809.51 15140.83 00:15:31.577 ======================================================== 00:15:31.577 Total : 16055.91 62.72 7977.40 6809.51 15140.83 00:15:31.577 00:15:31.577 01:36:44 -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:31.577 EAL: No free 2048 kB hugepages reported on node 1 00:15:36.839 Initializing NVMe Controllers 00:15:36.839 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:36.839 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:36.839 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:15:36.839 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:15:36.839 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:15:36.839 Initialization complete. Launching workers. 
00:15:36.839 Starting thread on core 2 00:15:36.839 Starting thread on core 3 00:15:36.839 Starting thread on core 1 00:15:36.839 01:36:49 -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:15:37.097 EAL: No free 2048 kB hugepages reported on node 1 00:15:40.379 Initializing NVMe Controllers 00:15:40.379 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:40.379 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:40.379 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:15:40.379 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:15:40.379 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:15:40.379 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:15:40.379 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:40.379 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:40.379 Initialization complete. Launching workers. 
00:15:40.379 Starting thread on core 1 with urgent priority queue 00:15:40.379 Starting thread on core 2 with urgent priority queue 00:15:40.379 Starting thread on core 3 with urgent priority queue 00:15:40.379 Starting thread on core 0 with urgent priority queue 00:15:40.379 SPDK bdev Controller (SPDK1 ) core 0: 5082.33 IO/s 19.68 secs/100000 ios 00:15:40.379 SPDK bdev Controller (SPDK1 ) core 1: 5638.33 IO/s 17.74 secs/100000 ios 00:15:40.379 SPDK bdev Controller (SPDK1 ) core 2: 5907.33 IO/s 16.93 secs/100000 ios 00:15:40.379 SPDK bdev Controller (SPDK1 ) core 3: 5794.33 IO/s 17.26 secs/100000 ios 00:15:40.379 ======================================================== 00:15:40.379 00:15:40.379 01:36:53 -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:40.379 EAL: No free 2048 kB hugepages reported on node 1 00:15:40.636 Initializing NVMe Controllers 00:15:40.636 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:40.636 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:40.636 Namespace ID: 1 size: 0GB 00:15:40.636 Initialization complete. 00:15:40.636 INFO: using host memory buffer for IO 00:15:40.636 Hello world! 00:15:40.636 01:36:53 -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:40.636 EAL: No free 2048 kB hugepages reported on node 1 00:15:42.008 Initializing NVMe Controllers 00:15:42.008 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:42.008 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:42.008 Initialization complete. Launching workers. 
00:15:42.008 submit (in ns) avg, min, max = 7395.5, 3442.2, 4021460.0 00:15:42.008 complete (in ns) avg, min, max = 20717.6, 2032.2, 4117000.0 00:15:42.008 00:15:42.008 Submit histogram 00:15:42.008 ================ 00:15:42.008 Range in us Cumulative Count 00:15:42.008 3.437 - 3.461: 0.2968% ( 41) 00:15:42.008 3.461 - 3.484: 1.2813% ( 136) 00:15:42.008 3.484 - 3.508: 3.0404% ( 243) 00:15:42.008 3.508 - 3.532: 7.4779% ( 613) 00:15:42.008 3.532 - 3.556: 14.0437% ( 907) 00:15:42.008 3.556 - 3.579: 22.1949% ( 1126) 00:15:42.008 3.579 - 3.603: 29.7307% ( 1041) 00:15:42.008 3.603 - 3.627: 37.8819% ( 1126) 00:15:42.008 3.627 - 3.650: 45.8014% ( 1094) 00:15:42.008 3.650 - 3.674: 52.3382% ( 903) 00:15:42.008 3.674 - 3.698: 56.7106% ( 604) 00:15:42.008 3.698 - 3.721: 60.1346% ( 473) 00:15:42.008 3.721 - 3.745: 63.1316% ( 414) 00:15:42.008 3.745 - 3.769: 66.7294% ( 497) 00:15:42.008 3.769 - 3.793: 70.6674% ( 544) 00:15:42.008 3.793 - 3.816: 74.7430% ( 563) 00:15:42.008 3.816 - 3.840: 78.4928% ( 518) 00:15:42.008 3.840 - 3.864: 82.0834% ( 496) 00:15:42.008 3.864 - 3.887: 85.0007% ( 403) 00:15:42.009 3.887 - 3.911: 87.1869% ( 302) 00:15:42.009 3.911 - 3.935: 88.5913% ( 194) 00:15:42.009 3.935 - 3.959: 89.7857% ( 165) 00:15:42.009 3.959 - 3.982: 90.8861% ( 152) 00:15:42.009 3.982 - 4.006: 91.8778% ( 137) 00:15:42.009 4.006 - 4.030: 92.7031% ( 114) 00:15:42.009 4.030 - 4.053: 93.6079% ( 125) 00:15:42.009 4.053 - 4.077: 94.3391% ( 101) 00:15:42.009 4.077 - 4.101: 94.9399% ( 83) 00:15:42.009 4.101 - 4.124: 95.4901% ( 76) 00:15:42.009 4.124 - 4.148: 95.9534% ( 64) 00:15:42.009 4.148 - 4.172: 96.2140% ( 36) 00:15:42.009 4.172 - 4.196: 96.3660% ( 21) 00:15:42.009 4.196 - 4.219: 96.5687% ( 28) 00:15:42.009 4.219 - 4.243: 96.7352% ( 23) 00:15:42.009 4.243 - 4.267: 96.8221% ( 12) 00:15:42.009 4.267 - 4.290: 96.8945% ( 10) 00:15:42.009 4.290 - 4.314: 96.9741% ( 11) 00:15:42.009 4.314 - 4.338: 97.0248% ( 7) 00:15:42.009 4.338 - 4.361: 97.0827% ( 8) 00:15:42.009 4.361 - 4.385: 97.1261% ( 
6) 00:15:42.009 4.385 - 4.409: 97.1695% ( 6) 00:15:42.009 4.409 - 4.433: 97.1840% ( 2) 00:15:42.009 4.433 - 4.456: 97.1985% ( 2) 00:15:42.009 4.456 - 4.480: 97.2057% ( 1) 00:15:42.009 4.504 - 4.527: 97.2202% ( 2) 00:15:42.009 4.551 - 4.575: 97.2347% ( 2) 00:15:42.009 4.599 - 4.622: 97.2854% ( 7) 00:15:42.009 4.622 - 4.646: 97.2998% ( 2) 00:15:42.009 4.646 - 4.670: 97.3143% ( 2) 00:15:42.009 4.670 - 4.693: 97.3288% ( 2) 00:15:42.009 4.693 - 4.717: 97.3505% ( 3) 00:15:42.009 4.717 - 4.741: 97.4012% ( 7) 00:15:42.009 4.741 - 4.764: 97.4446% ( 6) 00:15:42.009 4.764 - 4.788: 97.4519% ( 1) 00:15:42.009 4.788 - 4.812: 97.4736% ( 3) 00:15:42.009 4.812 - 4.836: 97.5098% ( 5) 00:15:42.009 4.836 - 4.859: 97.5966% ( 12) 00:15:42.009 4.859 - 4.883: 97.6401% ( 6) 00:15:42.009 4.883 - 4.907: 97.7125% ( 10) 00:15:42.009 4.907 - 4.930: 97.7776% ( 9) 00:15:42.009 4.930 - 4.954: 97.8138% ( 5) 00:15:42.009 4.954 - 4.978: 97.8428% ( 4) 00:15:42.009 4.978 - 5.001: 97.8717% ( 4) 00:15:42.009 5.001 - 5.025: 97.8934% ( 3) 00:15:42.009 5.025 - 5.049: 97.9079% ( 2) 00:15:42.009 5.049 - 5.073: 97.9296% ( 3) 00:15:42.009 5.073 - 5.096: 97.9586% ( 4) 00:15:42.009 5.096 - 5.120: 97.9658% ( 1) 00:15:42.009 5.120 - 5.144: 97.9875% ( 3) 00:15:42.009 5.144 - 5.167: 98.0020% ( 2) 00:15:42.009 5.167 - 5.191: 98.0165% ( 2) 00:15:42.009 5.191 - 5.215: 98.0237% ( 1) 00:15:42.009 5.239 - 5.262: 98.0310% ( 1) 00:15:42.009 5.286 - 5.310: 98.0382% ( 1) 00:15:42.009 5.310 - 5.333: 98.0455% ( 1) 00:15:42.009 5.333 - 5.357: 98.0527% ( 1) 00:15:42.009 5.428 - 5.452: 98.0599% ( 1) 00:15:42.009 5.523 - 5.547: 98.0672% ( 1) 00:15:42.009 5.547 - 5.570: 98.0744% ( 1) 00:15:42.009 5.713 - 5.736: 98.0889% ( 2) 00:15:42.009 5.760 - 5.784: 98.0961% ( 1) 00:15:42.009 6.068 - 6.116: 98.1106% ( 2) 00:15:42.009 6.163 - 6.210: 98.1251% ( 2) 00:15:42.009 6.258 - 6.305: 98.1323% ( 1) 00:15:42.009 6.305 - 6.353: 98.1396% ( 1) 00:15:42.009 6.353 - 6.400: 98.1468% ( 1) 00:15:42.009 6.447 - 6.495: 98.1540% ( 1) 00:15:42.009 6.495 - 
6.542: 98.1613% ( 1) 00:15:42.009 6.542 - 6.590: 98.1758% ( 2) 00:15:42.009 6.637 - 6.684: 98.1830% ( 1) 00:15:42.009 6.684 - 6.732: 98.1975% ( 2) 00:15:42.009 6.921 - 6.969: 98.2047% ( 1) 00:15:42.009 6.969 - 7.016: 98.2120% ( 1) 00:15:42.009 7.016 - 7.064: 98.2337% ( 3) 00:15:42.009 7.111 - 7.159: 98.2482% ( 2) 00:15:42.009 7.159 - 7.206: 98.2699% ( 3) 00:15:42.009 7.206 - 7.253: 98.2771% ( 1) 00:15:42.009 7.301 - 7.348: 98.2843% ( 1) 00:15:42.009 7.396 - 7.443: 98.2988% ( 2) 00:15:42.009 7.443 - 7.490: 98.3061% ( 1) 00:15:42.009 7.490 - 7.538: 98.3205% ( 2) 00:15:42.009 7.585 - 7.633: 98.3423% ( 3) 00:15:42.009 7.633 - 7.680: 98.3495% ( 1) 00:15:42.009 7.680 - 7.727: 98.3567% ( 1) 00:15:42.009 7.727 - 7.775: 98.3640% ( 1) 00:15:42.009 7.870 - 7.917: 98.3785% ( 2) 00:15:42.009 7.917 - 7.964: 98.3929% ( 2) 00:15:42.009 8.012 - 8.059: 98.4002% ( 1) 00:15:42.009 8.059 - 8.107: 98.4074% ( 1) 00:15:42.009 8.201 - 8.249: 98.4219% ( 2) 00:15:42.009 8.296 - 8.344: 98.4291% ( 1) 00:15:42.009 8.344 - 8.391: 98.4436% ( 2) 00:15:42.009 8.439 - 8.486: 98.4581% ( 2) 00:15:42.009 8.486 - 8.533: 98.4726% ( 2) 00:15:42.009 8.533 - 8.581: 98.4798% ( 1) 00:15:42.009 8.581 - 8.628: 98.4870% ( 1) 00:15:42.009 8.676 - 8.723: 98.4943% ( 1) 00:15:42.009 8.913 - 8.960: 98.5015% ( 1) 00:15:42.009 8.960 - 9.007: 98.5088% ( 1) 00:15:42.009 9.055 - 9.102: 98.5232% ( 2) 00:15:42.009 9.387 - 9.434: 98.5305% ( 1) 00:15:42.009 9.481 - 9.529: 98.5377% ( 1) 00:15:42.009 9.529 - 9.576: 98.5450% ( 1) 00:15:42.009 9.908 - 9.956: 98.5522% ( 1) 00:15:42.009 10.050 - 10.098: 98.5667% ( 2) 00:15:42.009 10.098 - 10.145: 98.5811% ( 2) 00:15:42.009 10.145 - 10.193: 98.5884% ( 1) 00:15:42.009 10.240 - 10.287: 98.5956% ( 1) 00:15:42.009 10.335 - 10.382: 98.6029% ( 1) 00:15:42.009 10.430 - 10.477: 98.6101% ( 1) 00:15:42.009 10.524 - 10.572: 98.6173% ( 1) 00:15:42.009 10.572 - 10.619: 98.6246% ( 1) 00:15:42.009 10.761 - 10.809: 98.6391% ( 2) 00:15:42.009 10.904 - 10.951: 98.6463% ( 1) 00:15:42.009 10.951 - 
10.999: 98.6535% ( 1) 00:15:42.009 11.093 - 11.141: 98.6608% ( 1) 00:15:42.009 11.283 - 11.330: 98.6680% ( 1) 00:15:42.009 11.567 - 11.615: 98.6825% ( 2) 00:15:42.009 12.136 - 12.231: 98.6897% ( 1) 00:15:42.009 12.326 - 12.421: 98.7042% ( 2) 00:15:42.009 12.516 - 12.610: 98.7115% ( 1) 00:15:42.009 12.610 - 12.705: 98.7332% ( 3) 00:15:42.009 12.705 - 12.800: 98.7404% ( 1) 00:15:42.009 12.895 - 12.990: 98.7476% ( 1) 00:15:42.009 12.990 - 13.084: 98.7549% ( 1) 00:15:42.009 13.179 - 13.274: 98.7621% ( 1) 00:15:42.009 13.274 - 13.369: 98.7766% ( 2) 00:15:42.009 13.369 - 13.464: 98.7838% ( 1) 00:15:42.009 13.464 - 13.559: 98.8056% ( 3) 00:15:42.009 13.559 - 13.653: 98.8128% ( 1) 00:15:42.009 13.748 - 13.843: 98.8200% ( 1) 00:15:42.009 13.938 - 14.033: 98.8273% ( 1) 00:15:42.009 14.127 - 14.222: 98.8418% ( 2) 00:15:42.009 14.222 - 14.317: 98.8707% ( 4) 00:15:42.009 14.601 - 14.696: 98.8779% ( 1) 00:15:42.009 14.791 - 14.886: 98.8852% ( 1) 00:15:42.009 14.886 - 14.981: 98.8924% ( 1) 00:15:42.009 14.981 - 15.076: 98.8997% ( 1) 00:15:42.009 15.265 - 15.360: 98.9069% ( 1) 00:15:42.009 16.877 - 16.972: 98.9214% ( 2) 00:15:42.009 17.067 - 17.161: 98.9359% ( 2) 00:15:42.009 17.161 - 17.256: 98.9576% ( 3) 00:15:42.009 17.351 - 17.446: 98.9865% ( 4) 00:15:42.009 17.446 - 17.541: 99.0010% ( 2) 00:15:42.009 17.541 - 17.636: 99.0227% ( 3) 00:15:42.009 17.636 - 17.730: 99.0734% ( 7) 00:15:42.009 17.730 - 17.825: 99.1096% ( 5) 00:15:42.009 17.825 - 17.920: 99.1458% ( 5) 00:15:42.009 17.920 - 18.015: 99.1892% ( 6) 00:15:42.009 18.015 - 18.110: 99.2689% ( 11) 00:15:42.009 18.110 - 18.204: 99.3195% ( 7) 00:15:42.009 18.204 - 18.299: 99.3847% ( 9) 00:15:42.009 18.299 - 18.394: 99.4716% ( 12) 00:15:42.009 18.394 - 18.489: 99.5077% ( 5) 00:15:42.009 18.489 - 18.584: 99.5584% ( 7) 00:15:42.009 18.584 - 18.679: 99.6236% ( 9) 00:15:42.009 18.679 - 18.773: 99.6308% ( 1) 00:15:42.009 18.773 - 18.868: 99.6815% ( 7) 00:15:42.009 18.868 - 18.963: 99.7032% ( 3) 00:15:42.009 19.058 - 19.153: 99.7249% 
( 3) 00:15:42.009 19.153 - 19.247: 99.7394% ( 2) 00:15:42.009 19.247 - 19.342: 99.7466% ( 1) 00:15:42.009 19.437 - 19.532: 99.7539% ( 1) 00:15:42.009 19.532 - 19.627: 99.7611% ( 1) 00:15:42.009 19.721 - 19.816: 99.7756% ( 2) 00:15:42.009 19.911 - 20.006: 99.7828% ( 1) 00:15:42.009 20.196 - 20.290: 99.7901% ( 1) 00:15:42.009 20.385 - 20.480: 99.7973% ( 1) 00:15:42.009 20.670 - 20.764: 99.8118% ( 2) 00:15:42.009 20.764 - 20.859: 99.8190% ( 1) 00:15:42.009 20.954 - 21.049: 99.8263% ( 1) 00:15:42.009 21.523 - 21.618: 99.8335% ( 1) 00:15:42.009 22.281 - 22.376: 99.8407% ( 1) 00:15:42.009 22.756 - 22.850: 99.8480% ( 1) 00:15:42.009 23.135 - 23.230: 99.8552% ( 1) 00:15:42.010 24.083 - 24.178: 99.8625% ( 1) 00:15:42.010 26.738 - 26.927: 99.8697% ( 1) 00:15:42.010 27.686 - 27.876: 99.8769% ( 1) 00:15:42.010 28.634 - 28.824: 99.8842% ( 1) 00:15:42.010 29.013 - 29.203: 99.8914% ( 1) 00:15:42.010 29.203 - 29.393: 99.8987% ( 1) 00:15:42.010 29.393 - 29.582: 99.9131% ( 2) 00:15:42.010 3980.705 - 4004.978: 99.9783% ( 9) 00:15:42.010 4004.978 - 4029.250: 100.0000% ( 3) 00:15:42.010 00:15:42.010 Complete histogram 00:15:42.010 ================== 00:15:42.010 Range in us Cumulative Count 00:15:42.010 2.027 - 2.039: 0.4705% ( 65) 00:15:42.010 2.039 - 2.050: 16.4833% ( 2212) 00:15:42.010 2.050 - 2.062: 23.4545% ( 963) 00:15:42.010 2.062 - 2.074: 30.1578% ( 926) 00:15:42.010 2.074 - 2.086: 57.2101% ( 3737) 00:15:42.010 2.086 - 2.098: 62.5814% ( 742) 00:15:42.010 2.098 - 2.110: 64.5939% ( 278) 00:15:42.010 2.110 - 2.121: 69.2703% ( 646) 00:15:42.010 2.121 - 2.133: 70.0738% ( 111) 00:15:42.010 2.133 - 2.145: 73.8671% ( 524) 00:15:42.010 2.145 - 2.157: 80.1868% ( 873) 00:15:42.010 2.157 - 2.169: 81.4970% ( 181) 00:15:42.010 2.169 - 2.181: 83.3502% ( 256) 00:15:42.010 2.181 - 2.193: 86.0214% ( 369) 00:15:42.010 2.193 - 2.204: 87.1000% ( 149) 00:15:42.010 2.204 - 2.216: 89.6047% ( 346) 00:15:42.010 2.216 - 2.228: 93.4849% ( 536) 00:15:42.010 2.228 - 2.240: 94.1726% ( 95) 00:15:42.010 2.240 
- 2.252: 94.8024% ( 87)
00:15:42.010 2.252 - 2.264: 95.0557% ( 35)
00:15:42.010 2.264 - 2.276: 95.3308% ( 38)
00:15:42.010 2.276 - 2.287: 95.7073% ( 52)
00:15:42.010 2.287 - 2.299: 95.8014% ( 13)
00:15:42.010 2.299 - 2.311: 95.8955% ( 13)
00:15:42.010 2.311 - 2.323: 96.0982% ( 28)
00:15:42.010 2.323 - 2.335: 96.3805% ( 39)
00:15:42.010 2.335 - 2.347: 96.6049% ( 31)
00:15:42.010 2.347 - 2.359: 96.8945% ( 40)
00:15:42.010 2.359 - 2.370: 97.2347% ( 47)
00:15:42.010 2.370 - 2.382: 97.4157% ( 25)
00:15:42.010 2.382 - 2.394: 97.5677% ( 21)
00:15:42.010 2.394 - 2.406: 97.7559% ( 26)
00:15:42.010 2.406 - 2.418: 97.8283% ( 10)
00:15:42.010 2.418 - 2.430: 97.9369% ( 15)
00:15:42.010 2.430 - 2.441: 98.0889% ( 21)
00:15:42.010 2.441 - 2.453: 98.1396% ( 7)
00:15:42.010 2.453 - 2.465: 98.2264% ( 12)
00:15:42.010 2.465 - 2.477: 98.3205% ( 13)
00:15:42.010 2.477 - 2.489: 98.3929% ( 10)
00:15:42.010 2.489 - 2.501: 98.4364% ( 6)
00:15:42.010 2.501 - 2.513: 98.5088% ( 10)
00:15:42.010 2.513 - 2.524: 98.5232% ( 2)
00:15:42.010 2.524 - 2.536: 98.5305% ( 1)
00:15:42.010 2.536 - 2.548: 98.5377% ( 1)
00:15:42.010 2.548 - 2.560: 98.5450% ( 1)
00:15:42.010 2.560 - 2.572: 98.5522% ( 1)
00:15:42.010 2.584 - 2.596: 98.5594% ( 1)
00:15:42.010 2.596 - 2.607: 98.5667% ( 1)
00:15:42.010 2.607 - 2.619: 98.5739% ( 1)
00:15:42.010 2.643 - 2.655: 98.5811% ( 1)
00:15:42.010 2.667 - 2.679: 98.5956% ( 2)
00:15:42.010 2.679 - 2.690: 98.6029% ( 1)
00:15:42.010 2.690 - 2.702: 98.6101% ( 1)
00:15:42.010 2.714 - 2.726: 98.6173% ( 1)
00:15:42.010 2.750 - 2.761: 98.6318% ( 2)
00:15:42.010 3.153 - 3.176: 98.6391% ( 1)
00:15:42.010 3.247 - 3.271: 98.6535% ( 2)
00:15:42.010 3.271 - 3.295: 98.6825% ( 4)
00:15:42.010 3.295 - 3.319: 98.6897% ( 1)
00:15:42.010 3.319 - 3.342: 98.7187% ( 4)
00:15:42.010 3.342 - 3.366: 98.7332% ( 2)
00:15:42.010 3.390 - 3.413: 98.7621% ( 4)
00:15:42.010 3.413 - 3.437: 98.7766% ( 2)
00:15:42.010 3.437 - 3.461: 98.7911% ( 2)
00:15:42.010 3.461 - 3.484: 98.8128% ( 3)
00:15:42.010 3.484 -
3.508: 98.8200% ( 1)
00:15:42.010 3.508 - 3.532: 98.8345% ( 2)
00:15:42.010 3.532 - 3.556: 98.8418% ( 1)
00:15:42.010 3.556 - 3.579: 98.8490% ( 1)
00:15:42.010 3.627 - 3.650: 98.8562% ( 1)
00:15:42.010 3.650 - 3.674: 98.8635% ( 1)
00:15:42.010 3.674 - 3.698: 98.8852% ( 3)
00:15:42.010 3.698 - 3.721: 98.8997% ( 2)
00:15:42.010 3.721 - 3.745: 98.9069% ( 1)
00:15:42.010 3.745 - 3.769: 98.9141% ( 1)
00:15:42.010 3.793 - 3.816: 98.9214% ( 1)
00:15:42.010 3.864 - 3.887: 98.9286% ( 1)
00:15:42.010 3.887 - 3.911: 98.9359% ( 1)
00:15:42.010 3.935 - 3.959: 98.9431% ( 1)
00:15:42.010 3.982 - 4.006: 98.9503% ( 1)
00:15:42.010 4.124 - 4.148: 98.9576% ( 1)
00:15:42.010 4.859 - 4.883: 98.9648% ( 1)
00:15:42.010 4.883 - 4.907: 98.9721% ( 1)
00:15:42.010 4.907 - 4.930: 98.9793% ( 1)
00:15:42.010 5.049 - 5.073: 98.9865% ( 1)
00:15:42.010 5.262 - 5.286: 98.9938% ( 1)
00:15:42.010 5.357 - 5.381: 99.0010% ( 1)
00:15:42.010 5.926 - 5.950: 99.0083% ( 1)
00:15:42.010 5.997 - 6.021: 99.0155% ( 1)
00:15:42.010 6.068 - 6.116: 99.0227% ( 1)
00:15:42.010 6.258 - 6.305: 99.0300% ( 1)
00:15:42.010 6.305 - 6.353: 99.0372% ( 1)
00:15:42.010 6.400 - 6.447: 99.0444% ( 1)
00:15:42.010 6.447 - 6.495: 99.0517% ( 1)
00:15:42.010 6.779 - 6.827: 99.0589% ( 1)
00:15:42.010 6.921 - 6.969: 99.0662% ( 1)
00:15:42.010 7.301 - 7.348: 99.0734% ( 1)
00:15:42.010 10.003 - 10.050: 99.0806% ( 1)
00:15:42.010 12.421 - 12.516: 99.0879% ( 1)
00:15:42.010 15.455 - 15.550: 99.0951% ( 1)
00:15:42.010 15.550 - 15.644: 99.1024% ( 1)
00:15:42.010 15.644 - 15.739: 99.1241% ( 3)
00:15:42.010 15.739 - 15.834: 99.1386% ( 2)
00:15:42.010 15.834 - 15.929: 99.1603% ( 3)
00:15:42.010 15.929 - 16.024: 99.1748% ( 2)
00:15:42.010 16.024 - 16.119: 99.2037% ( 4)
00:15:42.010 16.119 - 16.213: 99.2399% ( 5)
00:15:42.010 16.213 - 16.308: 99.2616% ( 3)
00:15:42.010 16.308 - 16.403: 99.2906% ( 4)
00:15:42.010 16.403 - 16.498: 99.3195% ( 4)
00:15:42.010 16.498 - 16.593: 99.3702% ( 7)
00:15:42.010 16.593 - 16.687: 99.4064% ( 5)
00:15:42.010
16.687 - 16.782: 99.4354% ( 4)
00:15:42.010 16.782 - 16.877: 99.4498% ( 2)
00:15:42.010 16.877 - 16.972: 99.4643% ( 2)
00:15:42.010 16.972 - 17.067: 99.4788% ( 2)
00:15:42.010 17.161 - 17.256: 99.5077% ( 4)
00:15:42.010 17.636 - 17.730: 99.5150% ( 1)
00:15:42.010 17.920 - 18.015: 99.5222% ( 1)
00:15:42.010 18.299 - 18.394: 99.5295% ( 1)
00:15:42.010 18.394 - 18.489: 99.5367% ( 1)
00:15:42.010 3980.705 - 4004.978: 99.9421% ( 56)
00:15:42.010 4004.978 - 4029.250: 99.9928% ( 7)
00:15:42.010 4102.068 - 4126.341: 100.0000% ( 1)
00:15:42.010
00:15:42.010 01:36:54 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1
00:15:42.010 01:36:54 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1
00:15:42.010 01:36:54 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1
00:15:42.010 01:36:54 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3
00:15:42.010 01:36:54 -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems
00:15:42.268 [2024-07-23 01:36:55.135584] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05
00:15:42.268 [
00:15:42.268 {
00:15:42.268 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:15:42.268 "subtype": "Discovery",
00:15:42.268 "listen_addresses": [],
00:15:42.268 "allow_any_host": true,
00:15:42.268 "hosts": []
00:15:42.268 },
00:15:42.268 {
00:15:42.268 "nqn": "nqn.2019-07.io.spdk:cnode1",
00:15:42.268 "subtype": "NVMe",
00:15:42.268 "listen_addresses": [
00:15:42.268 {
00:15:42.268 "transport": "VFIOUSER",
00:15:42.268 "trtype": "VFIOUSER",
00:15:42.268 "adrfam": "IPv4",
00:15:42.268 "traddr": "/var/run/vfio-user/domain/vfio-user1/1",
00:15:42.268 "trsvcid": "0"
00:15:42.268 }
00:15:42.268 ],
00:15:42.268 "allow_any_host": true,
"hosts": [],
00:15:42.268 "serial_number": "SPDK1",
00:15:42.268 "model_number": "SPDK bdev Controller",
00:15:42.268 "max_namespaces": 32,
00:15:42.268 "min_cntlid": 1,
00:15:42.268 "max_cntlid": 65519,
00:15:42.268 "namespaces": [
00:15:42.268 {
00:15:42.268 "nsid": 1,
00:15:42.268 "bdev_name": "Malloc1",
00:15:42.268 "name": "Malloc1",
00:15:42.268 "nguid": "EED5DED3800D4FBBBE7B3BF18BA4733B",
00:15:42.268 "uuid": "eed5ded3-800d-4fbb-be7b-3bf18ba4733b"
00:15:42.268 }
00:15:42.268 ]
00:15:42.268 },
00:15:42.268 {
00:15:42.268 "nqn": "nqn.2019-07.io.spdk:cnode2",
00:15:42.268 "subtype": "NVMe",
00:15:42.268 "listen_addresses": [
00:15:42.268 {
00:15:42.268 "transport": "VFIOUSER",
00:15:42.268 "trtype": "VFIOUSER",
00:15:42.268 "adrfam": "IPv4",
00:15:42.268 "traddr": "/var/run/vfio-user/domain/vfio-user2/2",
00:15:42.268 "trsvcid": "0"
00:15:42.268 }
00:15:42.268 ],
00:15:42.268 "allow_any_host": true,
00:15:42.268 "hosts": [],
00:15:42.268 "serial_number": "SPDK2",
00:15:42.268 "model_number": "SPDK bdev Controller",
00:15:42.269 "max_namespaces": 32,
00:15:42.269 "min_cntlid": 1,
00:15:42.269 "max_cntlid": 65519,
00:15:42.269 "namespaces": [
00:15:42.269 {
00:15:42.269 "nsid": 1,
00:15:42.269 "bdev_name": "Malloc2",
00:15:42.269 "name": "Malloc2",
00:15:42.269 "nguid": "D3E6DDFF80354499878228157CD566EA",
00:15:42.269 "uuid": "d3e6ddff-8035-4499-8782-28157cd566ea"
00:15:42.269 }
00:15:42.269 ]
00:15:42.269 }
00:15:42.269 ]
00:15:42.269 01:36:55 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file
00:15:42.269 01:36:55 -- target/nvmf_vfio_user.sh@34 -- # aerpid=3753363
00:15:42.269 01:36:55 -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file
00:15:42.269 01:36:55 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file
00:15:42.269 01:36:55 --
common/autotest_common.sh@1244 -- # local i=0
00:15:42.269 01:36:55 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:15:42.269 01:36:55 -- common/autotest_common.sh@1251 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:15:42.269 01:36:55 -- common/autotest_common.sh@1255 -- # return 0
00:15:42.269 01:36:55 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file
00:15:42.269 01:36:55 -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3
00:15:42.269 EAL: No free 2048 kB hugepages reported on node 1
00:15:42.527 Malloc3
00:15:42.527 01:36:55 -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2
00:15:42.785 01:36:55 -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems
00:15:42.785 Asynchronous Event Request test
00:15:42.785 Attaching to /var/run/vfio-user/domain/vfio-user1/1
00:15:42.785 Attached to /var/run/vfio-user/domain/vfio-user1/1
00:15:42.785 Registering asynchronous event callbacks...
00:15:42.785 Starting namespace attribute notice tests for all controllers...
00:15:42.785 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00
00:15:42.785 aer_cb - Changed Namespace
00:15:42.785 Cleaning up...
00:15:42.785 [
00:15:42.785 {
00:15:42.785 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:15:42.785 "subtype": "Discovery",
00:15:42.785 "listen_addresses": [],
00:15:42.785 "allow_any_host": true,
00:15:42.785 "hosts": []
00:15:42.785 },
00:15:42.785 {
00:15:42.785 "nqn": "nqn.2019-07.io.spdk:cnode1",
00:15:42.785 "subtype": "NVMe",
00:15:42.785 "listen_addresses": [
00:15:42.785 {
00:15:42.785 "transport": "VFIOUSER",
00:15:42.785 "trtype": "VFIOUSER",
00:15:42.785 "adrfam": "IPv4",
00:15:42.785 "traddr": "/var/run/vfio-user/domain/vfio-user1/1",
00:15:42.785 "trsvcid": "0"
00:15:42.785 }
00:15:42.785 ],
00:15:42.785 "allow_any_host": true,
00:15:42.785 "hosts": [],
00:15:42.785 "serial_number": "SPDK1",
00:15:42.785 "model_number": "SPDK bdev Controller",
00:15:42.785 "max_namespaces": 32,
00:15:42.785 "min_cntlid": 1,
00:15:42.785 "max_cntlid": 65519,
00:15:42.785 "namespaces": [
00:15:42.785 {
00:15:42.785 "nsid": 1,
00:15:42.785 "bdev_name": "Malloc1",
00:15:42.785 "name": "Malloc1",
00:15:42.785 "nguid": "EED5DED3800D4FBBBE7B3BF18BA4733B",
00:15:42.785 "uuid": "eed5ded3-800d-4fbb-be7b-3bf18ba4733b"
00:15:42.785 },
00:15:42.785 {
00:15:42.785 "nsid": 2,
00:15:42.785 "bdev_name": "Malloc3",
00:15:42.785 "name": "Malloc3",
00:15:42.785 "nguid": "4D23C9EF7EDA48F0BAD96B38FE91C278",
00:15:42.785 "uuid": "4d23c9ef-7eda-48f0-bad9-6b38fe91c278"
00:15:42.785 }
00:15:42.785 ]
00:15:42.785 },
00:15:42.785 {
00:15:42.785 "nqn": "nqn.2019-07.io.spdk:cnode2",
00:15:42.785 "subtype": "NVMe",
00:15:42.785 "listen_addresses": [
00:15:42.785 {
00:15:42.785 "transport": "VFIOUSER",
00:15:42.785 "trtype": "VFIOUSER",
00:15:42.785 "adrfam": "IPv4",
00:15:42.785 "traddr": "/var/run/vfio-user/domain/vfio-user2/2",
00:15:42.785 "trsvcid": "0"
00:15:42.785 }
00:15:42.785 ],
00:15:42.785 "allow_any_host": true,
00:15:42.785 "hosts": [],
00:15:42.785 "serial_number": "SPDK2",
00:15:42.785 "model_number": "SPDK bdev Controller",
00:15:42.785 "max_namespaces": 32,
"min_cntlid": 1,
00:15:42.785 "max_cntlid": 65519,
00:15:42.785 "namespaces": [
00:15:42.785 {
00:15:42.785 "nsid": 1,
00:15:42.785 "bdev_name": "Malloc2",
00:15:42.785 "name": "Malloc2",
00:15:42.785 "nguid": "D3E6DDFF80354499878228157CD566EA",
00:15:42.785 "uuid": "d3e6ddff-8035-4499-8782-28157cd566ea"
00:15:42.785 }
00:15:42.785 ]
00:15:42.785 }
00:15:42.785 ]
00:15:43.044 01:36:55 -- target/nvmf_vfio_user.sh@44 -- # wait 3753363
00:15:43.044 01:36:55 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES)
00:15:43.044 01:36:55 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2
00:15:43.044 01:36:55 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2
00:15:43.044 01:36:55 -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci
00:15:43.044 [2024-07-23 01:36:55.912722] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
00:15:43.044 [2024-07-23 01:36:55.912762] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3753504 ] 00:15:43.044 EAL: No free 2048 kB hugepages reported on node 1 00:15:43.044 [2024-07-23 01:36:55.946711] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:15:43.044 [2024-07-23 01:36:55.952858] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:43.044 [2024-07-23 01:36:55.952888] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7ff0f9a95000 00:15:43.044 [2024-07-23 01:36:55.953864] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:43.044 [2024-07-23 01:36:55.954873] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:43.044 [2024-07-23 01:36:55.955877] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:43.044 [2024-07-23 01:36:55.956880] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:43.044 [2024-07-23 01:36:55.957882] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:43.044 [2024-07-23 01:36:55.958889] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:43.044 [2024-07-23 01:36:55.961637] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, 
Flags 0x3, Cap offset 0 00:15:43.044 [2024-07-23 01:36:55.961908] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:43.044 [2024-07-23 01:36:55.962922] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:43.044 [2024-07-23 01:36:55.962958] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7ff0f8849000 00:15:43.044 [2024-07-23 01:36:55.964070] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:43.044 [2024-07-23 01:36:55.981916] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:15:43.044 [2024-07-23 01:36:55.981963] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:15:43.044 [2024-07-23 01:36:55.984031] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:43.044 [2024-07-23 01:36:55.984080] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:43.044 [2024-07-23 01:36:55.984160] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:15:43.044 [2024-07-23 01:36:55.984184] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:15:43.044 [2024-07-23 01:36:55.984194] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:15:43.044 [2024-07-23 01:36:55.985042] nvme_vfio_user.c: 
83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:15:43.044 [2024-07-23 01:36:55.985061] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:15:43.044 [2024-07-23 01:36:55.985074] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:15:43.044 [2024-07-23 01:36:55.986044] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:43.044 [2024-07-23 01:36:55.986063] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:15:43.044 [2024-07-23 01:36:55.986077] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:15:43.044 [2024-07-23 01:36:55.987051] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:15:43.044 [2024-07-23 01:36:55.987070] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:43.044 [2024-07-23 01:36:55.988060] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:15:43.044 [2024-07-23 01:36:55.988079] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:15:43.044 [2024-07-23 01:36:55.988089] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:15:43.044 [2024-07-23 01:36:55.988100] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:43.044 [2024-07-23 01:36:55.988209] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:15:43.044 [2024-07-23 01:36:55.988217] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:43.044 [2024-07-23 01:36:55.988228] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:15:43.044 [2024-07-23 01:36:55.989062] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:15:43.044 [2024-07-23 01:36:55.990066] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:15:43.044 [2024-07-23 01:36:55.991081] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:43.044 [2024-07-23 01:36:55.992107] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:43.044 [2024-07-23 01:36:55.993091] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:15:43.044 [2024-07-23 01:36:55.993109] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:43.044 [2024-07-23 01:36:55.993118] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:15:43.044 [2024-07-23 01:36:55.993142] 
nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:15:43.044 [2024-07-23 01:36:55.993154] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:15:43.044 [2024-07-23 01:36:55.993172] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:43.044 [2024-07-23 01:36:55.993181] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:43.044 [2024-07-23 01:36:55.993197] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:43.044 [2024-07-23 01:36:55.999627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:43.044 [2024-07-23 01:36:55.999649] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:15:43.044 [2024-07-23 01:36:55.999673] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:15:43.044 [2024-07-23 01:36:55.999681] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:15:43.044 [2024-07-23 01:36:55.999689] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:43.044 [2024-07-23 01:36:55.999697] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:15:43.044 [2024-07-23 01:36:55.999705] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:15:43.044 [2024-07-23 01:36:55.999713] 
nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:15:43.044 [2024-07-23 01:36:55.999731] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:15:43.044 [2024-07-23 01:36:55.999748] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:43.044 [2024-07-23 01:36:56.007624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:43.045 [2024-07-23 01:36:56.007667] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:43.045 [2024-07-23 01:36:56.007686] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:43.045 [2024-07-23 01:36:56.007699] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:43.045 [2024-07-23 01:36:56.007711] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:43.045 [2024-07-23 01:36:56.007720] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:15:43.045 [2024-07-23 01:36:56.007735] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:43.045 [2024-07-23 01:36:56.007750] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:43.045 [2024-07-23 01:36:56.015627] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:43.045 [2024-07-23 01:36:56.015645] nvme_ctrlr.c:2878:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:15:43.045 [2024-07-23 01:36:56.015654] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:43.045 [2024-07-23 01:36:56.015665] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:15:43.045 [2024-07-23 01:36:56.015679] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:15:43.045 [2024-07-23 01:36:56.015694] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:43.045 [2024-07-23 01:36:56.023625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:43.045 [2024-07-23 01:36:56.023698] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:15:43.045 [2024-07-23 01:36:56.023713] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:15:43.045 [2024-07-23 01:36:56.023727] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:43.045 [2024-07-23 01:36:56.023735] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:43.045 [2024-07-23 01:36:56.023745] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:43.045 [2024-07-23 01:36:56.031626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:43.045 [2024-07-23 01:36:56.031657] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:15:43.045 [2024-07-23 01:36:56.031671] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:15:43.045 [2024-07-23 01:36:56.031686] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:15:43.045 [2024-07-23 01:36:56.031698] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:43.045 [2024-07-23 01:36:56.031706] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:43.045 [2024-07-23 01:36:56.031715] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:43.045 [2024-07-23 01:36:56.039625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:43.045 [2024-07-23 01:36:56.039652] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:43.045 [2024-07-23 01:36:56.039667] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:43.045 [2024-07-23 01:36:56.039680] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:43.045 
[2024-07-23 01:36:56.039688] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000
00:15:43.045 [2024-07-23 01:36:56.039698] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0
00:15:43.045 [2024-07-23 01:36:56.047622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0
00:15:43.045 [2024-07-23 01:36:56.047643] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms)
00:15:43.045 [2024-07-23 01:36:56.047656] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms)
00:15:43.045 [2024-07-23 01:36:56.047672] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms)
00:15:43.045 [2024-07-23 01:36:56.047683] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms)
00:15:43.045 [2024-07-23 01:36:56.047692] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms)
00:15:43.045 [2024-07-23 01:36:56.047700] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID
00:15:43.045 [2024-07-23 01:36:56.047708] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms)
00:15:43.045 [2024-07-23 01:36:56.047717] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout)
00:15:43.045 [2024-07-23 01:36:56.047741] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0
00:15:43.045 [2024-07-23 01:36:56.055636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0
00:15:43.045 [2024-07-23 01:36:56.055662] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0
00:15:43.045 [2024-07-23 01:36:56.063626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0
00:15:43.045 [2024-07-23 01:36:56.063651] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0
00:15:43.045 [2024-07-23 01:36:56.071626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0
00:15:43.045 [2024-07-23 01:36:56.071650] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0
00:15:43.045 [2024-07-23 01:36:56.079625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0
00:15:43.045 [2024-07-23 01:36:56.079650] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192
00:15:43.045 [2024-07-23 01:36:56.079660] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000
00:15:43.045 [2024-07-23 01:36:56.079666] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000
00:15:43.045 [2024-07-23 01:36:56.079676] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000
00:15:43.045 [2024-07-23 01:36:56.079686] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000
00:15:43.045 [2024-07-23 01:36:56.079698] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512
00:15:43.045 [2024-07-23 01:36:56.079706] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000
00:15:43.045 [2024-07-23 01:36:56.079715] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0
00:15:43.045 [2024-07-23 01:36:56.079726] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512
00:15:43.045 [2024-07-23 01:36:56.079734] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000
00:15:43.045 [2024-07-23 01:36:56.079743] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0
00:15:43.045 [2024-07-23 01:36:56.079754] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096
00:15:43.045 [2024-07-23 01:36:56.079762] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000
00:15:43.045 [2024-07-23 01:36:56.079771] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0
00:15:43.045 [2024-07-23 01:36:56.087623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0
00:15:43.045 [2024-07-23 01:36:56.087654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0
00:15:43.045 [2024-07-23 01:36:56.087670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0
00:15:43.045 [2024-07-23 01:36:56.087682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0
00:15:43.045 =====================================================
00:15:43.045 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2
00:15:43.045 =====================================================
00:15:43.045 Controller Capabilities/Features
00:15:43.045 ================================
00:15:43.045 Vendor ID: 4e58
00:15:43.045 Subsystem Vendor ID: 4e58
00:15:43.045 Serial Number: SPDK2
00:15:43.045 Model Number: SPDK bdev Controller
00:15:43.045 Firmware Version: 24.01.1
00:15:43.045 Recommended Arb Burst: 6
00:15:43.045 IEEE OUI Identifier: 8d 6b 50
00:15:43.045 Multi-path I/O
00:15:43.045 May have multiple subsystem ports: Yes
00:15:43.045 May have multiple controllers: Yes
00:15:43.045 Associated with SR-IOV VF: No
00:15:43.045 Max Data Transfer Size: 131072
00:15:43.045 Max Number of Namespaces: 32
00:15:43.045 Max Number of I/O Queues: 127
00:15:43.045 NVMe Specification Version (VS): 1.3
00:15:43.045 NVMe Specification Version (Identify): 1.3
00:15:43.045 Maximum Queue Entries: 256
00:15:43.045 Contiguous Queues Required: Yes
00:15:43.046 Arbitration Mechanisms Supported
00:15:43.046 Weighted Round Robin: Not Supported
00:15:43.046 Vendor Specific: Not Supported
00:15:43.046 Reset Timeout: 15000 ms
00:15:43.046 Doorbell Stride: 4 bytes
00:15:43.046 NVM Subsystem Reset: Not Supported
00:15:43.046 Command Sets Supported
00:15:43.046 NVM Command Set: Supported
00:15:43.046 Boot Partition: Not Supported
00:15:43.046 Memory Page Size Minimum: 4096 bytes
00:15:43.046 Memory Page Size Maximum: 4096 bytes
00:15:43.046 Persistent Memory Region: Not Supported
00:15:43.046 Optional Asynchronous Events Supported
00:15:43.046 Namespace Attribute Notices: Supported
00:15:43.046 Firmware Activation Notices: Not Supported
00:15:43.046 ANA Change Notices: Not Supported
00:15:43.046 PLE Aggregate Log Change Notices: Not Supported
00:15:43.046 LBA Status Info Alert Notices: Not Supported
00:15:43.046 EGE Aggregate Log Change Notices: Not Supported
00:15:43.046 Normal NVM Subsystem Shutdown event: Not Supported
00:15:43.046 Zone Descriptor Change Notices: Not Supported
00:15:43.046 Discovery Log Change Notices: Not Supported
00:15:43.046 Controller Attributes
00:15:43.046 128-bit Host Identifier: Supported
00:15:43.046 Non-Operational Permissive Mode: Not Supported
00:15:43.046 NVM Sets: Not Supported
00:15:43.046 Read Recovery Levels: Not Supported
00:15:43.046 Endurance Groups: Not Supported
00:15:43.046 Predictable Latency Mode: Not Supported
00:15:43.046 Traffic Based Keep ALive: Not Supported
00:15:43.046 Namespace Granularity: Not Supported
00:15:43.046 SQ Associations: Not Supported
00:15:43.046 UUID List: Not Supported
00:15:43.046 Multi-Domain Subsystem: Not Supported
00:15:43.046 Fixed Capacity Management: Not Supported
00:15:43.046 Variable Capacity Management: Not Supported
00:15:43.046 Delete Endurance Group: Not Supported
00:15:43.046 Delete NVM Set: Not Supported
00:15:43.046 Extended LBA Formats Supported: Not Supported
00:15:43.046 Flexible Data Placement Supported: Not Supported
00:15:43.046 
00:15:43.046 Controller Memory Buffer Support
00:15:43.046 ================================
00:15:43.046 Supported: No
00:15:43.046 
00:15:43.046 Persistent Memory Region Support
00:15:43.046 ================================
00:15:43.046 Supported: No
00:15:43.046 
00:15:43.046 Admin Command Set Attributes
00:15:43.046 ============================
00:15:43.046 Security Send/Receive: Not Supported
00:15:43.046 Format NVM: Not Supported
00:15:43.046 Firmware Activate/Download: Not Supported
00:15:43.046 Namespace Management: Not Supported
00:15:43.046 Device Self-Test: Not Supported
00:15:43.046 Directives: Not Supported
00:15:43.046 NVMe-MI: Not Supported
00:15:43.046 Virtualization Management: Not Supported
00:15:43.046 Doorbell Buffer Config: Not Supported
00:15:43.046 Get LBA Status Capability: Not Supported
00:15:43.046 Command & Feature Lockdown Capability: Not Supported
00:15:43.046 Abort Command Limit: 4
00:15:43.046 Async Event Request Limit: 4
00:15:43.046 Number of Firmware Slots: N/A
00:15:43.046 Firmware Slot 1 Read-Only: N/A
00:15:43.046 Firmware Activation Without Reset: N/A
00:15:43.046 Multiple Update Detection Support: N/A
00:15:43.046 Firmware Update Granularity: No Information Provided
00:15:43.046 Per-Namespace SMART Log: No
00:15:43.046 Asymmetric Namespace Access Log Page: Not Supported
00:15:43.046 Subsystem NQN: nqn.2019-07.io.spdk:cnode2
00:15:43.046 Command Effects Log Page: Supported
00:15:43.046 Get Log Page Extended Data: Supported
00:15:43.046 Telemetry Log Pages: Not Supported
00:15:43.046 Persistent Event Log Pages: Not Supported
00:15:43.046 Supported Log Pages Log Page: May Support
00:15:43.046 Commands Supported & Effects Log Page: Not Supported
00:15:43.046 Feature Identifiers & Effects Log Page:May Support
00:15:43.046 NVMe-MI Commands & Effects Log Page: May Support
00:15:43.046 Data Area 4 for Telemetry Log: Not Supported
00:15:43.046 Error Log Page Entries Supported: 128
00:15:43.046 Keep Alive: Supported
00:15:43.046 Keep Alive Granularity: 10000 ms
00:15:43.046 
00:15:43.046 NVM Command Set Attributes
00:15:43.046 ==========================
00:15:43.046 Submission Queue Entry Size
00:15:43.046 Max: 64
00:15:43.046 Min: 64
00:15:43.046 Completion Queue Entry Size
00:15:43.046 Max: 16
00:15:43.046 Min: 16
00:15:43.046 Number of Namespaces: 32
00:15:43.046 Compare Command: Supported
00:15:43.046 Write Uncorrectable Command: Not Supported
00:15:43.046 Dataset Management Command: Supported
00:15:43.046 Write Zeroes Command: Supported
00:15:43.046 Set Features Save Field: Not Supported
00:15:43.046 Reservations: Not Supported
00:15:43.046 Timestamp: Not Supported
00:15:43.046 Copy: Supported
00:15:43.046 Volatile Write Cache: Present
00:15:43.046 Atomic Write Unit (Normal): 1
00:15:43.046 Atomic Write Unit (PFail): 1
00:15:43.046 Atomic Compare & Write Unit: 1
00:15:43.046 Fused Compare & Write: Supported
00:15:43.046 Scatter-Gather List
00:15:43.046 SGL Command Set: Supported (Dword aligned)
00:15:43.046 SGL Keyed: Not Supported
00:15:43.046 SGL Bit Bucket Descriptor: Not Supported
00:15:43.046 SGL Metadata Pointer: Not Supported
00:15:43.046 Oversized SGL: Not Supported
00:15:43.046 SGL Metadata Address: Not Supported
00:15:43.046 SGL Offset: Not Supported
00:15:43.046 Transport SGL Data Block: Not Supported
00:15:43.046 Replay Protected Memory Block: Not Supported
00:15:43.046 
00:15:43.046 Firmware Slot Information
00:15:43.046 =========================
00:15:43.046 Active slot: 1
00:15:43.046 Slot 1 Firmware Revision: 24.01.1
00:15:43.046 
00:15:43.046 
00:15:43.046 Commands Supported and Effects
00:15:43.046 ==============================
00:15:43.046 Admin Commands
00:15:43.046 --------------
00:15:43.046 Get Log Page (02h): Supported
00:15:43.046 Identify (06h): Supported
00:15:43.046 Abort (08h): Supported
00:15:43.046 Set Features (09h): Supported
00:15:43.046 Get Features (0Ah): Supported
00:15:43.046 Asynchronous Event Request (0Ch): Supported
00:15:43.046 Keep Alive (18h): Supported
00:15:43.046 I/O Commands
00:15:43.046 ------------
00:15:43.046 Flush (00h): Supported LBA-Change
00:15:43.046 Write (01h): Supported LBA-Change
00:15:43.046 Read (02h): Supported
00:15:43.046 Compare (05h): Supported
00:15:43.046 Write Zeroes (08h): Supported LBA-Change
00:15:43.046 Dataset Management (09h): Supported LBA-Change
00:15:43.046 Copy (19h): Supported LBA-Change
00:15:43.046 Unknown (79h): Supported LBA-Change
00:15:43.046 Unknown (7Ah): Supported
00:15:43.046 
00:15:43.046 Error Log
00:15:43.046 =========
00:15:43.046 
00:15:43.046 Arbitration
00:15:43.046 ===========
00:15:43.046 Arbitration Burst: 1
00:15:43.046 
00:15:43.046 Power Management
00:15:43.046 ================
00:15:43.046 Number of Power States: 1
00:15:43.046 Current Power State: Power State #0
00:15:43.046 Power State #0:
00:15:43.046 Max Power: 0.00 W
00:15:43.046 Non-Operational State: Operational
00:15:43.046 Entry Latency: Not Reported
00:15:43.046 Exit Latency: Not Reported
00:15:43.046 Relative Read Throughput: 0
00:15:43.046 Relative Read Latency: 0
00:15:43.046 Relative Write Throughput: 0
00:15:43.046 Relative Write Latency: 0
00:15:43.046 Idle Power: Not Reported
00:15:43.046 Active Power: Not Reported
00:15:43.046 Non-Operational Permissive Mode: Not Supported
00:15:43.046 
00:15:43.046 Health Information
00:15:43.046 ==================
00:15:43.046 Critical Warnings:
00:15:43.046 Available Spare Space: OK
00:15:43.046 Temperature: OK
00:15:43.046 Device Reliability: OK
00:15:43.046 Read Only: No
00:15:43.046 Volatile Memory Backup: OK
00:15:43.046 Current Temperature: 0 Kelvin[2024-07-23 01:36:56.087799] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0
00:15:43.046 [2024-07-23 01:36:56.095623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0
00:15:43.046 [2024-07-23 01:36:56.095676] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD
00:15:43.046 [2024-07-23 01:36:56.095693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:43.046 [2024-07-23 01:36:56.095704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:43.046 [2024-07-23 01:36:56.095714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:43.046 [2024-07-23 01:36:56.095723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:43.047 [2024-07-23 01:36:56.095787] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001
00:15:43.047 [2024-07-23 01:36:56.095807] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001
00:15:43.047 [2024-07-23 01:36:56.096835] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us
00:15:43.047 [2024-07-23 01:36:56.096851] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms
00:15:43.047 [2024-07-23 01:36:56.097797] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9
00:15:43.047 [2024-07-23 01:36:56.097824] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds
00:15:43.047 [2024-07-23 01:36:56.097876] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl
00:15:43.047 [2024-07-23 01:36:56.100624] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000
00:15:43.047 (-273 Celsius)
00:15:43.047 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:15:43.047 Available Spare: 0%
00:15:43.047 Available Spare Threshold: 0%
00:15:43.047 Life Percentage Used: 0%
00:15:43.047 Data Units Read: 0
00:15:43.047 Data Units Written: 0
00:15:43.047 Host Read Commands: 0
00:15:43.047 Host Write Commands: 0
00:15:43.047 Controller Busy Time: 0 minutes
00:15:43.047 Power Cycles: 0
00:15:43.047 Power On Hours: 0 hours
00:15:43.047 Unsafe Shutdowns: 0
00:15:43.047 Unrecoverable Media Errors: 0
00:15:43.047 Lifetime Error Log Entries: 0
00:15:43.047 Warning Temperature Time: 0 minutes
00:15:43.047 Critical Temperature Time: 0 minutes
00:15:43.047 
00:15:43.047 Number of Queues
00:15:43.047 ================
00:15:43.047 Number of I/O Submission Queues: 127
00:15:43.047 Number of I/O Completion Queues: 127
00:15:43.047 
00:15:43.047 Active Namespaces
00:15:43.047 =================
00:15:43.047 Namespace ID:1
00:15:43.047 Error Recovery Timeout: Unlimited
00:15:43.047 Command Set Identifier: NVM (00h)
00:15:43.047 Deallocate: Supported
00:15:43.047 Deallocated/Unwritten Error: Not Supported
00:15:43.047 Deallocated Read Value: Unknown
00:15:43.047 Deallocate in Write Zeroes: Not Supported
00:15:43.047 Deallocated Guard Field: 0xFFFF
00:15:43.047 Flush: Supported
00:15:43.047 Reservation: Supported
00:15:43.047 Namespace Sharing Capabilities: Multiple Controllers
00:15:43.047 Size (in LBAs): 131072 (0GiB)
00:15:43.047 Capacity (in LBAs): 131072 (0GiB)
00:15:43.047 Utilization (in LBAs): 131072 (0GiB)
00:15:43.047 NGUID: D3E6DDFF80354499878228157CD566EA
00:15:43.047 UUID: d3e6ddff-8035-4499-8782-28157cd566ea
00:15:43.047 Thin Provisioning: Not Supported
00:15:43.047 Per-NS Atomic Units: Yes
00:15:43.047 Atomic Boundary Size (Normal): 0
00:15:43.047 Atomic Boundary Size (PFail): 0
00:15:43.047 Atomic Boundary Offset: 0
00:15:43.047 Maximum Single Source Range Length: 65535
00:15:43.047 Maximum Copy Length: 65535
00:15:43.047 Maximum Source Range Count: 1
00:15:43.047 NGUID/EUI64 Never Reused: No
00:15:43.047 Namespace Write Protected: No
00:15:43.047 Number of LBA Formats: 1
00:15:43.047 Current LBA Format: LBA Format #00
00:15:43.047 LBA Format #00: Data Size: 512 Metadata Size: 0
00:15:43.047 
00:15:43.304 01:36:56 -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
00:15:43.304 EAL: No free 2048 kB hugepages reported on node 1
00:15:48.613 
Initializing NVMe Controllers
00:15:48.613 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2
00:15:48.613 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1
00:15:48.613 Initialization complete. Launching workers.
00:15:48.613 ========================================================
00:15:48.613 Latency(us)
00:15:48.613 Device Information : IOPS MiB/s Average min max
00:15:48.613 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 37465.32 146.35 3415.82 1152.76 9443.11
00:15:48.613 ========================================================
00:15:48.613 Total : 37465.32 146.35 3415.82 1152.76 9443.11
00:15:48.613 
00:15:48.613 01:37:01 -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2
00:15:48.613 EAL: No free 2048 kB hugepages reported on node 1
00:15:53.873 Initializing NVMe Controllers
00:15:53.873 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2
00:15:53.873 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1
00:15:53.873 Initialization complete. Launching workers.
00:15:53.873 ========================================================
00:15:53.873 Latency(us)
00:15:53.873 Device Information : IOPS MiB/s Average min max
00:15:53.873 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 36561.61 142.82 3500.32 1144.25 7344.31
00:15:53.873 ========================================================
00:15:53.873 Total : 36561.61 142.82 3500.32 1144.25 7344.31
00:15:53.873 
00:15:53.873 01:37:06 -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE
00:15:53.873 EAL: No free 2048 kB hugepages reported on node 1
00:15:59.149 Initializing NVMe Controllers
00:15:59.149 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2
00:15:59.149 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2
00:15:59.149 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1
00:15:59.149 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2
00:15:59.149 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3
00:15:59.149 Initialization complete. Launching workers.
00:15:59.149 Starting thread on core 2
00:15:59.149 Starting thread on core 3
00:15:59.149 Starting thread on core 1
00:15:59.149 01:37:12 -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g
00:15:59.149 EAL: No free 2048 kB hugepages reported on node 1
00:16:02.439 Initializing NVMe Controllers
00:16:02.439 Attaching to /var/run/vfio-user/domain/vfio-user2/2
00:16:02.439 Attached to /var/run/vfio-user/domain/vfio-user2/2
00:16:02.439 Associating SPDK bdev Controller (SPDK2 ) with lcore 0
00:16:02.439 Associating SPDK bdev Controller (SPDK2 ) with lcore 1
00:16:02.439 Associating SPDK bdev Controller (SPDK2 ) with lcore 2
00:16:02.439 Associating SPDK bdev Controller (SPDK2 ) with lcore 3
00:16:02.439 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration:
00:16:02.439 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1
00:16:02.439 Initialization complete. Launching workers.
00:16:02.439 Starting thread on core 1 with urgent priority queue
00:16:02.439 Starting thread on core 2 with urgent priority queue
00:16:02.439 Starting thread on core 3 with urgent priority queue
00:16:02.439 Starting thread on core 0 with urgent priority queue
00:16:02.439 SPDK bdev Controller (SPDK2 ) core 0: 4848.67 IO/s 20.62 secs/100000 ios
00:16:02.439 SPDK bdev Controller (SPDK2 ) core 1: 5601.00 IO/s 17.85 secs/100000 ios
00:16:02.439 SPDK bdev Controller (SPDK2 ) core 2: 5683.00 IO/s 17.60 secs/100000 ios
00:16:02.439 SPDK bdev Controller (SPDK2 ) core 3: 5703.67 IO/s 17.53 secs/100000 ios
00:16:02.439 ========================================================
00:16:02.439 
00:16:02.439 01:37:15 -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
00:16:02.439 EAL: No free 2048 kB hugepages reported on node 1
00:16:02.698 Initializing NVMe Controllers
00:16:02.698 Attaching to /var/run/vfio-user/domain/vfio-user2/2
00:16:02.698 Attached to /var/run/vfio-user/domain/vfio-user2/2
00:16:02.698 Namespace ID: 1 size: 0GB
00:16:02.698 Initialization complete.
00:16:02.698 INFO: using host memory buffer for IO
00:16:02.698 Hello world!
00:16:02.698 01:37:15 -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
00:16:02.956 EAL: No free 2048 kB hugepages reported on node 1
00:16:04.335 Initializing NVMe Controllers
00:16:04.335 Attaching to /var/run/vfio-user/domain/vfio-user2/2
00:16:04.335 Attached to /var/run/vfio-user/domain/vfio-user2/2
00:16:04.335 Initialization complete. Launching workers.
00:16:04.335 submit (in ns) avg, min, max = 7883.7, 3447.8, 4015277.8 00:16:04.335 complete (in ns) avg, min, max = 24431.3, 2035.6, 4191778.9 00:16:04.335 00:16:04.335 Submit histogram 00:16:04.335 ================ 00:16:04.335 Range in us Cumulative Count 00:16:04.335 3.437 - 3.461: 0.0578% ( 8) 00:16:04.335 3.461 - 3.484: 0.6067% ( 76) 00:16:04.335 3.484 - 3.508: 2.1234% ( 210) 00:16:04.335 3.508 - 3.532: 5.6334% ( 486) 00:16:04.335 3.532 - 3.556: 11.8157% ( 856) 00:16:04.335 3.556 - 3.579: 21.9414% ( 1402) 00:16:04.335 3.579 - 3.603: 31.7998% ( 1365) 00:16:04.335 3.603 - 3.627: 42.1349% ( 1431) 00:16:04.335 3.627 - 3.650: 50.4911% ( 1157) 00:16:04.335 3.650 - 3.674: 56.7095% ( 861) 00:16:04.335 3.674 - 3.698: 61.8590% ( 713) 00:16:04.335 3.698 - 3.721: 66.7774% ( 681) 00:16:04.335 3.721 - 3.745: 70.1430% ( 466) 00:16:04.335 3.745 - 3.769: 73.2630% ( 432) 00:16:04.335 3.769 - 3.793: 75.8847% ( 363) 00:16:04.335 3.793 - 3.816: 79.2142% ( 461) 00:16:04.335 3.816 - 3.840: 82.7387% ( 488) 00:16:04.335 3.840 - 3.864: 85.8948% ( 437) 00:16:04.335 3.864 - 3.887: 88.1988% ( 319) 00:16:04.335 3.887 - 3.911: 90.0116% ( 251) 00:16:04.335 3.911 - 3.935: 91.5716% ( 216) 00:16:04.335 3.935 - 3.959: 92.9655% ( 193) 00:16:04.335 3.959 - 3.982: 94.0560% ( 151) 00:16:04.335 3.982 - 4.006: 94.9227% ( 120) 00:16:04.335 4.006 - 4.030: 95.4499% ( 73) 00:16:04.335 4.030 - 4.053: 95.9483% ( 69) 00:16:04.335 4.053 - 4.077: 96.3816% ( 60) 00:16:04.335 4.077 - 4.101: 96.6922% ( 43) 00:16:04.335 4.101 - 4.124: 96.9016% ( 29) 00:16:04.335 4.124 - 4.148: 96.9955% ( 13) 00:16:04.335 4.148 - 4.172: 97.0605% ( 9) 00:16:04.335 4.172 - 4.196: 97.1689% ( 15) 00:16:04.335 4.196 - 4.219: 97.2772% ( 15) 00:16:04.335 4.219 - 4.243: 97.3422% ( 9) 00:16:04.335 4.243 - 4.267: 97.4000% ( 8) 00:16:04.335 4.267 - 4.290: 97.4577% ( 8) 00:16:04.335 4.290 - 4.314: 97.5011% ( 6) 00:16:04.335 4.314 - 4.338: 97.5372% ( 5) 00:16:04.335 4.338 - 4.361: 97.5516% ( 2) 00:16:04.335 4.361 - 4.385: 97.5661% ( 2) 
00:16:04.335 4.385 - 4.409: 97.5878% ( 3) 00:16:04.335 4.409 - 4.433: 97.6022% ( 2) 00:16:04.335 4.480 - 4.504: 97.6166% ( 2) 00:16:04.335 4.504 - 4.527: 97.6239% ( 1) 00:16:04.335 4.527 - 4.551: 97.6383% ( 2) 00:16:04.335 4.551 - 4.575: 97.6600% ( 3) 00:16:04.335 4.575 - 4.599: 97.6672% ( 1) 00:16:04.335 4.599 - 4.622: 97.7105% ( 6) 00:16:04.335 4.622 - 4.646: 97.7466% ( 5) 00:16:04.335 4.646 - 4.670: 97.7900% ( 6) 00:16:04.335 4.670 - 4.693: 97.8478% ( 8) 00:16:04.335 4.693 - 4.717: 97.8983% ( 7) 00:16:04.335 4.717 - 4.741: 97.9128% ( 2) 00:16:04.335 4.741 - 4.764: 97.9561% ( 6) 00:16:04.335 4.764 - 4.788: 98.0066% ( 7) 00:16:04.335 4.788 - 4.812: 98.0428% ( 5) 00:16:04.335 4.812 - 4.836: 98.1005% ( 8) 00:16:04.335 4.836 - 4.859: 98.1655% ( 9) 00:16:04.335 4.859 - 4.883: 98.1728% ( 1) 00:16:04.335 4.883 - 4.907: 98.2016% ( 4) 00:16:04.335 4.907 - 4.930: 98.2233% ( 3) 00:16:04.335 4.930 - 4.954: 98.2594% ( 5) 00:16:04.335 4.954 - 4.978: 98.2883% ( 4) 00:16:04.335 4.978 - 5.001: 98.3100% ( 3) 00:16:04.335 5.001 - 5.025: 98.3244% ( 2) 00:16:04.335 5.025 - 5.049: 98.3605% ( 5) 00:16:04.335 5.049 - 5.073: 98.3678% ( 1) 00:16:04.335 5.073 - 5.096: 98.3822% ( 2) 00:16:04.335 5.096 - 5.120: 98.3966% ( 2) 00:16:04.335 5.120 - 5.144: 98.4039% ( 1) 00:16:04.335 5.167 - 5.191: 98.4183% ( 2) 00:16:04.335 5.452 - 5.476: 98.4255% ( 1) 00:16:04.335 5.476 - 5.499: 98.4328% ( 1) 00:16:04.335 5.523 - 5.547: 98.4400% ( 1) 00:16:04.335 5.547 - 5.570: 98.4472% ( 1) 00:16:04.335 5.570 - 5.594: 98.4544% ( 1) 00:16:04.335 5.618 - 5.641: 98.4616% ( 1) 00:16:04.335 5.736 - 5.760: 98.4689% ( 1) 00:16:04.335 5.855 - 5.879: 98.4761% ( 1) 00:16:04.335 5.926 - 5.950: 98.4833% ( 1) 00:16:04.335 6.163 - 6.210: 98.4905% ( 1) 00:16:04.335 6.258 - 6.305: 98.4978% ( 1) 00:16:04.335 6.305 - 6.353: 98.5050% ( 1) 00:16:04.335 6.353 - 6.400: 98.5122% ( 1) 00:16:04.335 6.447 - 6.495: 98.5267% ( 2) 00:16:04.335 6.495 - 6.542: 98.5339% ( 1) 00:16:04.335 6.542 - 6.590: 98.5483% ( 2) 00:16:04.335 6.590 - 
6.637: 98.5555% ( 1) 00:16:04.335 6.637 - 6.684: 98.5628% ( 1) 00:16:04.335 6.732 - 6.779: 98.5700% ( 1) 00:16:04.335 6.874 - 6.921: 98.5844% ( 2) 00:16:04.335 6.921 - 6.969: 98.5989% ( 2) 00:16:04.335 6.969 - 7.016: 98.6133% ( 2) 00:16:04.335 7.016 - 7.064: 98.6350% ( 3) 00:16:04.335 7.253 - 7.301: 98.6422% ( 1) 00:16:04.335 7.301 - 7.348: 98.6639% ( 3) 00:16:04.335 7.348 - 7.396: 98.6783% ( 2) 00:16:04.335 7.396 - 7.443: 98.6855% ( 1) 00:16:04.335 7.443 - 7.490: 98.7072% ( 3) 00:16:04.335 7.490 - 7.538: 98.7217% ( 2) 00:16:04.335 7.585 - 7.633: 98.7289% ( 1) 00:16:04.335 7.680 - 7.727: 98.7361% ( 1) 00:16:04.335 7.775 - 7.822: 98.7433% ( 1) 00:16:04.336 7.822 - 7.870: 98.7505% ( 1) 00:16:04.336 7.917 - 7.964: 98.7578% ( 1) 00:16:04.336 7.964 - 8.012: 98.7794% ( 3) 00:16:04.336 8.107 - 8.154: 98.7939% ( 2) 00:16:04.336 8.154 - 8.201: 98.8083% ( 2) 00:16:04.336 8.201 - 8.249: 98.8155% ( 1) 00:16:04.336 8.249 - 8.296: 98.8372% ( 3) 00:16:04.336 8.344 - 8.391: 98.8444% ( 1) 00:16:04.336 8.439 - 8.486: 98.8517% ( 1) 00:16:04.336 8.581 - 8.628: 98.8661% ( 2) 00:16:04.336 8.628 - 8.676: 98.8733% ( 1) 00:16:04.336 8.818 - 8.865: 98.8805% ( 1) 00:16:04.336 8.960 - 9.007: 98.8950% ( 2) 00:16:04.336 9.007 - 9.055: 98.9022% ( 1) 00:16:04.336 9.055 - 9.102: 98.9167% ( 2) 00:16:04.336 9.197 - 9.244: 98.9455% ( 4) 00:16:04.336 9.387 - 9.434: 98.9600% ( 2) 00:16:04.336 10.003 - 10.050: 98.9672% ( 1) 00:16:04.336 10.193 - 10.240: 98.9744% ( 1) 00:16:04.336 10.382 - 10.430: 98.9889% ( 2) 00:16:04.336 10.761 - 10.809: 98.9961% ( 1) 00:16:04.336 10.809 - 10.856: 99.0033% ( 1) 00:16:04.336 10.856 - 10.904: 99.0105% ( 1) 00:16:04.336 10.999 - 11.046: 99.0178% ( 1) 00:16:04.336 11.046 - 11.093: 99.0250% ( 1) 00:16:04.336 11.236 - 11.283: 99.0322% ( 1) 00:16:04.336 11.378 - 11.425: 99.0394% ( 1) 00:16:04.336 11.567 - 11.615: 99.0467% ( 1) 00:16:04.336 11.899 - 11.947: 99.0611% ( 2) 00:16:04.336 11.947 - 11.994: 99.0683% ( 1) 00:16:04.336 12.136 - 12.231: 99.0755% ( 1) 00:16:04.336 
12.231 - 12.326: 99.0828% ( 1) 00:16:04.336 12.516 - 12.610: 99.0900% ( 1) 00:16:04.336 12.895 - 12.990: 99.1044% ( 2) 00:16:04.336 13.084 - 13.179: 99.1117% ( 1) 00:16:04.336 13.179 - 13.274: 99.1189% ( 1) 00:16:04.336 13.274 - 13.369: 99.1261% ( 1) 00:16:04.336 13.464 - 13.559: 99.1333% ( 1) 00:16:04.336 13.653 - 13.748: 99.1405% ( 1) 00:16:04.336 13.748 - 13.843: 99.1550% ( 2) 00:16:04.336 13.843 - 13.938: 99.1622% ( 1) 00:16:04.336 14.127 - 14.222: 99.1694% ( 1) 00:16:04.336 14.222 - 14.317: 99.1839% ( 2) 00:16:04.336 14.412 - 14.507: 99.1911% ( 1) 00:16:04.336 14.507 - 14.601: 99.1983% ( 1) 00:16:04.336 14.601 - 14.696: 99.2055% ( 1) 00:16:04.336 14.791 - 14.886: 99.2128% ( 1) 00:16:04.336 16.877 - 16.972: 99.2200% ( 1) 00:16:04.336 17.067 - 17.161: 99.2272% ( 1) 00:16:04.336 17.161 - 17.256: 99.2344% ( 1) 00:16:04.336 17.256 - 17.351: 99.2705% ( 5) 00:16:04.336 17.351 - 17.446: 99.2850% ( 2) 00:16:04.336 17.446 - 17.541: 99.3067% ( 3) 00:16:04.336 17.541 - 17.636: 99.3283% ( 3) 00:16:04.336 17.636 - 17.730: 99.3717% ( 6) 00:16:04.336 17.730 - 17.825: 99.4150% ( 6) 00:16:04.336 17.825 - 17.920: 99.4294% ( 2) 00:16:04.336 17.920 - 18.015: 99.4655% ( 5) 00:16:04.336 18.015 - 18.110: 99.5161% ( 7) 00:16:04.336 18.110 - 18.204: 99.6461% ( 18) 00:16:04.336 18.204 - 18.299: 99.6894% ( 6) 00:16:04.336 18.299 - 18.394: 99.7111% ( 3) 00:16:04.336 18.394 - 18.489: 99.7472% ( 5) 00:16:04.336 18.489 - 18.584: 99.7689% ( 3) 00:16:04.336 18.584 - 18.679: 99.7978% ( 4) 00:16:04.336 18.679 - 18.773: 99.8339% ( 5) 00:16:04.336 18.773 - 18.868: 99.8483% ( 2) 00:16:04.336 19.153 - 19.247: 99.8556% ( 1) 00:16:04.336 19.247 - 19.342: 99.8628% ( 1) 00:16:04.336 21.428 - 21.523: 99.8700% ( 1) 00:16:04.336 23.704 - 23.799: 99.8772% ( 1) 00:16:04.336 24.178 - 24.273: 99.8844% ( 1) 00:16:04.336 24.841 - 25.031: 99.8917% ( 1) 00:16:04.336 28.444 - 28.634: 99.8989% ( 1) 00:16:04.336 3980.705 - 4004.978: 99.9783% ( 11) 00:16:04.336 4004.978 - 4029.250: 100.0000% ( 3) 00:16:04.336 
00:16:04.336 Complete histogram 00:16:04.336 ================== 00:16:04.336 Range in us Cumulative Count 00:16:04.336 2.027 - 2.039: 0.0939% ( 13) 00:16:04.336 2.039 - 2.050: 11.6929% ( 1606) 00:16:04.336 2.050 - 2.062: 22.7286% ( 1528) 00:16:04.336 2.062 - 2.074: 26.4047% ( 509) 00:16:04.336 2.074 - 2.086: 53.1417% ( 3702) 00:16:04.336 2.086 - 2.098: 63.4118% ( 1422) 00:16:04.336 2.098 - 2.110: 65.2752% ( 258) 00:16:04.336 2.110 - 2.121: 71.0891% ( 805) 00:16:04.336 2.121 - 2.133: 72.9164% ( 253) 00:16:04.336 2.133 - 2.145: 76.4770% ( 493) 00:16:04.336 2.145 - 2.157: 86.4654% ( 1383) 00:16:04.336 2.157 - 2.169: 89.3760% ( 403) 00:16:04.336 2.169 - 2.181: 90.7843% ( 195) 00:16:04.336 2.181 - 2.193: 92.2288% ( 200) 00:16:04.336 2.193 - 2.204: 92.9583% ( 101) 00:16:04.336 2.204 - 2.216: 94.3088% ( 187) 00:16:04.336 2.216 - 2.228: 95.4788% ( 162) 00:16:04.336 2.228 - 2.240: 95.7750% ( 41) 00:16:04.336 2.240 - 2.252: 96.0061% ( 32) 00:16:04.336 2.252 - 2.264: 96.1288% ( 17) 00:16:04.336 2.264 - 2.276: 96.2444% ( 16) 00:16:04.336 2.276 - 2.287: 96.3961% ( 21) 00:16:04.336 2.287 - 2.299: 96.4538% ( 8) 00:16:04.336 2.299 - 2.311: 96.5261% ( 10) 00:16:04.336 2.311 - 2.323: 96.7066% ( 25) 00:16:04.336 2.323 - 2.335: 96.9305% ( 31) 00:16:04.336 2.335 - 2.347: 97.1400% ( 29) 00:16:04.336 2.347 - 2.359: 97.4866% ( 48) 00:16:04.336 2.359 - 2.370: 97.8839% ( 55) 00:16:04.336 2.370 - 2.382: 98.1439% ( 36) 00:16:04.336 2.382 - 2.394: 98.2955% ( 21) 00:16:04.336 2.394 - 2.406: 98.3966% ( 14) 00:16:04.336 2.406 - 2.418: 98.4328% ( 5) 00:16:04.336 2.418 - 2.430: 98.4689% ( 5) 00:16:04.336 2.430 - 2.441: 98.5122% ( 6) 00:16:04.336 2.441 - 2.453: 98.5267% ( 2) 00:16:04.336 2.453 - 2.465: 98.5411% ( 2) 00:16:04.336 2.465 - 2.477: 98.5555% ( 2) 00:16:04.336 2.501 - 2.513: 98.5628% ( 1) 00:16:04.336 2.513 - 2.524: 98.5700% ( 1) 00:16:04.336 2.584 - 2.596: 98.5844% ( 2) 00:16:04.336 2.596 - 2.607: 98.5917% ( 1) 00:16:04.336 3.224 - 3.247: 98.6133% ( 3) 00:16:04.336 3.247 - 3.271: 98.6350% 
( 3) 00:16:04.336 3.271 - 3.295: 98.6494% ( 2) 00:16:04.336 3.295 - 3.319: 98.6567% ( 1) 00:16:04.336 3.319 - 3.342: 98.6711% ( 2) 00:16:04.336 3.342 - 3.366: 98.6855% ( 2) 00:16:04.336 3.366 - 3.390: 98.7072% ( 3) 00:16:04.336 3.390 - 3.413: 98.7144% ( 1) 00:16:04.336 3.437 - 3.461: 98.7289% ( 2) 00:16:04.336 3.461 - 3.484: 98.7361% ( 1) 00:16:04.336 3.508 - 3.532: 98.7433% ( 1) 00:16:04.336 3.532 - 3.556: 98.7505% ( 1) 00:16:04.336 3.556 - 3.579: 98.7650% ( 2) 00:16:04.336 3.579 - 3.603: 98.7722% ( 1) 00:16:04.336 3.627 - 3.650: 98.7867% ( 2) 00:16:04.336 3.650 - 3.674: 98.7939% ( 1) 00:16:04.336 3.674 - 3.698: 98.8155% ( 3) 00:16:04.336 4.006 - 4.030: 98.8228% ( 1) 00:16:04.336 5.001 - 5.025: 98.8300% ( 1) 00:16:04.336 5.025 - 5.049: 98.8372% ( 1) 00:16:04.336 5.262 - 5.286: 98.8444% ( 1) 00:16:04.336 5.452 - 5.476: 98.8517% ( 1) 00:16:04.336 5.476 - 5.499: 98.8589% ( 1) 00:16:04.336 5.570 - 5.594: 98.8805% ( 3) 00:16:04.336 5.594 - 5.618: 98.8878% ( 1) 00:16:04.336 5.665 - 5.689: 98.8950% ( 1) 00:16:04.336 5.689 - 5.713: 98.9022% ( 1) 00:16:04.336 5.736 - 5.760: 98.9094% ( 1) 00:16:04.336 5.760 - 5.784: 98.9167% ( 1) 00:16:04.336 5.784 - 5.807: 98.9239% ( 1) 00:16:04.336 5.973 - 5.997: 98.9311% ( 1) 00:16:04.336 6.044 - 6.068: 98.9383% ( 1) 00:16:04.336 6.116 - 6.163: 98.9455% ( 1) 00:16:04.336 6.163 - 6.210: 98.9528% ( 1) 00:16:04.336 6.305 - 6.353: 98.9600% ( 1) 00:16:04.336 6.400 - 6.447: 98.9672% ( 1) 00:16:04.336 6.684 - 6.732: 98.9744% ( 1) 00:16:04.336 7.111 - 7.159: 98.9817% ( 1) 00:16:04.336 7.775 - 7.822: 98.9889% ( 1) 00:16:04.336 15.455 - 15.550: 98.9961% ( 1) 00:16:04.336 15.550 - 15.644: 99.0033% ( 1) 00:16:04.336 15.644 - 15.739: 99.0178% ( 2) 00:16:04.336 15.739 - 15.834: 99.0322% ( 2) 00:16:04.336 15.834 - 15.929: 99.0467% ( 2) 00:16:04.336 15.929 - 16.024: 99.0828% ( 5) 00:16:04.336 16.024 - 16.119: 99.0972% ( 2) 00:16:04.336 16.119 - 16.213: 99.1405% ( 6) 00:16:04.336 16.213 - 16.308: 99.1550% ( 2) 00:16:04.336 16.308 - 16.403: 99.1767% ( 3) 
00:16:04.336 16.403 - 16.498: 99.2055% ( 4) 00:16:04.336 16.498 - 16.593: 99.2272% ( 3) 00:16:04.336 16.593 - 16.687: 99.2489% ( 3) 00:16:04.336 16.687 - 16.782: 99.2778% ( 4) 00:16:04.336 16.782 - 16.877: 99.2994% ( 3) 00:16:04.336 16.877 - 16.972: 99.3283% ( 4) 00:16:04.336 16.972 - 17.067: 99.3572% ( 4) 00:16:04.336 17.067 - 17.161: 99.3789% ( 3) 00:16:04.337 17.161 - 17.256: 99.3933% ( 2) 00:16:04.337 17.351 - 17.446: 99.4078% ( 2) 00:16:04.337 17.541 - 17.636: 99.4294% ( 3) 00:16:04.337 17.920 - 18.015: 99.4367% ( 1) 00:16:04.337 18.489 - 18.584: 99.4439% ( 1) 00:16:04.337 3980.705 - 4004.978: 99.9350% ( 68) 00:16:04.337 4004.978 - 4029.250: 99.9928% ( 8) 00:16:04.337 4174.886 - 4199.159: 100.0000% ( 1) 00:16:04.337 00:16:04.337 01:37:17 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:16:04.337 01:37:17 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:16:04.337 01:37:17 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:16:04.337 01:37:17 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:16:04.337 01:37:17 -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:04.337 [ 00:16:04.337 { 00:16:04.337 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:04.337 "subtype": "Discovery", 00:16:04.337 "listen_addresses": [], 00:16:04.337 "allow_any_host": true, 00:16:04.337 "hosts": [] 00:16:04.337 }, 00:16:04.337 { 00:16:04.337 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:04.337 "subtype": "NVMe", 00:16:04.337 "listen_addresses": [ 00:16:04.337 { 00:16:04.337 "transport": "VFIOUSER", 00:16:04.337 "trtype": "VFIOUSER", 00:16:04.337 "adrfam": "IPv4", 00:16:04.337 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:04.337 "trsvcid": "0" 00:16:04.337 } 00:16:04.337 ], 00:16:04.337 "allow_any_host": true, 00:16:04.337 "hosts": [], 00:16:04.337 
"serial_number": "SPDK1", 00:16:04.337 "model_number": "SPDK bdev Controller", 00:16:04.337 "max_namespaces": 32, 00:16:04.337 "min_cntlid": 1, 00:16:04.337 "max_cntlid": 65519, 00:16:04.337 "namespaces": [ 00:16:04.337 { 00:16:04.337 "nsid": 1, 00:16:04.337 "bdev_name": "Malloc1", 00:16:04.337 "name": "Malloc1", 00:16:04.337 "nguid": "EED5DED3800D4FBBBE7B3BF18BA4733B", 00:16:04.337 "uuid": "eed5ded3-800d-4fbb-be7b-3bf18ba4733b" 00:16:04.337 }, 00:16:04.337 { 00:16:04.337 "nsid": 2, 00:16:04.337 "bdev_name": "Malloc3", 00:16:04.337 "name": "Malloc3", 00:16:04.337 "nguid": "4D23C9EF7EDA48F0BAD96B38FE91C278", 00:16:04.337 "uuid": "4d23c9ef-7eda-48f0-bad9-6b38fe91c278" 00:16:04.337 } 00:16:04.337 ] 00:16:04.337 }, 00:16:04.337 { 00:16:04.337 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:04.337 "subtype": "NVMe", 00:16:04.337 "listen_addresses": [ 00:16:04.337 { 00:16:04.337 "transport": "VFIOUSER", 00:16:04.337 "trtype": "VFIOUSER", 00:16:04.337 "adrfam": "IPv4", 00:16:04.337 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:04.337 "trsvcid": "0" 00:16:04.337 } 00:16:04.337 ], 00:16:04.337 "allow_any_host": true, 00:16:04.337 "hosts": [], 00:16:04.337 "serial_number": "SPDK2", 00:16:04.337 "model_number": "SPDK bdev Controller", 00:16:04.337 "max_namespaces": 32, 00:16:04.337 "min_cntlid": 1, 00:16:04.337 "max_cntlid": 65519, 00:16:04.337 "namespaces": [ 00:16:04.337 { 00:16:04.337 "nsid": 1, 00:16:04.337 "bdev_name": "Malloc2", 00:16:04.337 "name": "Malloc2", 00:16:04.337 "nguid": "D3E6DDFF80354499878228157CD566EA", 00:16:04.337 "uuid": "d3e6ddff-8035-4499-8782-28157cd566ea" 00:16:04.337 } 00:16:04.337 ] 00:16:04.337 } 00:16:04.337 ] 00:16:04.337 01:37:17 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:04.337 01:37:17 -- target/nvmf_vfio_user.sh@34 -- # aerpid=3756093 00:16:04.337 01:37:17 -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER 
traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:16:04.337 01:37:17 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:16:04.337 01:37:17 -- common/autotest_common.sh@1244 -- # local i=0 00:16:04.337 01:37:17 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:04.337 01:37:17 -- common/autotest_common.sh@1251 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:04.337 01:37:17 -- common/autotest_common.sh@1255 -- # return 0 00:16:04.337 01:37:17 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:04.596 01:37:17 -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:16:04.596 EAL: No free 2048 kB hugepages reported on node 1 00:16:04.596 Malloc4 00:16:04.854 01:37:17 -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:16:04.854 01:37:17 -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:05.114 Asynchronous Event Request test 00:16:05.114 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:05.114 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:05.114 Registering asynchronous event callbacks... 00:16:05.114 Starting namespace attribute notice tests for all controllers... 00:16:05.114 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:05.114 aer_cb - Changed Namespace 00:16:05.114 Cleaning up... 
00:16:05.114 [ 00:16:05.114 { 00:16:05.114 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:05.114 "subtype": "Discovery", 00:16:05.114 "listen_addresses": [], 00:16:05.114 "allow_any_host": true, 00:16:05.114 "hosts": [] 00:16:05.114 }, 00:16:05.114 { 00:16:05.114 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:05.114 "subtype": "NVMe", 00:16:05.114 "listen_addresses": [ 00:16:05.114 { 00:16:05.114 "transport": "VFIOUSER", 00:16:05.114 "trtype": "VFIOUSER", 00:16:05.114 "adrfam": "IPv4", 00:16:05.114 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:05.114 "trsvcid": "0" 00:16:05.114 } 00:16:05.114 ], 00:16:05.114 "allow_any_host": true, 00:16:05.114 "hosts": [], 00:16:05.114 "serial_number": "SPDK1", 00:16:05.114 "model_number": "SPDK bdev Controller", 00:16:05.114 "max_namespaces": 32, 00:16:05.114 "min_cntlid": 1, 00:16:05.114 "max_cntlid": 65519, 00:16:05.114 "namespaces": [ 00:16:05.114 { 00:16:05.114 "nsid": 1, 00:16:05.114 "bdev_name": "Malloc1", 00:16:05.114 "name": "Malloc1", 00:16:05.114 "nguid": "EED5DED3800D4FBBBE7B3BF18BA4733B", 00:16:05.114 "uuid": "eed5ded3-800d-4fbb-be7b-3bf18ba4733b" 00:16:05.114 }, 00:16:05.114 { 00:16:05.114 "nsid": 2, 00:16:05.114 "bdev_name": "Malloc3", 00:16:05.114 "name": "Malloc3", 00:16:05.114 "nguid": "4D23C9EF7EDA48F0BAD96B38FE91C278", 00:16:05.114 "uuid": "4d23c9ef-7eda-48f0-bad9-6b38fe91c278" 00:16:05.114 } 00:16:05.114 ] 00:16:05.114 }, 00:16:05.114 { 00:16:05.114 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:05.114 "subtype": "NVMe", 00:16:05.114 "listen_addresses": [ 00:16:05.114 { 00:16:05.114 "transport": "VFIOUSER", 00:16:05.114 "trtype": "VFIOUSER", 00:16:05.114 "adrfam": "IPv4", 00:16:05.114 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:05.114 "trsvcid": "0" 00:16:05.114 } 00:16:05.114 ], 00:16:05.114 "allow_any_host": true, 00:16:05.114 "hosts": [], 00:16:05.114 "serial_number": "SPDK2", 00:16:05.114 "model_number": "SPDK bdev Controller", 00:16:05.114 "max_namespaces": 32, 00:16:05.114 
"min_cntlid": 1, 00:16:05.114 "max_cntlid": 65519, 00:16:05.114 "namespaces": [ 00:16:05.114 { 00:16:05.114 "nsid": 1, 00:16:05.114 "bdev_name": "Malloc2", 00:16:05.114 "name": "Malloc2", 00:16:05.114 "nguid": "D3E6DDFF80354499878228157CD566EA", 00:16:05.114 "uuid": "d3e6ddff-8035-4499-8782-28157cd566ea" 00:16:05.114 }, 00:16:05.114 { 00:16:05.114 "nsid": 2, 00:16:05.114 "bdev_name": "Malloc4", 00:16:05.114 "name": "Malloc4", 00:16:05.114 "nguid": "12FF0DD327254B098148CC685E8329DD", 00:16:05.114 "uuid": "12ff0dd3-2725-4b09-8148-cc685e8329dd" 00:16:05.114 } 00:16:05.114 ] 00:16:05.114 } 00:16:05.114 ] 00:16:05.114 01:37:18 -- target/nvmf_vfio_user.sh@44 -- # wait 3756093 00:16:05.114 01:37:18 -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:16:05.114 01:37:18 -- target/nvmf_vfio_user.sh@95 -- # killprocess 3750327 00:16:05.114 01:37:18 -- common/autotest_common.sh@926 -- # '[' -z 3750327 ']' 00:16:05.114 01:37:18 -- common/autotest_common.sh@930 -- # kill -0 3750327 00:16:05.114 01:37:18 -- common/autotest_common.sh@931 -- # uname 00:16:05.114 01:37:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:05.114 01:37:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3750327 00:16:05.114 01:37:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:05.114 01:37:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:05.114 01:37:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3750327' 00:16:05.114 killing process with pid 3750327 00:16:05.114 01:37:18 -- common/autotest_common.sh@945 -- # kill 3750327 00:16:05.114 [2024-07-23 01:37:18.201228] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:16:05.114 01:37:18 -- common/autotest_common.sh@950 -- # wait 3750327 00:16:05.683 01:37:18 -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 
00:16:05.683 01:37:18 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:05.683 01:37:18 -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:16:05.683 01:37:18 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:16:05.683 01:37:18 -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:16:05.683 01:37:18 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3756239 00:16:05.683 01:37:18 -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:16:05.683 01:37:18 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3756239' 00:16:05.683 Process pid: 3756239 00:16:05.683 01:37:18 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:05.683 01:37:18 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3756239 00:16:05.683 01:37:18 -- common/autotest_common.sh@819 -- # '[' -z 3756239 ']' 00:16:05.683 01:37:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:05.683 01:37:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:05.683 01:37:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:05.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:05.683 01:37:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:05.683 01:37:18 -- common/autotest_common.sh@10 -- # set +x 00:16:05.683 [2024-07-23 01:37:18.574849] thread.c:2927:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:16:05.683 [2024-07-23 01:37:18.575931] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:16:05.683 [2024-07-23 01:37:18.576011] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:05.683 EAL: No free 2048 kB hugepages reported on node 1 00:16:05.683 [2024-07-23 01:37:18.638543] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:05.683 [2024-07-23 01:37:18.726663] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:05.683 [2024-07-23 01:37:18.726807] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:05.683 [2024-07-23 01:37:18.726827] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:05.684 [2024-07-23 01:37:18.726841] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:05.684 [2024-07-23 01:37:18.727034] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:05.684 [2024-07-23 01:37:18.727059] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:05.684 [2024-07-23 01:37:18.727121] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:05.684 [2024-07-23 01:37:18.727124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:05.944 [2024-07-23 01:37:18.826937] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_0) to intr mode from intr mode. 00:16:05.944 [2024-07-23 01:37:18.827187] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_1) to intr mode from intr mode. 00:16:05.944 [2024-07-23 01:37:18.827435] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_2) to intr mode from intr mode. 
00:16:05.944 [2024-07-23 01:37:18.828174] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:16:05.944 [2024-07-23 01:37:18.828278] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_3) to intr mode from intr mode. 00:16:06.513 01:37:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:06.513 01:37:19 -- common/autotest_common.sh@852 -- # return 0 00:16:06.513 01:37:19 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:16:07.453 01:37:20 -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:16:07.711 01:37:20 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:16:07.711 01:37:20 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:16:07.711 01:37:20 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:07.711 01:37:20 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:16:07.711 01:37:20 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:07.969 Malloc1 00:16:07.969 01:37:21 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:16:08.228 01:37:21 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:16:08.486 01:37:21 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:16:08.782 01:37:21 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:08.782 01:37:21 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p 
/var/run/vfio-user/domain/vfio-user2/2 00:16:08.782 01:37:21 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:09.042 Malloc2 00:16:09.042 01:37:22 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:16:09.300 01:37:22 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:16:09.558 01:37:22 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:16:09.818 01:37:22 -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:16:09.818 01:37:22 -- target/nvmf_vfio_user.sh@95 -- # killprocess 3756239 00:16:09.818 01:37:22 -- common/autotest_common.sh@926 -- # '[' -z 3756239 ']' 00:16:09.818 01:37:22 -- common/autotest_common.sh@930 -- # kill -0 3756239 00:16:09.818 01:37:22 -- common/autotest_common.sh@931 -- # uname 00:16:09.818 01:37:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:09.818 01:37:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3756239 00:16:09.818 01:37:22 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:09.818 01:37:22 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:09.818 01:37:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3756239' 00:16:09.818 killing process with pid 3756239 00:16:09.818 01:37:22 -- common/autotest_common.sh@945 -- # kill 3756239 00:16:09.818 01:37:22 -- common/autotest_common.sh@950 -- # wait 3756239 00:16:10.076 01:37:23 -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:10.076 01:37:23 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM 
EXIT 00:16:10.076 00:16:10.076 real 0m53.333s 00:16:10.076 user 3m30.807s 00:16:10.076 sys 0m4.490s 00:16:10.076 01:37:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:10.076 01:37:23 -- common/autotest_common.sh@10 -- # set +x 00:16:10.076 ************************************ 00:16:10.076 END TEST nvmf_vfio_user 00:16:10.076 ************************************ 00:16:10.076 01:37:23 -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:10.076 01:37:23 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:10.076 01:37:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:10.076 01:37:23 -- common/autotest_common.sh@10 -- # set +x 00:16:10.076 ************************************ 00:16:10.076 START TEST nvmf_vfio_user_nvme_compliance 00:16:10.076 ************************************ 00:16:10.076 01:37:23 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:10.076 * Looking for test storage... 
00:16:10.076 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:16:10.076 01:37:23 -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:10.076 01:37:23 -- nvmf/common.sh@7 -- # uname -s 00:16:10.076 01:37:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:10.076 01:37:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:10.076 01:37:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:10.076 01:37:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:10.076 01:37:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:10.076 01:37:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:10.076 01:37:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:10.076 01:37:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:10.076 01:37:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:10.076 01:37:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:10.076 01:37:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:10.076 01:37:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:10.076 01:37:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:10.076 01:37:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:10.076 01:37:23 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:10.076 01:37:23 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:10.076 01:37:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:10.076 01:37:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:10.076 01:37:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:10.076 01:37:23 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:10.076 01:37:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:10.076 01:37:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:10.076 01:37:23 -- paths/export.sh@5 -- # export PATH 00:16:10.076 01:37:23 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:10.076 01:37:23 -- nvmf/common.sh@46 -- # : 0 00:16:10.076 01:37:23 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:10.076 01:37:23 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:10.076 01:37:23 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:10.076 01:37:23 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:10.076 01:37:23 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:10.076 01:37:23 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:10.076 01:37:23 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:10.076 01:37:23 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:10.076 01:37:23 -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:10.076 01:37:23 -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:10.076 01:37:23 -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:16:10.076 01:37:23 -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:16:10.076 01:37:23 -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:16:10.076 01:37:23 -- compliance/compliance.sh@20 -- # nvmfpid=3756861 00:16:10.076 01:37:23 -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:10.076 01:37:23 -- compliance/compliance.sh@21 -- # echo 'Process pid: 3756861' 00:16:10.076 Process pid: 3756861 00:16:10.076 01:37:23 -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' 
SIGINT SIGTERM EXIT 00:16:10.076 01:37:23 -- compliance/compliance.sh@24 -- # waitforlisten 3756861 00:16:10.076 01:37:23 -- common/autotest_common.sh@819 -- # '[' -z 3756861 ']' 00:16:10.076 01:37:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:10.076 01:37:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:10.076 01:37:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:10.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:10.076 01:37:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:10.076 01:37:23 -- common/autotest_common.sh@10 -- # set +x 00:16:10.076 [2024-07-23 01:37:23.173909] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:16:10.076 [2024-07-23 01:37:23.173993] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:10.334 EAL: No free 2048 kB hugepages reported on node 1 00:16:10.334 [2024-07-23 01:37:23.232237] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:10.334 [2024-07-23 01:37:23.314464] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:10.334 [2024-07-23 01:37:23.314619] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:10.334 [2024-07-23 01:37:23.314638] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:10.334 [2024-07-23 01:37:23.314651] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:10.334 [2024-07-23 01:37:23.314701] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:10.334 [2024-07-23 01:37:23.316631] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:10.334 [2024-07-23 01:37:23.316641] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:11.269 01:37:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:11.269 01:37:24 -- common/autotest_common.sh@852 -- # return 0 00:16:11.269 01:37:24 -- compliance/compliance.sh@26 -- # sleep 1 00:16:12.206 01:37:25 -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:12.206 01:37:25 -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:16:12.206 01:37:25 -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:12.206 01:37:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:12.206 01:37:25 -- common/autotest_common.sh@10 -- # set +x 00:16:12.206 01:37:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:12.206 01:37:25 -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:16:12.206 01:37:25 -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:12.206 01:37:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:12.206 01:37:25 -- common/autotest_common.sh@10 -- # set +x 00:16:12.206 malloc0 00:16:12.206 01:37:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:12.206 01:37:25 -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:16:12.206 01:37:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:12.206 01:37:25 -- common/autotest_common.sh@10 -- # set +x 00:16:12.206 01:37:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:12.206 01:37:25 -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:12.206 01:37:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:12.206 
01:37:25 -- common/autotest_common.sh@10 -- # set +x 00:16:12.206 01:37:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:12.206 01:37:25 -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:12.206 01:37:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:12.206 01:37:25 -- common/autotest_common.sh@10 -- # set +x 00:16:12.206 01:37:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:12.206 01:37:25 -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:16:12.206 EAL: No free 2048 kB hugepages reported on node 1 00:16:12.206 00:16:12.206 00:16:12.206 CUnit - A unit testing framework for C - Version 2.1-3 00:16:12.206 http://cunit.sourceforge.net/ 00:16:12.206 00:16:12.206 00:16:12.206 Suite: nvme_compliance 00:16:12.466 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-23 01:37:25.342209] vfio_user.c: 789:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:16:12.466 [2024-07-23 01:37:25.342254] vfio_user.c:5484:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:16:12.466 [2024-07-23 01:37:25.342267] vfio_user.c:5576:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:16:12.466 passed 00:16:12.466 Test: admin_identify_ctrlr_verify_fused ...passed 00:16:12.725 Test: admin_identify_ns ...[2024-07-23 01:37:25.578633] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:16:12.725 [2024-07-23 01:37:25.586632] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:16:12.725 passed 00:16:12.725 Test: admin_get_features_mandatory_features ...passed 00:16:12.725 Test: admin_get_features_optional_features ...passed 00:16:12.984 Test: 
admin_set_features_number_of_queues ...passed 00:16:13.241 Test: admin_get_log_page_mandatory_logs ...passed 00:16:13.241 Test: admin_get_log_page_with_lpo ...[2024-07-23 01:37:26.205647] ctrlr.c:2546:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:16:13.241 passed 00:16:13.241 Test: fabric_property_get ...passed 00:16:13.499 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-23 01:37:26.389205] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:16:13.499 passed 00:16:13.499 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-23 01:37:26.557636] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:13.499 [2024-07-23 01:37:26.573622] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:13.757 passed 00:16:13.757 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-23 01:37:26.664662] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:16:13.757 passed 00:16:13.757 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-23 01:37:26.822626] vfio_user.c:2310:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:13.757 [2024-07-23 01:37:26.845623] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:14.016 passed 00:16:14.016 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-23 01:37:26.937681] vfio_user.c:2150:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:16:14.016 [2024-07-23 01:37:26.937726] vfio_user.c:2144:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:16:14.016 passed 00:16:14.275 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-23 01:37:27.119652] vfio_user.c:2231:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:16:14.275 [2024-07-23 01:37:27.127621] vfio_user.c:2231:handle_create_io_q: *ERROR*: /var/run/vfio-user: 
invalid I/O queue size 257 00:16:14.275 [2024-07-23 01:37:27.135623] vfio_user.c:2031:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:16:14.275 [2024-07-23 01:37:27.143622] vfio_user.c:2031:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:16:14.275 passed 00:16:14.275 Test: admin_create_io_sq_verify_pc ...[2024-07-23 01:37:27.272655] vfio_user.c:2044:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:16:14.275 passed 00:16:15.656 Test: admin_create_io_qp_max_qps ...[2024-07-23 01:37:28.469630] nvme_ctrlr.c:5318:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:16:15.916 passed 00:16:16.174 Test: admin_create_io_sq_shared_cq ...[2024-07-23 01:37:29.069628] vfio_user.c:2310:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:16.174 passed 00:16:16.174 00:16:16.174 Run Summary: Type Total Ran Passed Failed Inactive 00:16:16.174 suites 1 1 n/a 0 0 00:16:16.174 tests 18 18 18 0 0 00:16:16.174 asserts 360 360 360 0 n/a 00:16:16.174 00:16:16.174 Elapsed time = 1.563 seconds 00:16:16.174 01:37:29 -- compliance/compliance.sh@42 -- # killprocess 3756861 00:16:16.174 01:37:29 -- common/autotest_common.sh@926 -- # '[' -z 3756861 ']' 00:16:16.174 01:37:29 -- common/autotest_common.sh@930 -- # kill -0 3756861 00:16:16.174 01:37:29 -- common/autotest_common.sh@931 -- # uname 00:16:16.174 01:37:29 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:16.174 01:37:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3756861 00:16:16.174 01:37:29 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:16.174 01:37:29 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:16.174 01:37:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3756861' 00:16:16.174 killing process with pid 3756861 00:16:16.174 01:37:29 -- common/autotest_common.sh@945 -- # kill 3756861 00:16:16.174 
01:37:29 -- common/autotest_common.sh@950 -- # wait 3756861 00:16:16.432 01:37:29 -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:16:16.432 01:37:29 -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:16:16.432 00:16:16.432 real 0m6.355s 00:16:16.432 user 0m18.230s 00:16:16.432 sys 0m0.571s 00:16:16.432 01:37:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:16.432 01:37:29 -- common/autotest_common.sh@10 -- # set +x 00:16:16.432 ************************************ 00:16:16.432 END TEST nvmf_vfio_user_nvme_compliance 00:16:16.432 ************************************ 00:16:16.432 01:37:29 -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:16.432 01:37:29 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:16.432 01:37:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:16.432 01:37:29 -- common/autotest_common.sh@10 -- # set +x 00:16:16.432 ************************************ 00:16:16.432 START TEST nvmf_vfio_user_fuzz 00:16:16.432 ************************************ 00:16:16.432 01:37:29 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:16.432 * Looking for test storage... 
00:16:16.432 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:16.432 01:37:29 -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:16.432 01:37:29 -- nvmf/common.sh@7 -- # uname -s 00:16:16.432 01:37:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:16.432 01:37:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:16.432 01:37:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:16.432 01:37:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:16.432 01:37:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:16.432 01:37:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:16.432 01:37:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:16.432 01:37:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:16.432 01:37:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:16.432 01:37:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:16.432 01:37:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:16.432 01:37:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:16.432 01:37:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:16.432 01:37:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:16.432 01:37:29 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:16.432 01:37:29 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:16.432 01:37:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:16.432 01:37:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:16.432 01:37:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:16.433 01:37:29 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:16.433 01:37:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:16.433 01:37:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:16.433 01:37:29 -- paths/export.sh@5 -- # export PATH 00:16:16.433 01:37:29 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:16.433 01:37:29 -- nvmf/common.sh@46 -- # : 0 00:16:16.433 01:37:29 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:16.433 01:37:29 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:16.433 01:37:29 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:16.433 01:37:29 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:16.433 01:37:29 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:16.433 01:37:29 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:16.433 01:37:29 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:16.433 01:37:29 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:16.433 01:37:29 -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:16.433 01:37:29 -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:16.433 01:37:29 -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:16.433 01:37:29 -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:16:16.433 01:37:29 -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:16:16.433 01:37:29 -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:16:16.433 01:37:29 -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:16:16.433 01:37:29 -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=3757612 00:16:16.433 01:37:29 -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:16.433 01:37:29 -- 
target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 3757612' 00:16:16.433 Process pid: 3757612 00:16:16.433 01:37:29 -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:16.433 01:37:29 -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 3757612 00:16:16.433 01:37:29 -- common/autotest_common.sh@819 -- # '[' -z 3757612 ']' 00:16:16.433 01:37:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:16.433 01:37:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:16.433 01:37:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:16.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:16.433 01:37:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:16.433 01:37:29 -- common/autotest_common.sh@10 -- # set +x 00:16:17.814 01:37:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:17.814 01:37:30 -- common/autotest_common.sh@852 -- # return 0 00:16:17.814 01:37:30 -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:16:18.750 01:37:31 -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:18.750 01:37:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:18.750 01:37:31 -- common/autotest_common.sh@10 -- # set +x 00:16:18.750 01:37:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:18.750 01:37:31 -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:16:18.750 01:37:31 -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:18.750 01:37:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:18.750 01:37:31 -- common/autotest_common.sh@10 -- # set +x 00:16:18.750 malloc0 00:16:18.750 01:37:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:18.750 01:37:31 -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 
-a -s spdk 00:16:18.750 01:37:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:18.750 01:37:31 -- common/autotest_common.sh@10 -- # set +x 00:16:18.750 01:37:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:18.750 01:37:31 -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:18.750 01:37:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:18.750 01:37:31 -- common/autotest_common.sh@10 -- # set +x 00:16:18.750 01:37:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:18.750 01:37:31 -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:18.750 01:37:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:18.750 01:37:31 -- common/autotest_common.sh@10 -- # set +x 00:16:18.750 01:37:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:18.750 01:37:31 -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:16:18.750 01:37:31 -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/vfio_user_fuzz -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:16:50.820 Fuzzing completed. 
Shutting down the fuzz application 00:16:50.820 00:16:50.820 Dumping successful admin opcodes: 00:16:50.820 8, 9, 10, 24, 00:16:50.820 Dumping successful io opcodes: 00:16:50.820 0, 00:16:50.820 NS: 0x200003a1ef00 I/O qp, Total commands completed: 576905, total successful commands: 2219, random_seed: 1136097920 00:16:50.820 NS: 0x200003a1ef00 admin qp, Total commands completed: 142895, total successful commands: 1162, random_seed: 561579712 00:16:50.820 01:38:02 -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:16:50.820 01:38:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:50.820 01:38:02 -- common/autotest_common.sh@10 -- # set +x 00:16:50.820 01:38:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:50.820 01:38:02 -- target/vfio_user_fuzz.sh@46 -- # killprocess 3757612 00:16:50.820 01:38:02 -- common/autotest_common.sh@926 -- # '[' -z 3757612 ']' 00:16:50.820 01:38:02 -- common/autotest_common.sh@930 -- # kill -0 3757612 00:16:50.820 01:38:02 -- common/autotest_common.sh@931 -- # uname 00:16:50.820 01:38:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:50.820 01:38:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3757612 00:16:50.820 01:38:02 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:50.820 01:38:02 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:50.820 01:38:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3757612' 00:16:50.820 killing process with pid 3757612 00:16:50.820 01:38:02 -- common/autotest_common.sh@945 -- # kill 3757612 00:16:50.820 01:38:02 -- common/autotest_common.sh@950 -- # wait 3757612 00:16:50.820 01:38:02 -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:16:50.820 01:38:02 -- 
target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:16:50.820 00:16:50.820 real 0m33.051s 00:16:50.820 user 0m33.976s 00:16:50.820 sys 0m26.567s 00:16:50.820 01:38:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:50.820 01:38:02 -- common/autotest_common.sh@10 -- # set +x 00:16:50.820 ************************************ 00:16:50.820 END TEST nvmf_vfio_user_fuzz 00:16:50.820 ************************************ 00:16:50.820 01:38:02 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:50.820 01:38:02 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:50.820 01:38:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:50.820 01:38:02 -- common/autotest_common.sh@10 -- # set +x 00:16:50.820 ************************************ 00:16:50.820 START TEST nvmf_host_management 00:16:50.820 ************************************ 00:16:50.820 01:38:02 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:50.820 * Looking for test storage... 
00:16:50.820 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:50.820 01:38:02 -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:50.820 01:38:02 -- nvmf/common.sh@7 -- # uname -s 00:16:50.820 01:38:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:50.820 01:38:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:50.820 01:38:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:50.820 01:38:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:50.820 01:38:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:50.820 01:38:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:50.820 01:38:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:50.820 01:38:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:50.820 01:38:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:50.820 01:38:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:50.820 01:38:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:50.820 01:38:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:50.820 01:38:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:50.820 01:38:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:50.820 01:38:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:50.820 01:38:02 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:50.820 01:38:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:50.820 01:38:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:50.820 01:38:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:50.820 01:38:02 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.820 01:38:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.820 01:38:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.820 01:38:02 -- paths/export.sh@5 -- # export PATH 00:16:50.820 01:38:02 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.820 01:38:02 -- nvmf/common.sh@46 -- # : 0 00:16:50.820 01:38:02 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:50.820 01:38:02 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:50.820 01:38:02 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:50.820 01:38:02 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:50.820 01:38:02 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:50.820 01:38:02 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:50.820 01:38:02 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:50.820 01:38:02 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:50.820 01:38:02 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:50.820 01:38:02 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:50.820 01:38:02 -- target/host_management.sh@104 -- # nvmftestinit 00:16:50.820 01:38:02 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:50.820 01:38:02 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:50.820 01:38:02 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:50.820 01:38:02 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:50.820 01:38:02 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:50.820 01:38:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:50.820 01:38:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:50.820 01:38:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:16:50.820 01:38:02 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:50.820 01:38:02 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:50.820 01:38:02 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:50.820 01:38:02 -- common/autotest_common.sh@10 -- # set +x 00:16:51.436 01:38:04 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:51.436 01:38:04 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:51.436 01:38:04 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:51.436 01:38:04 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:51.436 01:38:04 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:51.436 01:38:04 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:51.436 01:38:04 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:51.436 01:38:04 -- nvmf/common.sh@294 -- # net_devs=() 00:16:51.436 01:38:04 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:51.436 01:38:04 -- nvmf/common.sh@295 -- # e810=() 00:16:51.436 01:38:04 -- nvmf/common.sh@295 -- # local -ga e810 00:16:51.436 01:38:04 -- nvmf/common.sh@296 -- # x722=() 00:16:51.436 01:38:04 -- nvmf/common.sh@296 -- # local -ga x722 00:16:51.436 01:38:04 -- nvmf/common.sh@297 -- # mlx=() 00:16:51.436 01:38:04 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:51.436 01:38:04 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:51.436 01:38:04 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:51.436 01:38:04 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:51.436 01:38:04 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:51.436 01:38:04 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:51.436 01:38:04 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:51.436 01:38:04 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:51.436 01:38:04 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 
00:16:51.436 01:38:04 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:51.436 01:38:04 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:51.436 01:38:04 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:51.436 01:38:04 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:51.436 01:38:04 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:16:51.436 01:38:04 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:16:51.436 01:38:04 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:16:51.436 01:38:04 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:16:51.436 01:38:04 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:51.436 01:38:04 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:51.436 01:38:04 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:51.436 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:51.436 01:38:04 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:51.436 01:38:04 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:51.436 01:38:04 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:51.436 01:38:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:51.436 01:38:04 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:51.436 01:38:04 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:51.436 01:38:04 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:51.436 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:51.436 01:38:04 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:51.436 01:38:04 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:51.436 01:38:04 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:51.436 01:38:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:51.436 01:38:04 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:51.436 01:38:04 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:51.436 01:38:04 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:16:51.436 
01:38:04 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:16:51.436 01:38:04 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:51.436 01:38:04 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:51.436 01:38:04 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:51.436 01:38:04 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:51.436 01:38:04 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:51.436 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:51.436 01:38:04 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:51.436 01:38:04 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:51.436 01:38:04 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:51.436 01:38:04 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:51.436 01:38:04 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:51.436 01:38:04 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:51.436 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:51.436 01:38:04 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:51.436 01:38:04 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:51.436 01:38:04 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:51.436 01:38:04 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:51.436 01:38:04 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:16:51.436 01:38:04 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:16:51.436 01:38:04 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:51.436 01:38:04 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:51.436 01:38:04 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:51.436 01:38:04 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:16:51.436 01:38:04 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:51.436 01:38:04 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:51.436 01:38:04 -- 
nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:16:51.436 01:38:04 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:51.436 01:38:04 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:51.436 01:38:04 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:16:51.436 01:38:04 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:16:51.436 01:38:04 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:16:51.436 01:38:04 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:51.694 01:38:04 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:51.694 01:38:04 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:51.694 01:38:04 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:16:51.694 01:38:04 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:51.694 01:38:04 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:51.694 01:38:04 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:51.694 01:38:04 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:16:51.694 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:51.694 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.131 ms 00:16:51.694 00:16:51.694 --- 10.0.0.2 ping statistics --- 00:16:51.694 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:51.694 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:16:51.694 01:38:04 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:51.694 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:51.694 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:16:51.694 00:16:51.694 --- 10.0.0.1 ping statistics --- 00:16:51.694 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:51.694 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:16:51.694 01:38:04 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:51.694 01:38:04 -- nvmf/common.sh@410 -- # return 0 00:16:51.694 01:38:04 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:51.694 01:38:04 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:51.694 01:38:04 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:51.694 01:38:04 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:51.694 01:38:04 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:51.694 01:38:04 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:51.694 01:38:04 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:51.694 01:38:04 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:16:51.694 01:38:04 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:16:51.694 01:38:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:51.694 01:38:04 -- common/autotest_common.sh@10 -- # set +x 00:16:51.694 ************************************ 00:16:51.694 START TEST nvmf_host_management 00:16:51.694 ************************************ 00:16:51.694 01:38:04 -- common/autotest_common.sh@1104 -- # nvmf_host_management 00:16:51.694 01:38:04 -- target/host_management.sh@69 -- # starttarget 00:16:51.694 01:38:04 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:16:51.694 01:38:04 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:51.694 01:38:04 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:51.694 01:38:04 -- common/autotest_common.sh@10 -- # set +x 00:16:51.694 01:38:04 -- nvmf/common.sh@469 -- # nvmfpid=3763315 00:16:51.694 01:38:04 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:16:51.694 01:38:04 -- nvmf/common.sh@470 -- # waitforlisten 3763315 00:16:51.694 01:38:04 -- common/autotest_common.sh@819 -- # '[' -z 3763315 ']' 00:16:51.694 01:38:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:51.694 01:38:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:51.694 01:38:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:51.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:51.694 01:38:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:51.694 01:38:04 -- common/autotest_common.sh@10 -- # set +x 00:16:51.694 [2024-07-23 01:38:04.654795] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:16:51.694 [2024-07-23 01:38:04.654866] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:51.694 EAL: No free 2048 kB hugepages reported on node 1 00:16:51.694 [2024-07-23 01:38:04.719644] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:51.953 [2024-07-23 01:38:04.805532] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:51.953 [2024-07-23 01:38:04.805691] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:51.953 [2024-07-23 01:38:04.805711] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:51.953 [2024-07-23 01:38:04.805724] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:51.953 [2024-07-23 01:38:04.805825] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:16:51.953 [2024-07-23 01:38:04.805880] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:16:51.953 [2024-07-23 01:38:04.805928] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:16:51.953 [2024-07-23 01:38:04.805931] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:16:52.520 01:38:05 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:16:52.520 01:38:05 -- common/autotest_common.sh@852 -- # return 0
00:16:52.520 01:38:05 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:16:52.520 01:38:05 -- common/autotest_common.sh@718 -- # xtrace_disable
00:16:52.520 01:38:05 -- common/autotest_common.sh@10 -- # set +x
00:16:52.520 01:38:05 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:16:52.520 01:38:05 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:16:52.520 01:38:05 -- common/autotest_common.sh@551 -- # xtrace_disable
00:16:52.520 01:38:05 -- common/autotest_common.sh@10 -- # set +x
00:16:52.520 [2024-07-23 01:38:05.594056] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:16:52.520 01:38:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:16:52.520 01:38:05 -- target/host_management.sh@20 -- # timing_enter create_subsystem
00:16:52.520 01:38:05 -- common/autotest_common.sh@712 -- # xtrace_disable
00:16:52.520 01:38:05 -- common/autotest_common.sh@10 -- # set +x
00:16:52.520 01:38:05 -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:16:52.520 01:38:05 -- target/host_management.sh@23 -- # cat
00:16:52.520 01:38:05 -- target/host_management.sh@30 -- # rpc_cmd
00:16:52.520 01:38:05 -- common/autotest_common.sh@551 -- # xtrace_disable
00:16:52.520 01:38:05 -- common/autotest_common.sh@10 -- # set +x
00:16:52.779 Malloc0
00:16:52.779 [2024-07-23 01:38:05.657296] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:16:52.779 01:38:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:16:52.779 01:38:05 -- target/host_management.sh@31 -- # timing_exit create_subsystems
00:16:52.779 01:38:05 -- common/autotest_common.sh@718 -- # xtrace_disable
00:16:52.779 01:38:05 -- common/autotest_common.sh@10 -- # set +x
00:16:52.779 01:38:05 -- target/host_management.sh@73 -- # perfpid=3763488
00:16:52.779 01:38:05 -- target/host_management.sh@74 -- # waitforlisten 3763488 /var/tmp/bdevperf.sock
00:16:52.779 01:38:05 -- common/autotest_common.sh@819 -- # '[' -z 3763488 ']'
00:16:52.779 01:38:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:16:52.779 01:38:05 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0
00:16:52.779 01:38:05 -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10
00:16:52.779 01:38:05 -- common/autotest_common.sh@824 -- # local max_retries=100
00:16:52.779 01:38:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:16:52.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:16:52.779 01:38:05 -- nvmf/common.sh@520 -- # config=()
00:16:52.779 01:38:05 -- common/autotest_common.sh@828 -- # xtrace_disable
00:16:52.779 01:38:05 -- nvmf/common.sh@520 -- # local subsystem config
00:16:52.779 01:38:05 -- common/autotest_common.sh@10 -- # set +x
00:16:52.779 01:38:05 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}"
00:16:52.779 01:38:05 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF
00:16:52.779 {
00:16:52.779 "params": {
00:16:52.779 "name": "Nvme$subsystem",
00:16:52.779 "trtype": "$TEST_TRANSPORT",
00:16:52.779 "traddr": "$NVMF_FIRST_TARGET_IP",
00:16:52.779 "adrfam": "ipv4",
00:16:52.779 "trsvcid": "$NVMF_PORT",
00:16:52.779 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:16:52.779 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:16:52.779 "hdgst": ${hdgst:-false},
00:16:52.779 "ddgst": ${ddgst:-false}
00:16:52.779 },
00:16:52.779 "method": "bdev_nvme_attach_controller"
00:16:52.779 }
00:16:52.779 EOF
00:16:52.779 )")
00:16:52.779 01:38:05 -- nvmf/common.sh@542 -- # cat
00:16:52.779 01:38:05 -- nvmf/common.sh@544 -- # jq .
00:16:52.779 01:38:05 -- nvmf/common.sh@545 -- # IFS=,
00:16:52.779 01:38:05 -- nvmf/common.sh@546 -- # printf '%s\n' '{
00:16:52.779 "params": {
00:16:52.779 "name": "Nvme0",
00:16:52.779 "trtype": "tcp",
00:16:52.779 "traddr": "10.0.0.2",
00:16:52.779 "adrfam": "ipv4",
00:16:52.779 "trsvcid": "4420",
00:16:52.779 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:16:52.779 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:16:52.779 "hdgst": false,
00:16:52.779 "ddgst": false
00:16:52.779 },
00:16:52.779 "method": "bdev_nvme_attach_controller"
00:16:52.779 }'
00:16:52.779 [2024-07-23 01:38:05.730352] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
00:16:52.779 [2024-07-23 01:38:05.730449] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3763488 ]
00:16:52.779 EAL: No free 2048 kB hugepages reported on node 1
00:16:52.779 [2024-07-23 01:38:05.792504] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:53.037 [2024-07-23 01:38:05.878531] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:16:53.295 Running I/O for 10 seconds...
00:16:53.867 01:38:06 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:16:53.867 01:38:06 -- common/autotest_common.sh@852 -- # return 0
00:16:53.867 01:38:06 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init
00:16:53.867 01:38:06 -- common/autotest_common.sh@551 -- # xtrace_disable
00:16:53.867 01:38:06 -- common/autotest_common.sh@10 -- # set +x
00:16:53.867 01:38:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:16:53.867 01:38:06 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:16:53.867 01:38:06 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1
00:16:53.867 01:38:06 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']'
00:16:53.867 01:38:06 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']'
00:16:53.867 01:38:06 -- target/host_management.sh@52 -- # local ret=1
00:16:53.867 01:38:06 -- target/host_management.sh@53 -- # local i
00:16:53.867 01:38:06 -- target/host_management.sh@54 -- # (( i = 10 ))
00:16:53.867 01:38:06 -- target/host_management.sh@54 -- # (( i != 0 ))
00:16:53.867 01:38:06 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1
00:16:53.867 01:38:06 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops'
00:16:53.867 01:38:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:53.867 01:38:06 -- common/autotest_common.sh@10 -- # set +x 00:16:53.867 01:38:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:53.867 01:38:06 -- target/host_management.sh@55 -- # read_io_count=1195 00:16:53.867 01:38:06 -- target/host_management.sh@58 -- # '[' 1195 -ge 100 ']' 00:16:53.867 01:38:06 -- target/host_management.sh@59 -- # ret=0 00:16:53.867 01:38:06 -- target/host_management.sh@60 -- # break 00:16:53.867 01:38:06 -- target/host_management.sh@64 -- # return 0 00:16:53.867 01:38:06 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:16:53.867 01:38:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:53.867 01:38:06 -- common/autotest_common.sh@10 -- # set +x 00:16:53.867 [2024-07-23 01:38:06.716875] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144baf0 is same with the state(5) to be set 00:16:53.867 [2024-07-23 01:38:06.716950] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144baf0 is same with the state(5) to be set 00:16:53.867 [2024-07-23 01:38:06.716966] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144baf0 is same with the state(5) to be set 00:16:53.867 [2024-07-23 01:38:06.716979] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144baf0 is same with the state(5) to be set 00:16:53.867 [2024-07-23 01:38:06.716991] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144baf0 is same with the state(5) to be set 00:16:53.867 [2024-07-23 01:38:06.717004] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144baf0 is same with the state(5) to be set 00:16:53.867 [2024-07-23 01:38:06.717676] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144baf0 is same with the state(5) to 
be set 00:16:53.867 [2024-07-23 01:38:06.717699] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144baf0 is same with the state(5) to be set 00:16:53.867 [2024-07-23 01:38:06.717713] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144baf0 is same with the state(5) to be set 00:16:53.867 [2024-07-23 01:38:06.717743] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144baf0 is same with the state(5) to be set 00:16:53.867 [2024-07-23 01:38:06.717758] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144baf0 is same with the state(5) to be set 00:16:53.867 [2024-07-23 01:38:06.717772] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144baf0 is same with the state(5) to be set 00:16:53.867 [2024-07-23 01:38:06.717784] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144baf0 is same with the state(5) to be set 00:16:53.867 [2024-07-23 01:38:06.717798] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144baf0 is same with the state(5) to be set 00:16:53.867 [2024-07-23 01:38:06.717810] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144baf0 is same with the state(5) to be set 00:16:53.867 [2024-07-23 01:38:06.717824] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144baf0 is same with the state(5) to be set 00:16:53.867 [2024-07-23 01:38:06.717837] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144baf0 is same with the state(5) to be set 00:16:53.867 [2024-07-23 01:38:06.717850] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144baf0 is same with the state(5) to be set 00:16:53.867 [2024-07-23 01:38:06.717864] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144baf0 is same with the state(5) to be set 00:16:53.867 [2024-07-23 
01:38:06.717878] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144baf0 is same with the state(5) to be set 00:16:53.867 [2024-07-23 01:38:06.717892] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144baf0 is same with the state(5) to be set 00:16:53.867 [2024-07-23 01:38:06.717917] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144baf0 is same with the state(5) to be set 00:16:53.867 [2024-07-23 01:38:06.717930] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144baf0 is same with the state(5) to be set 00:16:53.867 [2024-07-23 01:38:06.717944] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144baf0 is same with the state(5) to be set 00:16:53.867 [2024-07-23 01:38:06.717958] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144baf0 is same with the state(5) to be set 00:16:53.867 [2024-07-23 01:38:06.717981] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144baf0 is same with the state(5) to be set 00:16:53.867 [2024-07-23 01:38:06.717995] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144baf0 is same with the state(5) to be set 00:16:53.867 [2024-07-23 01:38:06.718009] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144baf0 is same with the state(5) to be set 00:16:53.867 [2024-07-23 01:38:06.718023] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144baf0 is same with the state(5) to be set 00:16:53.867 [2024-07-23 01:38:06.718036] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144baf0 is same with the state(5) to be set 00:16:53.867 [2024-07-23 01:38:06.719992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.867 [2024-07-23 
01:38:06.720037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.867 [2024-07-23 01:38:06.720067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.867 [2024-07-23 01:38:06.720084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.867 [2024-07-23 01:38:06.720101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.867 [2024-07-23 01:38:06.720116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.867 [2024-07-23 01:38:06.720138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.867 [2024-07-23 01:38:06.720153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.867 [2024-07-23 01:38:06.720169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.867 [2024-07-23 01:38:06.720184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.867 [2024-07-23 01:38:06.720200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.867 [2024-07-23 01:38:06.720215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.867 [2024-07-23 01:38:06.720231] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.867 [2024-07-23 01:38:06.720247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.867 [2024-07-23 01:38:06.720264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.867 [2024-07-23 01:38:06.720281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.867 [2024-07-23 01:38:06.720298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.867 [2024-07-23 01:38:06.720313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.867 [2024-07-23 01:38:06.720330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.867 [2024-07-23 01:38:06.720347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.867 [2024-07-23 01:38:06.720364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.867 [2024-07-23 01:38:06.720380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.867 [2024-07-23 01:38:06.720397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.867 [2024-07-23 01:38:06.720412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.867 [2024-07-23 01:38:06.720430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.867 [2024-07-23 01:38:06.720445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.867 [2024-07-23 01:38:06.720462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.867 [2024-07-23 01:38:06.720478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.867 [2024-07-23 01:38:06.720495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.867 [2024-07-23 01:38:06.720511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.868 [2024-07-23 01:38:06.720527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.868 [2024-07-23 01:38:06.720546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.868 [2024-07-23 01:38:06.720564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.868 [2024-07-23 01:38:06.720579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.868 [2024-07-23 01:38:06.720597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:32896 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:16:53.868 [2024-07-23 01:38:06.720619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.868 [2024-07-23 01:38:06.720656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.868 [2024-07-23 01:38:06.720671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.868 [2024-07-23 01:38:06.720688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.868 [2024-07-23 01:38:06.720704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.868 [2024-07-23 01:38:06.720720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.868 [2024-07-23 01:38:06.720736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.868 [2024-07-23 01:38:06.720753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.868 [2024-07-23 01:38:06.720769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.868 [2024-07-23 01:38:06.720786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.868 [2024-07-23 01:38:06.720802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.868 [2024-07-23 
01:38:06.720819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.868 [2024-07-23 01:38:06.720834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.868 [2024-07-23 01:38:06.720851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.868 [2024-07-23 01:38:06.720867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.868 [2024-07-23 01:38:06.720884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.868 [2024-07-23 01:38:06.720899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.868 [2024-07-23 01:38:06.720916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.868 [2024-07-23 01:38:06.720931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.868 [2024-07-23 01:38:06.720948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:40960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.868 [2024-07-23 01:38:06.720963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.868 [2024-07-23 01:38:06.720984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:41088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.868 [2024-07-23 01:38:06.721000] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.868 [2024-07-23 01:38:06.721018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:41216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.868 [2024-07-23 01:38:06.721033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.868 [2024-07-23 01:38:06.721051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:41344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.868 [2024-07-23 01:38:06.721067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.868 [2024-07-23 01:38:06.721084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:41472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.868 [2024-07-23 01:38:06.721100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.868 [2024-07-23 01:38:06.721117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.868 [2024-07-23 01:38:06.721134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.868 [2024-07-23 01:38:06.721150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.868 [2024-07-23 01:38:06.721166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.868 [2024-07-23 01:38:06.721184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 
nsid:1 lba:41600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.868 [2024-07-23 01:38:06.721201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.868 [2024-07-23 01:38:06.721219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:41728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.868 [2024-07-23 01:38:06.721236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.868 01:38:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:53.868 [2024-07-23 01:38:06.721254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:41856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.868 [2024-07-23 01:38:06.721270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.868 [2024-07-23 01:38:06.721288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:41984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.868 [2024-07-23 01:38:06.721304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.868 [2024-07-23 01:38:06.721322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:42112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.868 [2024-07-23 01:38:06.721339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.868 [2024-07-23 01:38:06.721356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:42240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.868 [2024-07-23 01:38:06.721371] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.868 [2024-07-23 01:38:06.721392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:42368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.868 01:38:06 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:16:53.868 [2024-07-23 01:38:06.721409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.868 [2024-07-23 01:38:06.721427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:42496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.868 [2024-07-23 01:38:06.721443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.868 [2024-07-23 01:38:06.721460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:42624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.868 [2024-07-23 01:38:06.721476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.868 [2024-07-23 01:38:06.721493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:42752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.868 [2024-07-23 01:38:06.721509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.868 [2024-07-23 01:38:06.721527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:42880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.868 01:38:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:53.868 [2024-07-23 01:38:06.721543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.868 [2024-07-23 01:38:06.721559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:43008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.868 [2024-07-23 01:38:06.721575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.868 [2024-07-23 01:38:06.721592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:43136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.868 [2024-07-23 01:38:06.721609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.868 [2024-07-23 01:38:06.721635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:43264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.868 [2024-07-23 01:38:06.721652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.868 01:38:06 -- common/autotest_common.sh@10 -- # set +x 00:16:53.868 [2024-07-23 01:38:06.721678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:43392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.868 [2024-07-23 01:38:06.721694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.868 [2024-07-23 01:38:06.721712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:43520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.868 [2024-07-23 01:38:06.721728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.869 [2024-07-23 01:38:06.721745] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:43648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.869 [2024-07-23 01:38:06.721760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.869 [2024-07-23 01:38:06.721777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:43776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.869 [2024-07-23 01:38:06.721793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.869 [2024-07-23 01:38:06.721814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:43904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.869 [2024-07-23 01:38:06.721831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.869 [2024-07-23 01:38:06.721848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:44032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.869 [2024-07-23 01:38:06.721863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.869 [2024-07-23 01:38:06.721880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:44160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.869 [2024-07-23 01:38:06.721896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.869 [2024-07-23 01:38:06.721923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:44288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.869 [2024-07-23 01:38:06.721939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.869 [2024-07-23 01:38:06.721956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:44416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.869 [2024-07-23 01:38:06.721972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.869 [2024-07-23 01:38:06.721991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.869 [2024-07-23 01:38:06.722006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.869 [2024-07-23 01:38:06.722023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.869 [2024-07-23 01:38:06.722038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.869 [2024-07-23 01:38:06.722055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.869 [2024-07-23 01:38:06.722070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.869 [2024-07-23 01:38:06.722087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:44544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.869 [2024-07-23 01:38:06.722102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.869 [2024-07-23 01:38:06.722119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:44672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.869 
[2024-07-23 01:38:06.722135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.869 [2024-07-23 01:38:06.722152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:44800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.869 [2024-07-23 01:38:06.722168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.869 [2024-07-23 01:38:06.722185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:44928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:53.869 [2024-07-23 01:38:06.722200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.869 [2024-07-23 01:38:06.722295] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1fc2c00 was disconnected and freed. reset controller. 
00:16:53.869 [2024-07-23 01:38:06.722372] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:53.869 [2024-07-23 01:38:06.722397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.869 [2024-07-23 01:38:06.722414] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:53.869 [2024-07-23 01:38:06.722429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.869 [2024-07-23 01:38:06.722446] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:53.869 [2024-07-23 01:38:06.722461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.869 [2024-07-23 01:38:06.722476] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:53.869 [2024-07-23 01:38:06.722491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:53.869 [2024-07-23 01:38:06.722506] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fc5030 is same with the state(5) to be set 00:16:53.869 [2024-07-23 01:38:06.723619] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:16:53.869 task offset: 36864 on job bdev=Nvme0n1 fails 00:16:53.869 00:16:53.869 Latency(us) 00:16:53.869 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:53.869 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:53.869 Job: Nvme0n1 ended 
in about 0.55 seconds with error 00:16:53.869 Verification LBA range: start 0x0 length 0x400 00:16:53.869 Nvme0n1 : 0.55 2382.95 148.93 116.24 0.00 25282.68 2912.71 28350.39 00:16:53.869 =================================================================================================================== 00:16:53.869 Total : 2382.95 148.93 116.24 0.00 25282.68 2912.71 28350.39 00:16:53.869 [2024-07-23 01:38:06.725487] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:53.869 [2024-07-23 01:38:06.725518] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fc5030 (9): Bad file descriptor 00:16:53.869 01:38:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:53.869 01:38:06 -- target/host_management.sh@87 -- # sleep 1 00:16:53.869 [2024-07-23 01:38:06.817781] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:16:54.803 01:38:07 -- target/host_management.sh@91 -- # kill -9 3763488 00:16:54.803 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3763488) - No such process 00:16:54.803 01:38:07 -- target/host_management.sh@91 -- # true 00:16:54.803 01:38:07 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:16:54.803 01:38:07 -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:16:54.803 01:38:07 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:16:54.803 01:38:07 -- nvmf/common.sh@520 -- # config=() 00:16:54.803 01:38:07 -- nvmf/common.sh@520 -- # local subsystem config 00:16:54.803 01:38:07 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:54.803 01:38:07 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:54.803 { 00:16:54.803 "params": { 00:16:54.803 
"name": "Nvme$subsystem", 00:16:54.803 "trtype": "$TEST_TRANSPORT", 00:16:54.803 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:54.803 "adrfam": "ipv4", 00:16:54.803 "trsvcid": "$NVMF_PORT", 00:16:54.803 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:54.803 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:54.803 "hdgst": ${hdgst:-false}, 00:16:54.803 "ddgst": ${ddgst:-false} 00:16:54.803 }, 00:16:54.803 "method": "bdev_nvme_attach_controller" 00:16:54.803 } 00:16:54.803 EOF 00:16:54.803 )") 00:16:54.803 01:38:07 -- nvmf/common.sh@542 -- # cat 00:16:54.803 01:38:07 -- nvmf/common.sh@544 -- # jq . 00:16:54.803 01:38:07 -- nvmf/common.sh@545 -- # IFS=, 00:16:54.803 01:38:07 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:54.803 "params": { 00:16:54.803 "name": "Nvme0", 00:16:54.803 "trtype": "tcp", 00:16:54.803 "traddr": "10.0.0.2", 00:16:54.803 "adrfam": "ipv4", 00:16:54.803 "trsvcid": "4420", 00:16:54.803 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:54.803 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:54.803 "hdgst": false, 00:16:54.803 "ddgst": false 00:16:54.803 }, 00:16:54.803 "method": "bdev_nvme_attach_controller" 00:16:54.803 }' 00:16:54.803 [2024-07-23 01:38:07.771381] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:16:54.803 [2024-07-23 01:38:07.771473] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3763772 ] 00:16:54.803 EAL: No free 2048 kB hugepages reported on node 1 00:16:54.803 [2024-07-23 01:38:07.833585] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:55.062 [2024-07-23 01:38:07.919459] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:55.062 Running I/O for 1 seconds... 
00:16:56.439 00:16:56.440 Latency(us) 00:16:56.440 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:56.440 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:56.440 Verification LBA range: start 0x0 length 0x400 00:16:56.440 Nvme0n1 : 1.01 2820.84 176.30 0.00 0.00 22370.41 3470.98 30098.01 00:16:56.440 =================================================================================================================== 00:16:56.440 Total : 2820.84 176.30 0.00 0.00 22370.41 3470.98 30098.01 00:16:56.440 01:38:09 -- target/host_management.sh@101 -- # stoptarget 00:16:56.440 01:38:09 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:16:56.440 01:38:09 -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:16:56.440 01:38:09 -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:56.440 01:38:09 -- target/host_management.sh@40 -- # nvmftestfini 00:16:56.440 01:38:09 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:56.440 01:38:09 -- nvmf/common.sh@116 -- # sync 00:16:56.440 01:38:09 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:56.440 01:38:09 -- nvmf/common.sh@119 -- # set +e 00:16:56.440 01:38:09 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:56.440 01:38:09 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:56.440 rmmod nvme_tcp 00:16:56.440 rmmod nvme_fabrics 00:16:56.440 rmmod nvme_keyring 00:16:56.440 01:38:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:56.440 01:38:09 -- nvmf/common.sh@123 -- # set -e 00:16:56.440 01:38:09 -- nvmf/common.sh@124 -- # return 0 00:16:56.440 01:38:09 -- nvmf/common.sh@477 -- # '[' -n 3763315 ']' 00:16:56.440 01:38:09 -- nvmf/common.sh@478 -- # killprocess 3763315 00:16:56.440 01:38:09 -- common/autotest_common.sh@926 -- # '[' -z 3763315 ']' 00:16:56.440 01:38:09 -- 
common/autotest_common.sh@930 -- # kill -0 3763315 00:16:56.440 01:38:09 -- common/autotest_common.sh@931 -- # uname 00:16:56.440 01:38:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:56.440 01:38:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3763315 00:16:56.440 01:38:09 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:16:56.440 01:38:09 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:16:56.440 01:38:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3763315' 00:16:56.440 killing process with pid 3763315 00:16:56.440 01:38:09 -- common/autotest_common.sh@945 -- # kill 3763315 00:16:56.440 01:38:09 -- common/autotest_common.sh@950 -- # wait 3763315 00:16:56.698 [2024-07-23 01:38:09.646313] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:16:56.698 01:38:09 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:56.698 01:38:09 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:56.698 01:38:09 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:56.698 01:38:09 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:56.698 01:38:09 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:56.698 01:38:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:56.698 01:38:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:56.698 01:38:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:59.232 01:38:11 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:16:59.232 00:16:59.232 real 0m7.110s 00:16:59.232 user 0m21.315s 00:16:59.232 sys 0m1.491s 00:16:59.232 01:38:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:59.232 01:38:11 -- common/autotest_common.sh@10 -- # set +x 00:16:59.232 ************************************ 00:16:59.232 END TEST nvmf_host_management 00:16:59.232 ************************************ 00:16:59.232 01:38:11 -- 
target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:16:59.232 00:16:59.232 real 0m9.218s 00:16:59.232 user 0m22.012s 00:16:59.232 sys 0m2.920s 00:16:59.232 01:38:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:59.232 01:38:11 -- common/autotest_common.sh@10 -- # set +x 00:16:59.232 ************************************ 00:16:59.232 END TEST nvmf_host_management 00:16:59.232 ************************************ 00:16:59.232 01:38:11 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:59.232 01:38:11 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:59.232 01:38:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:59.232 01:38:11 -- common/autotest_common.sh@10 -- # set +x 00:16:59.232 ************************************ 00:16:59.232 START TEST nvmf_lvol 00:16:59.232 ************************************ 00:16:59.232 01:38:11 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:59.232 * Looking for test storage... 
00:16:59.232 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:59.232 01:38:11 -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:59.232 01:38:11 -- nvmf/common.sh@7 -- # uname -s 00:16:59.232 01:38:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:59.232 01:38:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:59.232 01:38:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:59.232 01:38:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:59.232 01:38:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:59.232 01:38:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:59.232 01:38:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:59.232 01:38:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:59.232 01:38:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:59.232 01:38:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:59.232 01:38:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:59.232 01:38:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:59.232 01:38:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:59.232 01:38:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:59.232 01:38:11 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:59.232 01:38:11 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:59.232 01:38:11 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:59.232 01:38:11 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:59.232 01:38:11 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:59.232 01:38:11 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.232 01:38:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.233 01:38:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.233 01:38:11 -- paths/export.sh@5 -- # export PATH 00:16:59.233 01:38:11 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:59.233 01:38:11 -- nvmf/common.sh@46 -- # : 0 00:16:59.233 01:38:11 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:59.233 01:38:11 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:59.233 01:38:11 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:59.233 01:38:11 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:59.233 01:38:11 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:59.233 01:38:11 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:59.233 01:38:11 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:59.233 01:38:11 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:59.233 01:38:11 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:59.233 01:38:11 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:59.233 01:38:11 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:16:59.233 01:38:11 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:16:59.233 01:38:11 -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:59.233 01:38:11 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:16:59.233 01:38:11 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:59.233 01:38:11 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:59.233 01:38:11 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:59.233 01:38:11 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:59.233 01:38:11 -- nvmf/common.sh@400 -- # remove_spdk_ns 
00:16:59.233 01:38:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:59.233 01:38:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:59.233 01:38:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:59.233 01:38:11 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:59.233 01:38:11 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:59.233 01:38:11 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:59.233 01:38:11 -- common/autotest_common.sh@10 -- # set +x 00:17:01.137 01:38:13 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:01.137 01:38:13 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:01.137 01:38:13 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:01.137 01:38:13 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:01.137 01:38:13 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:01.137 01:38:13 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:01.137 01:38:13 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:01.137 01:38:13 -- nvmf/common.sh@294 -- # net_devs=() 00:17:01.137 01:38:13 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:01.137 01:38:13 -- nvmf/common.sh@295 -- # e810=() 00:17:01.137 01:38:13 -- nvmf/common.sh@295 -- # local -ga e810 00:17:01.137 01:38:13 -- nvmf/common.sh@296 -- # x722=() 00:17:01.137 01:38:13 -- nvmf/common.sh@296 -- # local -ga x722 00:17:01.137 01:38:13 -- nvmf/common.sh@297 -- # mlx=() 00:17:01.137 01:38:13 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:01.137 01:38:13 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:01.137 01:38:13 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:01.137 01:38:13 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:01.137 01:38:13 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:01.137 01:38:13 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:01.137 01:38:13 -- 
nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:01.137 01:38:13 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:01.137 01:38:13 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:01.137 01:38:13 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:01.137 01:38:13 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:01.137 01:38:13 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:01.137 01:38:13 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:01.137 01:38:13 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:17:01.137 01:38:13 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:17:01.137 01:38:13 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:17:01.137 01:38:13 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:17:01.137 01:38:13 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:01.137 01:38:13 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:01.137 01:38:13 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:01.137 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:01.137 01:38:13 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:01.137 01:38:13 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:01.137 01:38:13 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:01.137 01:38:13 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:01.137 01:38:13 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:01.137 01:38:13 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:01.137 01:38:13 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:01.137 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:01.137 01:38:13 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:01.137 01:38:13 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:01.137 01:38:13 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:01.137 01:38:13 -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:01.137 01:38:13 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:01.137 01:38:13 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:01.137 01:38:13 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:17:01.137 01:38:13 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:17:01.137 01:38:13 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:01.137 01:38:13 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:01.137 01:38:13 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:01.137 01:38:13 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:01.137 01:38:13 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:01.137 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:01.137 01:38:13 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:01.137 01:38:13 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:01.137 01:38:13 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:01.137 01:38:13 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:01.137 01:38:13 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:01.137 01:38:13 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:01.137 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:01.137 01:38:13 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:01.137 01:38:13 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:01.137 01:38:13 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:01.137 01:38:13 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:01.137 01:38:13 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:17:01.137 01:38:13 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:17:01.137 01:38:13 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:01.137 01:38:13 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:01.137 01:38:13 -- nvmf/common.sh@230 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:01.137 01:38:13 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:17:01.137 01:38:13 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:01.137 01:38:13 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:01.137 01:38:13 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:17:01.137 01:38:13 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:01.137 01:38:13 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:01.137 01:38:13 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:17:01.137 01:38:13 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:17:01.137 01:38:13 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:17:01.137 01:38:13 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:01.137 01:38:13 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:01.137 01:38:13 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:01.137 01:38:13 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:17:01.137 01:38:13 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:01.137 01:38:13 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:01.137 01:38:13 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:01.137 01:38:13 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:17:01.137 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:01.138 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:17:01.138 00:17:01.138 --- 10.0.0.2 ping statistics --- 00:17:01.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:01.138 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:17:01.138 01:38:13 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:01.138 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:01.138 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:17:01.138 00:17:01.138 --- 10.0.0.1 ping statistics --- 00:17:01.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:01.138 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:17:01.138 01:38:13 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:01.138 01:38:13 -- nvmf/common.sh@410 -- # return 0 00:17:01.138 01:38:13 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:01.138 01:38:13 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:01.138 01:38:13 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:01.138 01:38:13 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:01.138 01:38:13 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:01.138 01:38:13 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:01.138 01:38:13 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:01.138 01:38:13 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:17:01.138 01:38:13 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:01.138 01:38:13 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:01.138 01:38:13 -- common/autotest_common.sh@10 -- # set +x 00:17:01.138 01:38:13 -- nvmf/common.sh@469 -- # nvmfpid=3765883 00:17:01.138 01:38:13 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:17:01.138 01:38:13 -- nvmf/common.sh@470 -- # waitforlisten 3765883 00:17:01.138 01:38:13 -- common/autotest_common.sh@819 -- # '[' -z 3765883 ']' 00:17:01.138 01:38:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:01.138 01:38:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:01.138 01:38:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:01.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:01.138 01:38:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:01.138 01:38:13 -- common/autotest_common.sh@10 -- # set +x 00:17:01.138 [2024-07-23 01:38:13.975386] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:17:01.138 [2024-07-23 01:38:13.975483] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:01.138 EAL: No free 2048 kB hugepages reported on node 1 00:17:01.138 [2024-07-23 01:38:14.044081] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:01.138 [2024-07-23 01:38:14.131972] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:01.138 [2024-07-23 01:38:14.132158] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:01.138 [2024-07-23 01:38:14.132178] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:01.138 [2024-07-23 01:38:14.132194] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:01.138 [2024-07-23 01:38:14.132288] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:01.138 [2024-07-23 01:38:14.132344] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:01.138 [2024-07-23 01:38:14.132361] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:02.075 01:38:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:02.075 01:38:14 -- common/autotest_common.sh@852 -- # return 0 00:17:02.075 01:38:14 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:02.075 01:38:14 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:02.075 01:38:14 -- common/autotest_common.sh@10 -- # set +x 00:17:02.075 01:38:14 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:02.075 01:38:14 -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:02.075 [2024-07-23 01:38:15.131071] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:02.075 01:38:15 -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:02.333 01:38:15 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:17:02.333 01:38:15 -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:02.592 01:38:15 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:17:02.592 01:38:15 -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:17:02.850 01:38:15 -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:17:03.108 01:38:16 -- target/nvmf_lvol.sh@29 -- # lvs=b156930c-b7a0-498e-8dad-11c577986850 00:17:03.108 01:38:16 -- target/nvmf_lvol.sh@32 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b156930c-b7a0-498e-8dad-11c577986850 lvol 20 00:17:03.366 01:38:16 -- target/nvmf_lvol.sh@32 -- # lvol=acc6945f-dc4c-4eb1-a32e-01726fe3eb91 00:17:03.366 01:38:16 -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:03.624 01:38:16 -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 acc6945f-dc4c-4eb1-a32e-01726fe3eb91 00:17:03.881 01:38:16 -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:04.139 [2024-07-23 01:38:17.106941] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:04.139 01:38:17 -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:04.397 01:38:17 -- target/nvmf_lvol.sh@42 -- # perf_pid=3766330 00:17:04.397 01:38:17 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:17:04.397 01:38:17 -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:17:04.397 EAL: No free 2048 kB hugepages reported on node 1 00:17:05.334 01:38:18 -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot acc6945f-dc4c-4eb1-a32e-01726fe3eb91 MY_SNAPSHOT 00:17:05.592 01:38:18 -- target/nvmf_lvol.sh@47 -- # snapshot=797abb01-2551-40c2-968f-ec71ee9c5fdd 00:17:05.592 01:38:18 -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 
acc6945f-dc4c-4eb1-a32e-01726fe3eb91 30 00:17:06.161 01:38:18 -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 797abb01-2551-40c2-968f-ec71ee9c5fdd MY_CLONE 00:17:06.161 01:38:19 -- target/nvmf_lvol.sh@49 -- # clone=a0ed419f-cf42-4b6d-b286-25ae6821b3fa 00:17:06.161 01:38:19 -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate a0ed419f-cf42-4b6d-b286-25ae6821b3fa 00:17:06.730 01:38:19 -- target/nvmf_lvol.sh@53 -- # wait 3766330 00:17:14.903 Initializing NVMe Controllers 00:17:14.903 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:17:14.903 Controller IO queue size 128, less than required. 00:17:14.903 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:14.903 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:17:14.903 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:17:14.903 Initialization complete. Launching workers. 
00:17:14.903 ======================================================== 00:17:14.903 Latency(us) 00:17:14.903 Device Information : IOPS MiB/s Average min max 00:17:14.903 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10793.50 42.16 11863.36 1414.40 78879.65 00:17:14.903 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10750.90 42.00 11909.27 1902.51 73151.18 00:17:14.903 ======================================================== 00:17:14.903 Total : 21544.40 84.16 11886.27 1414.40 78879.65 00:17:14.903 00:17:14.903 01:38:27 -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:15.162 01:38:28 -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete acc6945f-dc4c-4eb1-a32e-01726fe3eb91 00:17:15.420 01:38:28 -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b156930c-b7a0-498e-8dad-11c577986850 00:17:15.678 01:38:28 -- target/nvmf_lvol.sh@60 -- # rm -f 00:17:15.678 01:38:28 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:17:15.678 01:38:28 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:17:15.678 01:38:28 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:15.678 01:38:28 -- nvmf/common.sh@116 -- # sync 00:17:15.678 01:38:28 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:15.678 01:38:28 -- nvmf/common.sh@119 -- # set +e 00:17:15.678 01:38:28 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:15.678 01:38:28 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:15.678 rmmod nvme_tcp 00:17:15.678 rmmod nvme_fabrics 00:17:15.678 rmmod nvme_keyring 00:17:15.678 01:38:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:15.678 01:38:28 -- nvmf/common.sh@123 -- # set -e 00:17:15.678 01:38:28 -- nvmf/common.sh@124 -- # return 0 00:17:15.678 01:38:28 -- nvmf/common.sh@477 -- # '[' 
-n 3765883 ']' 00:17:15.678 01:38:28 -- nvmf/common.sh@478 -- # killprocess 3765883 00:17:15.678 01:38:28 -- common/autotest_common.sh@926 -- # '[' -z 3765883 ']' 00:17:15.678 01:38:28 -- common/autotest_common.sh@930 -- # kill -0 3765883 00:17:15.678 01:38:28 -- common/autotest_common.sh@931 -- # uname 00:17:15.678 01:38:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:15.678 01:38:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3765883 00:17:15.678 01:38:28 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:15.678 01:38:28 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:15.678 01:38:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3765883' 00:17:15.678 killing process with pid 3765883 00:17:15.678 01:38:28 -- common/autotest_common.sh@945 -- # kill 3765883 00:17:15.678 01:38:28 -- common/autotest_common.sh@950 -- # wait 3765883 00:17:15.937 01:38:28 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:15.937 01:38:28 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:15.937 01:38:28 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:15.937 01:38:28 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:15.937 01:38:28 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:15.937 01:38:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:15.937 01:38:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:15.937 01:38:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:18.476 01:38:31 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:17:18.476 00:17:18.476 real 0m19.268s 00:17:18.476 user 1m3.919s 00:17:18.476 sys 0m6.235s 00:17:18.476 01:38:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:18.476 01:38:31 -- common/autotest_common.sh@10 -- # set +x 00:17:18.476 ************************************ 00:17:18.476 END TEST nvmf_lvol 00:17:18.476 
************************************ 00:17:18.476 01:38:31 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:17:18.476 01:38:31 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:18.476 01:38:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:18.476 01:38:31 -- common/autotest_common.sh@10 -- # set +x 00:17:18.476 ************************************ 00:17:18.476 START TEST nvmf_lvs_grow 00:17:18.476 ************************************ 00:17:18.476 01:38:31 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:17:18.476 * Looking for test storage... 00:17:18.476 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:18.476 01:38:31 -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:18.476 01:38:31 -- nvmf/common.sh@7 -- # uname -s 00:17:18.476 01:38:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:18.476 01:38:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:18.476 01:38:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:18.476 01:38:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:18.476 01:38:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:18.476 01:38:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:18.476 01:38:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:18.476 01:38:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:18.476 01:38:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:18.476 01:38:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:18.476 01:38:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:18.476 01:38:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 
00:17:18.476 01:38:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:18.476 01:38:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:18.476 01:38:31 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:18.476 01:38:31 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:18.476 01:38:31 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:18.476 01:38:31 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:18.476 01:38:31 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:18.476 01:38:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.476 01:38:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.476 01:38:31 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.476 01:38:31 -- paths/export.sh@5 -- # export PATH 00:17:18.476 01:38:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.476 01:38:31 -- nvmf/common.sh@46 -- # : 0 00:17:18.476 01:38:31 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:18.476 01:38:31 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:18.476 01:38:31 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:18.476 01:38:31 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:18.476 01:38:31 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:18.476 01:38:31 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:18.476 01:38:31 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:18.476 01:38:31 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:18.476 01:38:31 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:18.476 01:38:31 -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:18.476 01:38:31 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:17:18.476 01:38:31 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:18.476 01:38:31 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:18.476 01:38:31 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:18.476 01:38:31 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:18.476 01:38:31 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:18.476 01:38:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:18.476 01:38:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:18.476 01:38:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:18.476 01:38:31 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:18.476 01:38:31 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:18.476 01:38:31 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:18.476 01:38:31 -- common/autotest_common.sh@10 -- # set +x 00:17:20.381 01:38:33 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:20.381 01:38:33 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:20.381 01:38:33 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:20.381 01:38:33 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:20.381 01:38:33 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:20.381 01:38:33 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:20.381 01:38:33 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:20.381 01:38:33 -- nvmf/common.sh@294 -- # net_devs=() 00:17:20.381 01:38:33 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:20.381 01:38:33 -- nvmf/common.sh@295 -- # e810=() 00:17:20.381 01:38:33 -- nvmf/common.sh@295 -- # local -ga e810 00:17:20.381 01:38:33 -- nvmf/common.sh@296 -- # x722=() 00:17:20.381 01:38:33 -- nvmf/common.sh@296 -- # local -ga x722 00:17:20.381 01:38:33 -- nvmf/common.sh@297 -- # mlx=() 00:17:20.381 01:38:33 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:20.382 01:38:33 -- 
nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:20.382 01:38:33 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:20.382 01:38:33 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:20.382 01:38:33 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:20.382 01:38:33 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:20.382 01:38:33 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:20.382 01:38:33 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:20.382 01:38:33 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:20.382 01:38:33 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:20.382 01:38:33 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:20.382 01:38:33 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:20.382 01:38:33 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:20.382 01:38:33 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:17:20.382 01:38:33 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:17:20.382 01:38:33 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:17:20.382 01:38:33 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:17:20.382 01:38:33 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:20.382 01:38:33 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:20.382 01:38:33 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:20.382 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:20.382 01:38:33 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:20.382 01:38:33 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:20.382 01:38:33 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:20.382 01:38:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:20.382 01:38:33 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:20.382 
01:38:33 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:20.382 01:38:33 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:20.382 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:20.382 01:38:33 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:20.382 01:38:33 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:20.382 01:38:33 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:20.382 01:38:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:20.382 01:38:33 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:20.382 01:38:33 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:20.382 01:38:33 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:17:20.382 01:38:33 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:17:20.382 01:38:33 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:20.382 01:38:33 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:20.382 01:38:33 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:20.382 01:38:33 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:20.382 01:38:33 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:20.382 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:20.382 01:38:33 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:20.382 01:38:33 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:20.382 01:38:33 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:20.382 01:38:33 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:20.382 01:38:33 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:20.382 01:38:33 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:20.382 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:20.382 01:38:33 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:20.382 01:38:33 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:20.382 01:38:33 -- 
nvmf/common.sh@402 -- # is_hw=yes 00:17:20.382 01:38:33 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:20.382 01:38:33 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:17:20.382 01:38:33 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:17:20.382 01:38:33 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:20.382 01:38:33 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:20.382 01:38:33 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:20.382 01:38:33 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:17:20.382 01:38:33 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:20.382 01:38:33 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:20.382 01:38:33 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:17:20.382 01:38:33 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:20.382 01:38:33 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:20.382 01:38:33 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:17:20.382 01:38:33 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:17:20.382 01:38:33 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:17:20.382 01:38:33 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:20.382 01:38:33 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:20.382 01:38:33 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:20.382 01:38:33 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:17:20.382 01:38:33 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:20.382 01:38:33 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:20.382 01:38:33 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:20.382 01:38:33 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:17:20.382 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:20.382 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.138 ms 00:17:20.382 00:17:20.382 --- 10.0.0.2 ping statistics --- 00:17:20.382 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:20.382 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:17:20.382 01:38:33 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:20.382 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:20.382 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:17:20.382 00:17:20.382 --- 10.0.0.1 ping statistics --- 00:17:20.382 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:20.382 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:17:20.382 01:38:33 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:20.382 01:38:33 -- nvmf/common.sh@410 -- # return 0 00:17:20.382 01:38:33 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:20.382 01:38:33 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:20.382 01:38:33 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:20.382 01:38:33 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:20.382 01:38:33 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:20.382 01:38:33 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:20.382 01:38:33 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:20.382 01:38:33 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:17:20.382 01:38:33 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:20.382 01:38:33 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:20.382 01:38:33 -- common/autotest_common.sh@10 -- # set +x 00:17:20.382 01:38:33 -- nvmf/common.sh@469 -- # nvmfpid=3769635 00:17:20.382 01:38:33 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:20.382 01:38:33 -- nvmf/common.sh@470 -- # waitforlisten 3769635 00:17:20.382 01:38:33 -- 
common/autotest_common.sh@819 -- # '[' -z 3769635 ']' 00:17:20.382 01:38:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:20.382 01:38:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:20.382 01:38:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:20.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:20.382 01:38:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:20.382 01:38:33 -- common/autotest_common.sh@10 -- # set +x 00:17:20.382 [2024-07-23 01:38:33.253862] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:17:20.382 [2024-07-23 01:38:33.253950] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:20.382 EAL: No free 2048 kB hugepages reported on node 1 00:17:20.382 [2024-07-23 01:38:33.318283] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:20.382 [2024-07-23 01:38:33.404176] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:20.382 [2024-07-23 01:38:33.404330] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:20.382 [2024-07-23 01:38:33.404348] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:20.382 [2024-07-23 01:38:33.404360] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:20.382 [2024-07-23 01:38:33.404390] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:21.320 01:38:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:21.320 01:38:34 -- common/autotest_common.sh@852 -- # return 0 00:17:21.320 01:38:34 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:21.320 01:38:34 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:21.320 01:38:34 -- common/autotest_common.sh@10 -- # set +x 00:17:21.320 01:38:34 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:21.320 01:38:34 -- target/nvmf_lvs_grow.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:21.578 [2024-07-23 01:38:34.460494] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:21.578 01:38:34 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:17:21.578 01:38:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:17:21.578 01:38:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:21.578 01:38:34 -- common/autotest_common.sh@10 -- # set +x 00:17:21.578 ************************************ 00:17:21.578 START TEST lvs_grow_clean 00:17:21.578 ************************************ 00:17:21.578 01:38:34 -- common/autotest_common.sh@1104 -- # lvs_grow 00:17:21.578 01:38:34 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:21.578 01:38:34 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:17:21.578 01:38:34 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:17:21.578 01:38:34 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:17:21.578 01:38:34 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:21.578 01:38:34 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:17:21.578 01:38:34 -- target/nvmf_lvs_grow.sh@23 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:21.578 01:38:34 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:21.578 01:38:34 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:21.838 01:38:34 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:17:21.838 01:38:34 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:17:22.097 01:38:34 -- target/nvmf_lvs_grow.sh@28 -- # lvs=33bbf6bb-a998-4543-981e-6219dee3a995 00:17:22.097 01:38:34 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 33bbf6bb-a998-4543-981e-6219dee3a995 00:17:22.097 01:38:34 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:22.356 01:38:35 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:22.356 01:38:35 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:22.356 01:38:35 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 33bbf6bb-a998-4543-981e-6219dee3a995 lvol 150 00:17:22.616 01:38:35 -- target/nvmf_lvs_grow.sh@33 -- # lvol=b41a777d-4f3b-424f-b597-9702cd14fa94 00:17:22.616 01:38:35 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:22.616 01:38:35 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:22.616 [2024-07-23 01:38:35.713802] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:22.616 [2024-07-23 01:38:35.713893] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:22.877 true 00:17:22.877 01:38:35 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 33bbf6bb-a998-4543-981e-6219dee3a995 00:17:22.877 01:38:35 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:17:23.136 01:38:35 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:23.136 01:38:35 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:23.395 01:38:36 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b41a777d-4f3b-424f-b597-9702cd14fa94 00:17:23.654 01:38:36 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:23.912 [2024-07-23 01:38:36.801173] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:23.912 01:38:36 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:24.171 01:38:37 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3770215 00:17:24.171 01:38:37 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:24.171 01:38:37 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:24.171 01:38:37 -- 
target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3770215 /var/tmp/bdevperf.sock 00:17:24.171 01:38:37 -- common/autotest_common.sh@819 -- # '[' -z 3770215 ']' 00:17:24.171 01:38:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:24.171 01:38:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:24.171 01:38:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:24.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:24.171 01:38:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:24.171 01:38:37 -- common/autotest_common.sh@10 -- # set +x 00:17:24.171 [2024-07-23 01:38:37.130533] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:17:24.171 [2024-07-23 01:38:37.130633] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3770215 ] 00:17:24.171 EAL: No free 2048 kB hugepages reported on node 1 00:17:24.171 [2024-07-23 01:38:37.193783] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:24.430 [2024-07-23 01:38:37.282457] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:24.999 01:38:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:24.999 01:38:38 -- common/autotest_common.sh@852 -- # return 0 00:17:24.999 01:38:38 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:17:25.567 Nvme0n1 00:17:25.567 01:38:38 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 
3000 00:17:25.827 [ 00:17:25.827 { 00:17:25.827 "name": "Nvme0n1", 00:17:25.827 "aliases": [ 00:17:25.827 "b41a777d-4f3b-424f-b597-9702cd14fa94" 00:17:25.827 ], 00:17:25.827 "product_name": "NVMe disk", 00:17:25.827 "block_size": 4096, 00:17:25.827 "num_blocks": 38912, 00:17:25.827 "uuid": "b41a777d-4f3b-424f-b597-9702cd14fa94", 00:17:25.827 "assigned_rate_limits": { 00:17:25.827 "rw_ios_per_sec": 0, 00:17:25.827 "rw_mbytes_per_sec": 0, 00:17:25.827 "r_mbytes_per_sec": 0, 00:17:25.827 "w_mbytes_per_sec": 0 00:17:25.827 }, 00:17:25.827 "claimed": false, 00:17:25.827 "zoned": false, 00:17:25.827 "supported_io_types": { 00:17:25.827 "read": true, 00:17:25.827 "write": true, 00:17:25.827 "unmap": true, 00:17:25.827 "write_zeroes": true, 00:17:25.827 "flush": true, 00:17:25.827 "reset": true, 00:17:25.827 "compare": true, 00:17:25.827 "compare_and_write": true, 00:17:25.827 "abort": true, 00:17:25.827 "nvme_admin": true, 00:17:25.827 "nvme_io": true 00:17:25.827 }, 00:17:25.827 "driver_specific": { 00:17:25.827 "nvme": [ 00:17:25.827 { 00:17:25.827 "trid": { 00:17:25.827 "trtype": "TCP", 00:17:25.827 "adrfam": "IPv4", 00:17:25.827 "traddr": "10.0.0.2", 00:17:25.827 "trsvcid": "4420", 00:17:25.827 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:25.827 }, 00:17:25.827 "ctrlr_data": { 00:17:25.827 "cntlid": 1, 00:17:25.827 "vendor_id": "0x8086", 00:17:25.827 "model_number": "SPDK bdev Controller", 00:17:25.827 "serial_number": "SPDK0", 00:17:25.827 "firmware_revision": "24.01.1", 00:17:25.827 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:25.827 "oacs": { 00:17:25.827 "security": 0, 00:17:25.827 "format": 0, 00:17:25.827 "firmware": 0, 00:17:25.827 "ns_manage": 0 00:17:25.827 }, 00:17:25.827 "multi_ctrlr": true, 00:17:25.827 "ana_reporting": false 00:17:25.827 }, 00:17:25.827 "vs": { 00:17:25.827 "nvme_version": "1.3" 00:17:25.827 }, 00:17:25.827 "ns_data": { 00:17:25.827 "id": 1, 00:17:25.827 "can_share": true 00:17:25.827 } 00:17:25.827 } 00:17:25.827 ], 00:17:25.827 
"mp_policy": "active_passive" 00:17:25.827 } 00:17:25.827 } 00:17:25.827 ] 00:17:25.827 01:38:38 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3770365 00:17:25.827 01:38:38 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:17:25.827 01:38:38 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:25.827 Running I/O for 10 seconds... 00:17:26.767 Latency(us) 00:17:26.767 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:26.767 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:26.767 Nvme0n1 : 1.00 14336.00 56.00 0.00 0.00 0.00 0.00 0.00 00:17:26.767 =================================================================================================================== 00:17:26.767 Total : 14336.00 56.00 0.00 0.00 0.00 0.00 0.00 00:17:26.767 00:17:27.702 01:38:40 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 33bbf6bb-a998-4543-981e-6219dee3a995 00:17:27.960 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:27.960 Nvme0n1 : 2.00 14499.50 56.64 0.00 0.00 0.00 0.00 0.00 00:17:27.960 =================================================================================================================== 00:17:27.960 Total : 14499.50 56.64 0.00 0.00 0.00 0.00 0.00 00:17:27.960 00:17:27.960 true 00:17:27.960 01:38:41 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 33bbf6bb-a998-4543-981e-6219dee3a995 00:17:27.960 01:38:41 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:17:28.219 01:38:41 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:17:28.219 01:38:41 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:17:28.219 01:38:41 -- target/nvmf_lvs_grow.sh@65 -- # wait 3770365 00:17:28.787 Job: 
Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:28.787 Nvme0n1 : 3.00 14594.67 57.01 0.00 0.00 0.00 0.00 0.00 00:17:28.787 =================================================================================================================== 00:17:28.787 Total : 14594.67 57.01 0.00 0.00 0.00 0.00 0.00 00:17:28.787 00:17:30.203 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:30.203 Nvme0n1 : 4.00 14690.00 57.38 0.00 0.00 0.00 0.00 0.00 00:17:30.203 =================================================================================================================== 00:17:30.203 Total : 14690.00 57.38 0.00 0.00 0.00 0.00 0.00 00:17:30.203 00:17:31.140 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:31.140 Nvme0n1 : 5.00 14759.80 57.66 0.00 0.00 0.00 0.00 0.00 00:17:31.140 =================================================================================================================== 00:17:31.140 Total : 14759.80 57.66 0.00 0.00 0.00 0.00 0.00 00:17:31.140 00:17:32.079 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:32.079 Nvme0n1 : 6.00 14817.17 57.88 0.00 0.00 0.00 0.00 0.00 00:17:32.079 =================================================================================================================== 00:17:32.079 Total : 14817.17 57.88 0.00 0.00 0.00 0.00 0.00 00:17:32.079 00:17:33.021 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:33.021 Nvme0n1 : 7.00 14858.14 58.04 0.00 0.00 0.00 0.00 0.00 00:17:33.021 =================================================================================================================== 00:17:33.021 Total : 14858.14 58.04 0.00 0.00 0.00 0.00 0.00 00:17:33.021 00:17:33.956 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:33.956 Nvme0n1 : 8.00 14897.00 58.19 0.00 0.00 0.00 0.00 0.00 00:17:33.956 
=================================================================================================================== 00:17:33.956 Total : 14897.00 58.19 0.00 0.00 0.00 0.00 0.00 00:17:33.956 00:17:34.896 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:34.896 Nvme0n1 : 9.00 14927.00 58.31 0.00 0.00 0.00 0.00 0.00 00:17:34.896 =================================================================================================================== 00:17:34.896 Total : 14927.00 58.31 0.00 0.00 0.00 0.00 0.00 00:17:34.896 00:17:35.833 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:35.833 Nvme0n1 : 10.00 14951.20 58.40 0.00 0.00 0.00 0.00 0.00 00:17:35.833 =================================================================================================================== 00:17:35.833 Total : 14951.20 58.40 0.00 0.00 0.00 0.00 0.00 00:17:35.833 00:17:35.833 00:17:35.833 Latency(us) 00:17:35.833 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:35.833 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:35.833 Nvme0n1 : 10.01 14951.62 58.40 0.00 0.00 8554.82 2148.12 13495.56 00:17:35.833 =================================================================================================================== 00:17:35.833 Total : 14951.62 58.40 0.00 0.00 8554.82 2148.12 13495.56 00:17:35.833 0 00:17:35.833 01:38:48 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3770215 00:17:35.833 01:38:48 -- common/autotest_common.sh@926 -- # '[' -z 3770215 ']' 00:17:35.833 01:38:48 -- common/autotest_common.sh@930 -- # kill -0 3770215 00:17:35.833 01:38:48 -- common/autotest_common.sh@931 -- # uname 00:17:35.833 01:38:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:35.833 01:38:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3770215 00:17:36.091 01:38:48 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:36.091 01:38:48 -- 
common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:36.091 01:38:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3770215' 00:17:36.091 killing process with pid 3770215 00:17:36.091 01:38:48 -- common/autotest_common.sh@945 -- # kill 3770215 00:17:36.091 Received shutdown signal, test time was about 10.000000 seconds 00:17:36.091 00:17:36.091 Latency(us) 00:17:36.092 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:36.092 =================================================================================================================== 00:17:36.092 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:36.092 01:38:48 -- common/autotest_common.sh@950 -- # wait 3770215 00:17:36.092 01:38:49 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:36.659 01:38:49 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 33bbf6bb-a998-4543-981e-6219dee3a995 00:17:36.659 01:38:49 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:17:36.659 01:38:49 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:17:36.659 01:38:49 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:17:36.659 01:38:49 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:36.918 [2024-07-23 01:38:49.903339] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:36.918 01:38:49 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 33bbf6bb-a998-4543-981e-6219dee3a995 00:17:36.918 01:38:49 -- common/autotest_common.sh@640 -- # local es=0 00:17:36.918 01:38:49 -- common/autotest_common.sh@642 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 33bbf6bb-a998-4543-981e-6219dee3a995 00:17:36.918 01:38:49 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:36.918 01:38:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:36.918 01:38:49 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:36.918 01:38:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:36.918 01:38:49 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:36.918 01:38:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:36.918 01:38:49 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:36.918 01:38:49 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:36.918 01:38:49 -- common/autotest_common.sh@643 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 33bbf6bb-a998-4543-981e-6219dee3a995 00:17:37.178 request: 00:17:37.178 { 00:17:37.178 "uuid": "33bbf6bb-a998-4543-981e-6219dee3a995", 00:17:37.178 "method": "bdev_lvol_get_lvstores", 00:17:37.178 "req_id": 1 00:17:37.178 } 00:17:37.178 Got JSON-RPC error response 00:17:37.178 response: 00:17:37.178 { 00:17:37.178 "code": -19, 00:17:37.178 "message": "No such device" 00:17:37.178 } 00:17:37.178 01:38:50 -- common/autotest_common.sh@643 -- # es=1 00:17:37.178 01:38:50 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:37.178 01:38:50 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:37.178 01:38:50 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:37.178 01:38:50 -- target/nvmf_lvs_grow.sh@85 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:37.437 aio_bdev 00:17:37.437 01:38:50 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev b41a777d-4f3b-424f-b597-9702cd14fa94 00:17:37.437 01:38:50 -- common/autotest_common.sh@887 -- # local bdev_name=b41a777d-4f3b-424f-b597-9702cd14fa94 00:17:37.437 01:38:50 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:37.437 01:38:50 -- common/autotest_common.sh@889 -- # local i 00:17:37.437 01:38:50 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:37.437 01:38:50 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:37.437 01:38:50 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:37.695 01:38:50 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b b41a777d-4f3b-424f-b597-9702cd14fa94 -t 2000 00:17:37.953 [ 00:17:37.953 { 00:17:37.953 "name": "b41a777d-4f3b-424f-b597-9702cd14fa94", 00:17:37.953 "aliases": [ 00:17:37.953 "lvs/lvol" 00:17:37.953 ], 00:17:37.953 "product_name": "Logical Volume", 00:17:37.953 "block_size": 4096, 00:17:37.953 "num_blocks": 38912, 00:17:37.953 "uuid": "b41a777d-4f3b-424f-b597-9702cd14fa94", 00:17:37.953 "assigned_rate_limits": { 00:17:37.953 "rw_ios_per_sec": 0, 00:17:37.953 "rw_mbytes_per_sec": 0, 00:17:37.953 "r_mbytes_per_sec": 0, 00:17:37.953 "w_mbytes_per_sec": 0 00:17:37.953 }, 00:17:37.953 "claimed": false, 00:17:37.953 "zoned": false, 00:17:37.953 "supported_io_types": { 00:17:37.953 "read": true, 00:17:37.953 "write": true, 00:17:37.953 "unmap": true, 00:17:37.953 "write_zeroes": true, 00:17:37.953 "flush": false, 00:17:37.953 "reset": true, 00:17:37.953 "compare": false, 00:17:37.953 "compare_and_write": false, 00:17:37.953 "abort": false, 00:17:37.953 "nvme_admin": false, 00:17:37.953 
"nvme_io": false 00:17:37.953 }, 00:17:37.953 "driver_specific": { 00:17:37.953 "lvol": { 00:17:37.953 "lvol_store_uuid": "33bbf6bb-a998-4543-981e-6219dee3a995", 00:17:37.953 "base_bdev": "aio_bdev", 00:17:37.953 "thin_provision": false, 00:17:37.953 "snapshot": false, 00:17:37.953 "clone": false, 00:17:37.953 "esnap_clone": false 00:17:37.953 } 00:17:37.953 } 00:17:37.953 } 00:17:37.953 ] 00:17:37.953 01:38:50 -- common/autotest_common.sh@895 -- # return 0 00:17:37.953 01:38:50 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 33bbf6bb-a998-4543-981e-6219dee3a995 00:17:37.953 01:38:50 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:17:38.213 01:38:51 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:17:38.213 01:38:51 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 33bbf6bb-a998-4543-981e-6219dee3a995 00:17:38.213 01:38:51 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:17:38.473 01:38:51 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:17:38.473 01:38:51 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b41a777d-4f3b-424f-b597-9702cd14fa94 00:17:38.733 01:38:51 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 33bbf6bb-a998-4543-981e-6219dee3a995 00:17:38.993 01:38:51 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:39.252 01:38:52 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:39.252 00:17:39.252 real 0m17.632s 00:17:39.252 user 0m17.058s 00:17:39.252 sys 0m1.950s 00:17:39.252 01:38:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:17:39.252 01:38:52 -- common/autotest_common.sh@10 -- # set +x 00:17:39.252 ************************************ 00:17:39.252 END TEST lvs_grow_clean 00:17:39.252 ************************************ 00:17:39.252 01:38:52 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:17:39.252 01:38:52 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:39.252 01:38:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:39.252 01:38:52 -- common/autotest_common.sh@10 -- # set +x 00:17:39.252 ************************************ 00:17:39.252 START TEST lvs_grow_dirty 00:17:39.252 ************************************ 00:17:39.252 01:38:52 -- common/autotest_common.sh@1104 -- # lvs_grow dirty 00:17:39.252 01:38:52 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:39.252 01:38:52 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:17:39.252 01:38:52 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:17:39.252 01:38:52 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:17:39.252 01:38:52 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:39.252 01:38:52 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:17:39.252 01:38:52 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:39.252 01:38:52 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:39.252 01:38:52 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:39.510 01:38:52 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:17:39.510 01:38:52 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 
--cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:17:39.768 01:38:52 -- target/nvmf_lvs_grow.sh@28 -- # lvs=298f9331-b5c0-41da-a3eb-5949c4a30510 00:17:39.768 01:38:52 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 298f9331-b5c0-41da-a3eb-5949c4a30510 00:17:39.768 01:38:52 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:40.027 01:38:52 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:40.027 01:38:52 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:40.027 01:38:52 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 298f9331-b5c0-41da-a3eb-5949c4a30510 lvol 150 00:17:40.287 01:38:53 -- target/nvmf_lvs_grow.sh@33 -- # lvol=4c1290fa-848d-470b-89c6-38dbfae3037f 00:17:40.287 01:38:53 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:40.287 01:38:53 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:40.547 [2024-07-23 01:38:53.407861] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:40.547 [2024-07-23 01:38:53.407956] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:40.547 true 00:17:40.547 01:38:53 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 298f9331-b5c0-41da-a3eb-5949c4a30510 00:17:40.547 01:38:53 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:17:40.805 01:38:53 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:40.805 01:38:53 -- target/nvmf_lvs_grow.sh@41 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:41.064 01:38:53 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 4c1290fa-848d-470b-89c6-38dbfae3037f 00:17:41.064 01:38:54 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:41.323 01:38:54 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:41.581 01:38:54 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3772322 00:17:41.581 01:38:54 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:41.581 01:38:54 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:41.581 01:38:54 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3772322 /var/tmp/bdevperf.sock 00:17:41.581 01:38:54 -- common/autotest_common.sh@819 -- # '[' -z 3772322 ']' 00:17:41.581 01:38:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:41.581 01:38:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:41.581 01:38:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:41.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:17:41.581 01:38:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:41.581 01:38:54 -- common/autotest_common.sh@10 -- # set +x 00:17:41.581 [2024-07-23 01:38:54.661192] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:17:41.581 [2024-07-23 01:38:54.661262] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3772322 ] 00:17:41.840 EAL: No free 2048 kB hugepages reported on node 1 00:17:41.840 [2024-07-23 01:38:54.722664] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:41.840 [2024-07-23 01:38:54.812338] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:42.774 01:38:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:42.774 01:38:55 -- common/autotest_common.sh@852 -- # return 0 00:17:42.774 01:38:55 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:17:43.033 Nvme0n1 00:17:43.033 01:38:55 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:17:43.291 [ 00:17:43.291 { 00:17:43.291 "name": "Nvme0n1", 00:17:43.291 "aliases": [ 00:17:43.291 "4c1290fa-848d-470b-89c6-38dbfae3037f" 00:17:43.291 ], 00:17:43.291 "product_name": "NVMe disk", 00:17:43.291 "block_size": 4096, 00:17:43.291 "num_blocks": 38912, 00:17:43.291 "uuid": "4c1290fa-848d-470b-89c6-38dbfae3037f", 00:17:43.291 "assigned_rate_limits": { 00:17:43.291 "rw_ios_per_sec": 0, 00:17:43.291 "rw_mbytes_per_sec": 0, 00:17:43.291 "r_mbytes_per_sec": 0, 00:17:43.291 "w_mbytes_per_sec": 0 00:17:43.291 }, 00:17:43.291 "claimed": false, 00:17:43.291 "zoned": false, 
00:17:43.291 "supported_io_types": { 00:17:43.291 "read": true, 00:17:43.291 "write": true, 00:17:43.291 "unmap": true, 00:17:43.291 "write_zeroes": true, 00:17:43.291 "flush": true, 00:17:43.291 "reset": true, 00:17:43.291 "compare": true, 00:17:43.291 "compare_and_write": true, 00:17:43.291 "abort": true, 00:17:43.291 "nvme_admin": true, 00:17:43.291 "nvme_io": true 00:17:43.291 }, 00:17:43.291 "driver_specific": { 00:17:43.291 "nvme": [ 00:17:43.291 { 00:17:43.291 "trid": { 00:17:43.291 "trtype": "TCP", 00:17:43.291 "adrfam": "IPv4", 00:17:43.291 "traddr": "10.0.0.2", 00:17:43.291 "trsvcid": "4420", 00:17:43.291 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:43.291 }, 00:17:43.291 "ctrlr_data": { 00:17:43.291 "cntlid": 1, 00:17:43.291 "vendor_id": "0x8086", 00:17:43.291 "model_number": "SPDK bdev Controller", 00:17:43.291 "serial_number": "SPDK0", 00:17:43.291 "firmware_revision": "24.01.1", 00:17:43.291 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:43.291 "oacs": { 00:17:43.291 "security": 0, 00:17:43.291 "format": 0, 00:17:43.291 "firmware": 0, 00:17:43.291 "ns_manage": 0 00:17:43.291 }, 00:17:43.291 "multi_ctrlr": true, 00:17:43.291 "ana_reporting": false 00:17:43.291 }, 00:17:43.291 "vs": { 00:17:43.291 "nvme_version": "1.3" 00:17:43.291 }, 00:17:43.291 "ns_data": { 00:17:43.291 "id": 1, 00:17:43.291 "can_share": true 00:17:43.291 } 00:17:43.291 } 00:17:43.291 ], 00:17:43.292 "mp_policy": "active_passive" 00:17:43.292 } 00:17:43.292 } 00:17:43.292 ] 00:17:43.292 01:38:56 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3772542 00:17:43.292 01:38:56 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:17:43.292 01:38:56 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:43.292 Running I/O for 10 seconds... 
00:17:44.227 Latency(us) 00:17:44.227 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:44.227 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:44.227 Nvme0n1 : 1.00 14280.00 55.78 0.00 0.00 0.00 0.00 0.00 00:17:44.227 =================================================================================================================== 00:17:44.227 Total : 14280.00 55.78 0.00 0.00 0.00 0.00 0.00 00:17:44.227 00:17:45.221 01:38:58 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 298f9331-b5c0-41da-a3eb-5949c4a30510 00:17:45.221 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:45.221 Nvme0n1 : 2.00 14465.00 56.50 0.00 0.00 0.00 0.00 0.00 00:17:45.221 =================================================================================================================== 00:17:45.221 Total : 14465.00 56.50 0.00 0.00 0.00 0.00 0.00 00:17:45.221 00:17:45.478 true 00:17:45.478 01:38:58 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 298f9331-b5c0-41da-a3eb-5949c4a30510 00:17:45.478 01:38:58 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:17:45.737 01:38:58 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:17:45.737 01:38:58 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:17:45.737 01:38:58 -- target/nvmf_lvs_grow.sh@65 -- # wait 3772542 00:17:46.306 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:46.306 Nvme0n1 : 3.00 14678.00 57.34 0.00 0.00 0.00 0.00 0.00 00:17:46.306 =================================================================================================================== 00:17:46.306 Total : 14678.00 57.34 0.00 0.00 0.00 0.00 0.00 00:17:46.306 00:17:47.241 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:47.241 
Nvme0n1 : 4.00 14720.50 57.50 0.00 0.00 0.00 0.00 0.00 00:17:47.241 =================================================================================================================== 00:17:47.241 Total : 14720.50 57.50 0.00 0.00 0.00 0.00 0.00 00:17:47.241 00:17:48.616 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:48.616 Nvme0n1 : 5.00 14759.80 57.66 0.00 0.00 0.00 0.00 0.00 00:17:48.616 =================================================================================================================== 00:17:48.616 Total : 14759.80 57.66 0.00 0.00 0.00 0.00 0.00 00:17:48.616 00:17:49.557 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:49.557 Nvme0n1 : 6.00 14860.00 58.05 0.00 0.00 0.00 0.00 0.00 00:17:49.557 =================================================================================================================== 00:17:49.557 Total : 14860.00 58.05 0.00 0.00 0.00 0.00 0.00 00:17:49.557 00:17:50.494 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:50.494 Nvme0n1 : 7.00 14894.86 58.18 0.00 0.00 0.00 0.00 0.00 00:17:50.494 =================================================================================================================== 00:17:50.494 Total : 14894.86 58.18 0.00 0.00 0.00 0.00 0.00 00:17:50.494 00:17:51.430 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:51.430 Nvme0n1 : 8.00 14921.00 58.29 0.00 0.00 0.00 0.00 0.00 00:17:51.430 =================================================================================================================== 00:17:51.430 Total : 14921.00 58.29 0.00 0.00 0.00 0.00 0.00 00:17:51.430 00:17:52.369 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:52.369 Nvme0n1 : 9.00 14941.33 58.36 0.00 0.00 0.00 0.00 0.00 00:17:52.369 =================================================================================================================== 
00:17:52.369 Total : 14941.33 58.36 0.00 0.00 0.00 0.00 0.00 00:17:52.369 00:17:53.307 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:53.307 Nvme0n1 : 10.00 14963.90 58.45 0.00 0.00 0.00 0.00 0.00 00:17:53.307 =================================================================================================================== 00:17:53.307 Total : 14963.90 58.45 0.00 0.00 0.00 0.00 0.00 00:17:53.307 00:17:53.307 00:17:53.307 Latency(us) 00:17:53.307 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:53.307 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:53.307 Nvme0n1 : 10.01 14965.66 58.46 0.00 0.00 8547.25 4757.43 18544.26 00:17:53.307 =================================================================================================================== 00:17:53.307 Total : 14965.66 58.46 0.00 0.00 8547.25 4757.43 18544.26 00:17:53.307 0 00:17:53.307 01:39:06 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3772322 00:17:53.307 01:39:06 -- common/autotest_common.sh@926 -- # '[' -z 3772322 ']' 00:17:53.307 01:39:06 -- common/autotest_common.sh@930 -- # kill -0 3772322 00:17:53.307 01:39:06 -- common/autotest_common.sh@931 -- # uname 00:17:53.307 01:39:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:53.307 01:39:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3772322 00:17:53.307 01:39:06 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:53.307 01:39:06 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:53.307 01:39:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3772322' 00:17:53.307 killing process with pid 3772322 00:17:53.307 01:39:06 -- common/autotest_common.sh@945 -- # kill 3772322 00:17:53.307 Received shutdown signal, test time was about 10.000000 seconds 00:17:53.307 00:17:53.307 Latency(us) 00:17:53.307 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min 
max 00:17:53.307 =================================================================================================================== 00:17:53.307 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:53.307 01:39:06 -- common/autotest_common.sh@950 -- # wait 3772322 00:17:53.565 01:39:06 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:53.823 01:39:06 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 298f9331-b5c0-41da-a3eb-5949c4a30510 00:17:53.823 01:39:06 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:17:54.082 01:39:07 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:17:54.082 01:39:07 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:17:54.082 01:39:07 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 3769635 00:17:54.082 01:39:07 -- target/nvmf_lvs_grow.sh@74 -- # wait 3769635 00:17:54.082 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 3769635 Killed "${NVMF_APP[@]}" "$@" 00:17:54.082 01:39:07 -- target/nvmf_lvs_grow.sh@74 -- # true 00:17:54.082 01:39:07 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:17:54.082 01:39:07 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:54.082 01:39:07 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:54.082 01:39:07 -- common/autotest_common.sh@10 -- # set +x 00:17:54.082 01:39:07 -- nvmf/common.sh@469 -- # nvmfpid=3773951 00:17:54.082 01:39:07 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:54.082 01:39:07 -- nvmf/common.sh@470 -- # waitforlisten 3773951 00:17:54.082 01:39:07 -- common/autotest_common.sh@819 -- # '[' -z 3773951 ']' 00:17:54.082 01:39:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:54.082 
01:39:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:54.082 01:39:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:54.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:54.082 01:39:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:54.082 01:39:07 -- common/autotest_common.sh@10 -- # set +x 00:17:54.340 [2024-07-23 01:39:07.210347] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:17:54.340 [2024-07-23 01:39:07.210435] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:54.340 EAL: No free 2048 kB hugepages reported on node 1 00:17:54.340 [2024-07-23 01:39:07.285339] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:54.340 [2024-07-23 01:39:07.369199] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:54.341 [2024-07-23 01:39:07.369344] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:54.341 [2024-07-23 01:39:07.369360] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:54.341 [2024-07-23 01:39:07.369373] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:54.341 [2024-07-23 01:39:07.369400] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:55.276 01:39:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:55.276 01:39:08 -- common/autotest_common.sh@852 -- # return 0 00:17:55.276 01:39:08 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:55.276 01:39:08 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:55.276 01:39:08 -- common/autotest_common.sh@10 -- # set +x 00:17:55.276 01:39:08 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:55.276 01:39:08 -- target/nvmf_lvs_grow.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:55.534 [2024-07-23 01:39:08.407280] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:17:55.534 [2024-07-23 01:39:08.407438] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:17:55.534 [2024-07-23 01:39:08.407495] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:17:55.534 01:39:08 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:17:55.534 01:39:08 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev 4c1290fa-848d-470b-89c6-38dbfae3037f 00:17:55.534 01:39:08 -- common/autotest_common.sh@887 -- # local bdev_name=4c1290fa-848d-470b-89c6-38dbfae3037f 00:17:55.534 01:39:08 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:55.534 01:39:08 -- common/autotest_common.sh@889 -- # local i 00:17:55.534 01:39:08 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:55.534 01:39:08 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:55.534 01:39:08 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:55.793 01:39:08 -- common/autotest_common.sh@894 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 4c1290fa-848d-470b-89c6-38dbfae3037f -t 2000 00:17:56.051 [ 00:17:56.051 { 00:17:56.051 "name": "4c1290fa-848d-470b-89c6-38dbfae3037f", 00:17:56.051 "aliases": [ 00:17:56.051 "lvs/lvol" 00:17:56.051 ], 00:17:56.051 "product_name": "Logical Volume", 00:17:56.051 "block_size": 4096, 00:17:56.051 "num_blocks": 38912, 00:17:56.051 "uuid": "4c1290fa-848d-470b-89c6-38dbfae3037f", 00:17:56.051 "assigned_rate_limits": { 00:17:56.051 "rw_ios_per_sec": 0, 00:17:56.051 "rw_mbytes_per_sec": 0, 00:17:56.051 "r_mbytes_per_sec": 0, 00:17:56.051 "w_mbytes_per_sec": 0 00:17:56.051 }, 00:17:56.051 "claimed": false, 00:17:56.051 "zoned": false, 00:17:56.051 "supported_io_types": { 00:17:56.051 "read": true, 00:17:56.051 "write": true, 00:17:56.051 "unmap": true, 00:17:56.051 "write_zeroes": true, 00:17:56.051 "flush": false, 00:17:56.051 "reset": true, 00:17:56.051 "compare": false, 00:17:56.051 "compare_and_write": false, 00:17:56.051 "abort": false, 00:17:56.051 "nvme_admin": false, 00:17:56.051 "nvme_io": false 00:17:56.051 }, 00:17:56.051 "driver_specific": { 00:17:56.051 "lvol": { 00:17:56.051 "lvol_store_uuid": "298f9331-b5c0-41da-a3eb-5949c4a30510", 00:17:56.051 "base_bdev": "aio_bdev", 00:17:56.051 "thin_provision": false, 00:17:56.051 "snapshot": false, 00:17:56.051 "clone": false, 00:17:56.051 "esnap_clone": false 00:17:56.051 } 00:17:56.051 } 00:17:56.051 } 00:17:56.051 ] 00:17:56.051 01:39:08 -- common/autotest_common.sh@895 -- # return 0 00:17:56.051 01:39:08 -- target/nvmf_lvs_grow.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 298f9331-b5c0-41da-a3eb-5949c4a30510 00:17:56.051 01:39:08 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:17:56.310 01:39:09 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:17:56.310 01:39:09 -- target/nvmf_lvs_grow.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 298f9331-b5c0-41da-a3eb-5949c4a30510 00:17:56.310 01:39:09 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:17:56.310 01:39:09 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:17:56.310 01:39:09 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:56.571 [2024-07-23 01:39:09.611982] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:56.571 01:39:09 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 298f9331-b5c0-41da-a3eb-5949c4a30510 00:17:56.571 01:39:09 -- common/autotest_common.sh@640 -- # local es=0 00:17:56.571 01:39:09 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 298f9331-b5c0-41da-a3eb-5949c4a30510 00:17:56.571 01:39:09 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:56.571 01:39:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:56.571 01:39:09 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:56.571 01:39:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:56.571 01:39:09 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:56.571 01:39:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:56.571 01:39:09 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:56.571 01:39:09 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:56.571 
01:39:09 -- common/autotest_common.sh@643 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 298f9331-b5c0-41da-a3eb-5949c4a30510 00:17:56.831 request: 00:17:56.831 { 00:17:56.831 "uuid": "298f9331-b5c0-41da-a3eb-5949c4a30510", 00:17:56.831 "method": "bdev_lvol_get_lvstores", 00:17:56.831 "req_id": 1 00:17:56.831 } 00:17:56.831 Got JSON-RPC error response 00:17:56.831 response: 00:17:56.831 { 00:17:56.831 "code": -19, 00:17:56.831 "message": "No such device" 00:17:56.831 } 00:17:56.831 01:39:09 -- common/autotest_common.sh@643 -- # es=1 00:17:56.831 01:39:09 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:56.831 01:39:09 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:56.831 01:39:09 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:56.831 01:39:09 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:57.092 aio_bdev 00:17:57.092 01:39:10 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 4c1290fa-848d-470b-89c6-38dbfae3037f 00:17:57.092 01:39:10 -- common/autotest_common.sh@887 -- # local bdev_name=4c1290fa-848d-470b-89c6-38dbfae3037f 00:17:57.092 01:39:10 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:57.092 01:39:10 -- common/autotest_common.sh@889 -- # local i 00:17:57.092 01:39:10 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:57.092 01:39:10 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:57.092 01:39:10 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:57.351 01:39:10 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 4c1290fa-848d-470b-89c6-38dbfae3037f -t 2000 00:17:57.610 [ 00:17:57.610 { 00:17:57.610 "name": 
"4c1290fa-848d-470b-89c6-38dbfae3037f", 00:17:57.610 "aliases": [ 00:17:57.610 "lvs/lvol" 00:17:57.610 ], 00:17:57.610 "product_name": "Logical Volume", 00:17:57.610 "block_size": 4096, 00:17:57.610 "num_blocks": 38912, 00:17:57.610 "uuid": "4c1290fa-848d-470b-89c6-38dbfae3037f", 00:17:57.610 "assigned_rate_limits": { 00:17:57.610 "rw_ios_per_sec": 0, 00:17:57.610 "rw_mbytes_per_sec": 0, 00:17:57.610 "r_mbytes_per_sec": 0, 00:17:57.610 "w_mbytes_per_sec": 0 00:17:57.610 }, 00:17:57.610 "claimed": false, 00:17:57.610 "zoned": false, 00:17:57.610 "supported_io_types": { 00:17:57.610 "read": true, 00:17:57.610 "write": true, 00:17:57.610 "unmap": true, 00:17:57.610 "write_zeroes": true, 00:17:57.610 "flush": false, 00:17:57.610 "reset": true, 00:17:57.610 "compare": false, 00:17:57.610 "compare_and_write": false, 00:17:57.610 "abort": false, 00:17:57.610 "nvme_admin": false, 00:17:57.610 "nvme_io": false 00:17:57.610 }, 00:17:57.610 "driver_specific": { 00:17:57.610 "lvol": { 00:17:57.610 "lvol_store_uuid": "298f9331-b5c0-41da-a3eb-5949c4a30510", 00:17:57.610 "base_bdev": "aio_bdev", 00:17:57.610 "thin_provision": false, 00:17:57.610 "snapshot": false, 00:17:57.610 "clone": false, 00:17:57.610 "esnap_clone": false 00:17:57.610 } 00:17:57.610 } 00:17:57.610 } 00:17:57.610 ] 00:17:57.610 01:39:10 -- common/autotest_common.sh@895 -- # return 0 00:17:57.610 01:39:10 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 298f9331-b5c0-41da-a3eb-5949c4a30510 00:17:57.610 01:39:10 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:17:57.868 01:39:10 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:17:57.868 01:39:10 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 298f9331-b5c0-41da-a3eb-5949c4a30510 00:17:57.868 01:39:10 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:17:58.128 
01:39:11 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:17:58.128 01:39:11 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 4c1290fa-848d-470b-89c6-38dbfae3037f 00:17:58.388 01:39:11 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 298f9331-b5c0-41da-a3eb-5949c4a30510 00:17:58.646 01:39:11 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:58.905 01:39:11 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:58.905 00:17:58.905 real 0m19.688s 00:17:58.905 user 0m49.212s 00:17:58.905 sys 0m4.940s 00:17:58.905 01:39:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:58.905 01:39:11 -- common/autotest_common.sh@10 -- # set +x 00:17:58.905 ************************************ 00:17:58.905 END TEST lvs_grow_dirty 00:17:58.905 ************************************ 00:17:58.905 01:39:11 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:17:58.905 01:39:11 -- common/autotest_common.sh@796 -- # type=--id 00:17:58.905 01:39:11 -- common/autotest_common.sh@797 -- # id=0 00:17:58.905 01:39:11 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:17:58.905 01:39:11 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:58.905 01:39:11 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:17:58.905 01:39:11 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:17:58.905 01:39:11 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:17:58.905 01:39:11 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:58.905 nvmf_trace.0 00:17:58.905 01:39:11 -- common/autotest_common.sh@811 -- # 
return 0 00:17:58.905 01:39:11 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:17:58.905 01:39:11 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:58.905 01:39:11 -- nvmf/common.sh@116 -- # sync 00:17:58.905 01:39:11 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:58.905 01:39:11 -- nvmf/common.sh@119 -- # set +e 00:17:58.905 01:39:11 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:58.905 01:39:11 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:58.905 rmmod nvme_tcp 00:17:58.905 rmmod nvme_fabrics 00:17:58.905 rmmod nvme_keyring 00:17:58.905 01:39:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:58.905 01:39:11 -- nvmf/common.sh@123 -- # set -e 00:17:58.905 01:39:11 -- nvmf/common.sh@124 -- # return 0 00:17:58.905 01:39:11 -- nvmf/common.sh@477 -- # '[' -n 3773951 ']' 00:17:58.905 01:39:11 -- nvmf/common.sh@478 -- # killprocess 3773951 00:17:58.905 01:39:11 -- common/autotest_common.sh@926 -- # '[' -z 3773951 ']' 00:17:58.905 01:39:11 -- common/autotest_common.sh@930 -- # kill -0 3773951 00:17:58.905 01:39:11 -- common/autotest_common.sh@931 -- # uname 00:17:58.905 01:39:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:58.905 01:39:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3773951 00:17:58.905 01:39:11 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:58.905 01:39:11 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:58.905 01:39:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3773951' 00:17:58.905 killing process with pid 3773951 00:17:58.905 01:39:11 -- common/autotest_common.sh@945 -- # kill 3773951 00:17:58.905 01:39:11 -- common/autotest_common.sh@950 -- # wait 3773951 00:17:59.163 01:39:12 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:59.163 01:39:12 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:59.163 01:39:12 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:59.163 01:39:12 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk 
== \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:59.163 01:39:12 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:59.163 01:39:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:59.163 01:39:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:59.163 01:39:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:01.738 01:39:14 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:01.738 00:18:01.738 real 0m43.161s 00:18:01.738 user 1m12.548s 00:18:01.738 sys 0m8.679s 00:18:01.738 01:39:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:01.738 01:39:14 -- common/autotest_common.sh@10 -- # set +x 00:18:01.738 ************************************ 00:18:01.738 END TEST nvmf_lvs_grow 00:18:01.738 ************************************ 00:18:01.738 01:39:14 -- nvmf/nvmf.sh@49 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:18:01.738 01:39:14 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:01.738 01:39:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:01.738 01:39:14 -- common/autotest_common.sh@10 -- # set +x 00:18:01.738 ************************************ 00:18:01.738 START TEST nvmf_bdev_io_wait 00:18:01.738 ************************************ 00:18:01.738 01:39:14 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:18:01.738 * Looking for test storage... 
00:18:01.738 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:01.738 01:39:14 -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:01.738 01:39:14 -- nvmf/common.sh@7 -- # uname -s 00:18:01.738 01:39:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:01.738 01:39:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:01.738 01:39:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:01.738 01:39:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:01.738 01:39:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:01.738 01:39:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:01.738 01:39:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:01.738 01:39:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:01.738 01:39:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:01.738 01:39:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:01.738 01:39:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:01.738 01:39:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:01.738 01:39:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:01.738 01:39:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:01.738 01:39:14 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:01.738 01:39:14 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:01.738 01:39:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:01.738 01:39:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:01.738 01:39:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:01.738 01:39:14 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:01.738 01:39:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:01.739 01:39:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:01.739 01:39:14 -- paths/export.sh@5 -- # export PATH 00:18:01.739 01:39:14 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:01.739 01:39:14 -- nvmf/common.sh@46 -- # : 0 00:18:01.739 01:39:14 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:01.739 01:39:14 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:01.739 01:39:14 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:01.739 01:39:14 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:01.739 01:39:14 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:01.739 01:39:14 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:01.739 01:39:14 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:01.739 01:39:14 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:01.739 01:39:14 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:01.739 01:39:14 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:01.739 01:39:14 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:18:01.739 01:39:14 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:01.739 01:39:14 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:01.739 01:39:14 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:01.739 01:39:14 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:01.739 01:39:14 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:01.739 01:39:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:01.739 01:39:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:01.739 01:39:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:01.739 
01:39:14 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:01.739 01:39:14 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:01.739 01:39:14 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:01.739 01:39:14 -- common/autotest_common.sh@10 -- # set +x 00:18:03.117 01:39:16 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:03.117 01:39:16 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:03.117 01:39:16 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:03.117 01:39:16 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:03.117 01:39:16 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:03.117 01:39:16 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:03.117 01:39:16 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:03.117 01:39:16 -- nvmf/common.sh@294 -- # net_devs=() 00:18:03.117 01:39:16 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:03.117 01:39:16 -- nvmf/common.sh@295 -- # e810=() 00:18:03.117 01:39:16 -- nvmf/common.sh@295 -- # local -ga e810 00:18:03.117 01:39:16 -- nvmf/common.sh@296 -- # x722=() 00:18:03.117 01:39:16 -- nvmf/common.sh@296 -- # local -ga x722 00:18:03.117 01:39:16 -- nvmf/common.sh@297 -- # mlx=() 00:18:03.117 01:39:16 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:03.117 01:39:16 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:03.117 01:39:16 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:03.117 01:39:16 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:03.117 01:39:16 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:03.117 01:39:16 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:03.117 01:39:16 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:03.117 01:39:16 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:03.117 01:39:16 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:03.117 01:39:16 
-- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:03.117 01:39:16 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:03.117 01:39:16 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:03.117 01:39:16 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:03.117 01:39:16 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:03.117 01:39:16 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:03.117 01:39:16 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:03.117 01:39:16 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:03.117 01:39:16 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:03.117 01:39:16 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:03.117 01:39:16 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:03.117 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:03.117 01:39:16 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:03.117 01:39:16 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:03.117 01:39:16 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:03.117 01:39:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:03.117 01:39:16 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:03.117 01:39:16 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:03.117 01:39:16 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:03.117 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:03.117 01:39:16 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:03.117 01:39:16 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:03.117 01:39:16 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:03.117 01:39:16 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:03.117 01:39:16 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:03.117 01:39:16 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:03.117 01:39:16 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:03.117 01:39:16 -- 
nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:18:03.117 01:39:16 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:03.117 01:39:16 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:03.117 01:39:16 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:03.117 01:39:16 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:03.117 01:39:16 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:03.117 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:03.117 01:39:16 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:03.117 01:39:16 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:03.117 01:39:16 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:03.117 01:39:16 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:03.117 01:39:16 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:03.117 01:39:16 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:03.117 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:03.117 01:39:16 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:03.117 01:39:16 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:03.117 01:39:16 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:03.117 01:39:16 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:03.117 01:39:16 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:03.117 01:39:16 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:03.117 01:39:16 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:03.117 01:39:16 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:03.117 01:39:16 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:03.117 01:39:16 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:03.117 01:39:16 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:03.117 01:39:16 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:03.117 01:39:16 -- 
nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:03.117 01:39:16 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:03.117 01:39:16 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:03.117 01:39:16 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:03.117 01:39:16 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:03.117 01:39:16 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:03.117 01:39:16 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:03.117 01:39:16 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:03.117 01:39:16 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:03.376 01:39:16 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:03.376 01:39:16 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:03.376 01:39:16 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:03.376 01:39:16 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:03.376 01:39:16 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:03.376 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:03.376 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.140 ms 00:18:03.376 00:18:03.376 --- 10.0.0.2 ping statistics --- 00:18:03.376 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:03.376 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:18:03.376 01:39:16 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:03.376 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:03.376 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:18:03.376 00:18:03.376 --- 10.0.0.1 ping statistics --- 00:18:03.376 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:03.376 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:18:03.376 01:39:16 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:03.376 01:39:16 -- nvmf/common.sh@410 -- # return 0 00:18:03.376 01:39:16 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:03.376 01:39:16 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:03.376 01:39:16 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:03.376 01:39:16 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:03.376 01:39:16 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:03.376 01:39:16 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:03.376 01:39:16 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:03.376 01:39:16 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:18:03.376 01:39:16 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:03.376 01:39:16 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:03.376 01:39:16 -- common/autotest_common.sh@10 -- # set +x 00:18:03.376 01:39:16 -- nvmf/common.sh@469 -- # nvmfpid=3777011 00:18:03.376 01:39:16 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:18:03.376 01:39:16 -- nvmf/common.sh@470 -- # waitforlisten 3777011 00:18:03.376 01:39:16 -- common/autotest_common.sh@819 -- # '[' -z 3777011 ']' 00:18:03.376 01:39:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:03.376 01:39:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:03.376 01:39:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:03.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:03.376 01:39:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:03.376 01:39:16 -- common/autotest_common.sh@10 -- # set +x 00:18:03.376 [2024-07-23 01:39:16.356424] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:18:03.376 [2024-07-23 01:39:16.356511] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:03.376 EAL: No free 2048 kB hugepages reported on node 1 00:18:03.376 [2024-07-23 01:39:16.425489] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:03.634 [2024-07-23 01:39:16.518328] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:03.634 [2024-07-23 01:39:16.518481] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:03.634 [2024-07-23 01:39:16.518500] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:03.634 [2024-07-23 01:39:16.518514] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:03.634 [2024-07-23 01:39:16.518583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:03.634 [2024-07-23 01:39:16.518633] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:03.634 [2024-07-23 01:39:16.518664] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:03.634 [2024-07-23 01:39:16.518667] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:03.634 01:39:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:03.634 01:39:16 -- common/autotest_common.sh@852 -- # return 0 00:18:03.634 01:39:16 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:03.634 01:39:16 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:03.634 01:39:16 -- common/autotest_common.sh@10 -- # set +x 00:18:03.634 01:39:16 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:03.634 01:39:16 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:18:03.634 01:39:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:03.634 01:39:16 -- common/autotest_common.sh@10 -- # set +x 00:18:03.634 01:39:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:03.634 01:39:16 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:18:03.634 01:39:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:03.635 01:39:16 -- common/autotest_common.sh@10 -- # set +x 00:18:03.635 01:39:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:03.635 01:39:16 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:03.635 01:39:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:03.635 01:39:16 -- common/autotest_common.sh@10 -- # set +x 00:18:03.635 [2024-07-23 01:39:16.652359] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:03.635 01:39:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:03.635 01:39:16 -- target/bdev_io_wait.sh@22 -- # 
rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:03.635 01:39:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:03.635 01:39:16 -- common/autotest_common.sh@10 -- # set +x 00:18:03.635 Malloc0 00:18:03.635 01:39:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:03.635 01:39:16 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:03.635 01:39:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:03.635 01:39:16 -- common/autotest_common.sh@10 -- # set +x 00:18:03.635 01:39:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:03.635 01:39:16 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:03.635 01:39:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:03.635 01:39:16 -- common/autotest_common.sh@10 -- # set +x 00:18:03.635 01:39:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:03.635 01:39:16 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:03.635 01:39:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:03.635 01:39:16 -- common/autotest_common.sh@10 -- # set +x 00:18:03.635 [2024-07-23 01:39:16.714981] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:03.635 01:39:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:03.635 01:39:16 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3777159 00:18:03.635 01:39:16 -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:18:03.635 01:39:16 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:18:03.635 01:39:16 -- nvmf/common.sh@520 -- # config=() 00:18:03.635 01:39:16 -- target/bdev_io_wait.sh@30 -- # READ_PID=3777161 00:18:03.635 01:39:16 -- nvmf/common.sh@520 -- # local 
subsystem config 00:18:03.635 01:39:16 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:03.635 01:39:16 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:03.635 { 00:18:03.635 "params": { 00:18:03.635 "name": "Nvme$subsystem", 00:18:03.635 "trtype": "$TEST_TRANSPORT", 00:18:03.635 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:03.635 "adrfam": "ipv4", 00:18:03.635 "trsvcid": "$NVMF_PORT", 00:18:03.635 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:03.635 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:03.635 "hdgst": ${hdgst:-false}, 00:18:03.635 "ddgst": ${ddgst:-false} 00:18:03.635 }, 00:18:03.635 "method": "bdev_nvme_attach_controller" 00:18:03.635 } 00:18:03.635 EOF 00:18:03.635 )") 00:18:03.635 01:39:16 -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:18:03.635 01:39:16 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:18:03.635 01:39:16 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3777163 00:18:03.635 01:39:16 -- nvmf/common.sh@520 -- # config=() 00:18:03.635 01:39:16 -- nvmf/common.sh@520 -- # local subsystem config 00:18:03.635 01:39:16 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:03.635 01:39:16 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:03.635 { 00:18:03.635 "params": { 00:18:03.635 "name": "Nvme$subsystem", 00:18:03.635 "trtype": "$TEST_TRANSPORT", 00:18:03.635 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:03.635 "adrfam": "ipv4", 00:18:03.635 "trsvcid": "$NVMF_PORT", 00:18:03.635 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:03.635 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:03.635 "hdgst": ${hdgst:-false}, 00:18:03.635 "ddgst": ${ddgst:-false} 00:18:03.635 }, 00:18:03.635 "method": "bdev_nvme_attach_controller" 00:18:03.635 } 00:18:03.635 EOF 00:18:03.635 )") 00:18:03.635 01:39:16 -- nvmf/common.sh@542 -- # cat 00:18:03.635 01:39:16 -- 
target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:18:03.635 01:39:16 -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:18:03.635 01:39:16 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3777166 00:18:03.635 01:39:16 -- nvmf/common.sh@520 -- # config=() 00:18:03.635 01:39:16 -- target/bdev_io_wait.sh@35 -- # sync 00:18:03.635 01:39:16 -- nvmf/common.sh@520 -- # local subsystem config 00:18:03.635 01:39:16 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:03.635 01:39:16 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:03.635 { 00:18:03.635 "params": { 00:18:03.635 "name": "Nvme$subsystem", 00:18:03.635 "trtype": "$TEST_TRANSPORT", 00:18:03.635 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:03.635 "adrfam": "ipv4", 00:18:03.635 "trsvcid": "$NVMF_PORT", 00:18:03.635 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:03.635 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:03.635 "hdgst": ${hdgst:-false}, 00:18:03.635 "ddgst": ${ddgst:-false} 00:18:03.635 }, 00:18:03.635 "method": "bdev_nvme_attach_controller" 00:18:03.635 } 00:18:03.635 EOF 00:18:03.635 )") 00:18:03.635 01:39:16 -- nvmf/common.sh@542 -- # cat 00:18:03.635 01:39:16 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:18:03.635 01:39:16 -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:18:03.635 01:39:16 -- nvmf/common.sh@520 -- # config=() 00:18:03.635 01:39:16 -- nvmf/common.sh@520 -- # local subsystem config 00:18:03.635 01:39:16 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:03.635 01:39:16 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:03.635 { 00:18:03.635 "params": { 00:18:03.635 "name": "Nvme$subsystem", 00:18:03.635 "trtype": "$TEST_TRANSPORT", 00:18:03.635 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:18:03.635 "adrfam": "ipv4", 00:18:03.635 "trsvcid": "$NVMF_PORT", 00:18:03.635 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:03.635 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:03.635 "hdgst": ${hdgst:-false}, 00:18:03.635 "ddgst": ${ddgst:-false} 00:18:03.635 }, 00:18:03.635 "method": "bdev_nvme_attach_controller" 00:18:03.635 } 00:18:03.635 EOF 00:18:03.635 )") 00:18:03.635 01:39:16 -- nvmf/common.sh@542 -- # cat 00:18:03.635 01:39:16 -- target/bdev_io_wait.sh@37 -- # wait 3777159 00:18:03.635 01:39:16 -- nvmf/common.sh@542 -- # cat 00:18:03.635 01:39:16 -- nvmf/common.sh@544 -- # jq . 00:18:03.635 01:39:16 -- nvmf/common.sh@544 -- # jq . 00:18:03.635 01:39:16 -- nvmf/common.sh@545 -- # IFS=, 00:18:03.635 01:39:16 -- nvmf/common.sh@544 -- # jq . 00:18:03.635 01:39:16 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:03.635 "params": { 00:18:03.635 "name": "Nvme1", 00:18:03.635 "trtype": "tcp", 00:18:03.635 "traddr": "10.0.0.2", 00:18:03.635 "adrfam": "ipv4", 00:18:03.635 "trsvcid": "4420", 00:18:03.635 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:03.635 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:03.635 "hdgst": false, 00:18:03.635 "ddgst": false 00:18:03.635 }, 00:18:03.635 "method": "bdev_nvme_attach_controller" 00:18:03.635 }' 00:18:03.635 01:39:16 -- nvmf/common.sh@545 -- # IFS=, 00:18:03.635 01:39:16 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:03.635 "params": { 00:18:03.635 "name": "Nvme1", 00:18:03.635 "trtype": "tcp", 00:18:03.635 "traddr": "10.0.0.2", 00:18:03.635 "adrfam": "ipv4", 00:18:03.635 "trsvcid": "4420", 00:18:03.635 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:03.635 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:03.635 "hdgst": false, 00:18:03.635 "ddgst": false 00:18:03.635 }, 00:18:03.635 "method": "bdev_nvme_attach_controller" 00:18:03.635 }' 00:18:03.635 01:39:16 -- nvmf/common.sh@544 -- # jq . 
00:18:03.635 01:39:16 -- nvmf/common.sh@545 -- # IFS=, 00:18:03.635 01:39:16 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:03.635 "params": { 00:18:03.635 "name": "Nvme1", 00:18:03.635 "trtype": "tcp", 00:18:03.635 "traddr": "10.0.0.2", 00:18:03.635 "adrfam": "ipv4", 00:18:03.635 "trsvcid": "4420", 00:18:03.635 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:03.635 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:03.635 "hdgst": false, 00:18:03.635 "ddgst": false 00:18:03.635 }, 00:18:03.635 "method": "bdev_nvme_attach_controller" 00:18:03.635 }' 00:18:03.635 01:39:16 -- nvmf/common.sh@545 -- # IFS=, 00:18:03.635 01:39:16 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:03.635 "params": { 00:18:03.635 "name": "Nvme1", 00:18:03.635 "trtype": "tcp", 00:18:03.635 "traddr": "10.0.0.2", 00:18:03.635 "adrfam": "ipv4", 00:18:03.635 "trsvcid": "4420", 00:18:03.635 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:03.635 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:03.635 "hdgst": false, 00:18:03.635 "ddgst": false 00:18:03.635 }, 00:18:03.636 "method": "bdev_nvme_attach_controller" 00:18:03.636 }' 00:18:03.894 [2024-07-23 01:39:16.758376] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:18:03.894 [2024-07-23 01:39:16.758375] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:18:03.894 [2024-07-23 01:39:16.758375] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
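For readability, the config assembly traced above (`gen_nvmf_target_json` building the `--json /dev/fd/63` input for each bdevperf instance) can be sketched as a standalone fragment. This is a paraphrase of the heredoc visible in the trace, not the authoritative helper: the variable values (`TEST_TRANSPORT=tcp`, `NVMF_FIRST_TARGET_IP=10.0.0.2`, `NVMF_PORT=4420`) are taken from this run, and the final `jq`/`IFS=,` joining step is collapsed to a single-subsystem print.

```shell
#!/bin/bash
# Sketch (assumed, mirrors the trace): expand one bdev_nvme_attach_controller
# entry from the template heredoc used by gen_nvmf_target_json.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420
subsystem=1

# Command substitution expands the shell variables inside the heredoc,
# producing the JSON that bdevperf reads from /dev/fd/63 in the real run.
config=$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)
printf '%s\n' "$config"
```

In the trace, four such configs are generated (one per bdevperf worker) and each is fed to its process over an anonymous pipe via `--json /dev/fd/63`.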
00:18:03.894 [2024-07-23 01:39:16.758460] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:18:03.894 [2024-07-23 01:39:16.758460] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:18:03.894 [2024-07-23 01:39:16.758461] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:18:03.894 [2024-07-23 01:39:16.759477] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:18:03.894 [2024-07-23 01:39:16.759548] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:18:03.894 EAL: No free 2048 kB hugepages reported on node 1 00:18:03.894 EAL: No free 2048 kB hugepages reported on node 1 00:18:03.894 [2024-07-23 01:39:16.931068] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:04.162 EAL: No free 2048 kB hugepages reported on node 1 00:18:04.162 [2024-07-23 01:39:17.003779] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:18:04.162 [2024-07-23 01:39:17.031779] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:04.162 EAL: No free 2048 kB hugepages reported on node 1 00:18:04.162 [2024-07-23 01:39:17.105900] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:18:04.162 [2024-07-23 01:39:17.131580] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:04.162 [2024-07-23 01:39:17.207802] 
app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:04.162 [2024-07-23 01:39:17.208748] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:18:04.423 [2024-07-23 01:39:17.275268] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:18:04.423 Running I/O for 1 seconds... 00:18:04.423 Running I/O for 1 seconds... 00:18:04.423 Running I/O for 1 seconds... 00:18:04.682 Running I/O for 1 seconds... 00:18:05.618 00:18:05.618 Latency(us) 00:18:05.618 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:05.618 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:18:05.618 Nvme1n1 : 1.02 6525.26 25.49 0.00 0.00 19457.44 8398.32 27962.03 00:18:05.618 =================================================================================================================== 00:18:05.618 Total : 6525.26 25.49 0.00 0.00 19457.44 8398.32 27962.03 00:18:05.618 00:18:05.618 Latency(us) 00:18:05.618 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:05.618 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:18:05.618 Nvme1n1 : 1.01 9932.04 38.80 0.00 0.00 12833.24 7524.50 26020.22 00:18:05.618 =================================================================================================================== 00:18:05.618 Total : 9932.04 38.80 0.00 0.00 12833.24 7524.50 26020.22 00:18:05.618 00:18:05.618 Latency(us) 00:18:05.618 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:05.618 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:18:05.618 Nvme1n1 : 1.01 6371.07 24.89 0.00 0.00 20027.34 5704.06 42913.94 00:18:05.618 =================================================================================================================== 00:18:05.618 Total : 6371.07 24.89 0.00 0.00 20027.34 5704.06 42913.94 00:18:05.618 00:18:05.618 Latency(us) 00:18:05.618 Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:18:05.618 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:18:05.618 Nvme1n1 : 1.00 200955.47 784.98 0.00 0.00 634.57 271.55 764.59 00:18:05.618 =================================================================================================================== 00:18:05.618 Total : 200955.47 784.98 0.00 0.00 634.57 271.55 764.59 00:18:05.877 01:39:18 -- target/bdev_io_wait.sh@38 -- # wait 3777161 00:18:05.877 01:39:18 -- target/bdev_io_wait.sh@39 -- # wait 3777163 00:18:05.877 01:39:18 -- target/bdev_io_wait.sh@40 -- # wait 3777166 00:18:05.877 01:39:18 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:05.877 01:39:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:05.877 01:39:18 -- common/autotest_common.sh@10 -- # set +x 00:18:05.877 01:39:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:05.877 01:39:18 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:18:05.877 01:39:18 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:18:05.877 01:39:18 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:05.877 01:39:18 -- nvmf/common.sh@116 -- # sync 00:18:05.877 01:39:18 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:05.877 01:39:18 -- nvmf/common.sh@119 -- # set +e 00:18:05.877 01:39:18 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:05.877 01:39:18 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:05.877 rmmod nvme_tcp 00:18:05.877 rmmod nvme_fabrics 00:18:05.877 rmmod nvme_keyring 00:18:05.877 01:39:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:05.877 01:39:18 -- nvmf/common.sh@123 -- # set -e 00:18:05.877 01:39:18 -- nvmf/common.sh@124 -- # return 0 00:18:05.877 01:39:18 -- nvmf/common.sh@477 -- # '[' -n 3777011 ']' 00:18:05.877 01:39:18 -- nvmf/common.sh@478 -- # killprocess 3777011 00:18:05.877 01:39:18 -- common/autotest_common.sh@926 -- # '[' -z 3777011 ']' 00:18:05.877 01:39:18 -- 
common/autotest_common.sh@930 -- # kill -0 3777011 00:18:05.877 01:39:18 -- common/autotest_common.sh@931 -- # uname 00:18:05.877 01:39:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:05.877 01:39:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3777011 00:18:05.877 01:39:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:05.877 01:39:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:05.877 01:39:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3777011' 00:18:05.877 killing process with pid 3777011 00:18:05.877 01:39:18 -- common/autotest_common.sh@945 -- # kill 3777011 00:18:05.877 01:39:18 -- common/autotest_common.sh@950 -- # wait 3777011 00:18:06.136 01:39:19 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:06.136 01:39:19 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:06.136 01:39:19 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:06.136 01:39:19 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:06.136 01:39:19 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:06.136 01:39:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:06.136 01:39:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:06.136 01:39:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:08.675 01:39:21 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:08.675 00:18:08.675 real 0m6.983s 00:18:08.675 user 0m15.771s 00:18:08.675 sys 0m3.447s 00:18:08.675 01:39:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:08.675 01:39:21 -- common/autotest_common.sh@10 -- # set +x 00:18:08.675 ************************************ 00:18:08.675 END TEST nvmf_bdev_io_wait 00:18:08.675 ************************************ 00:18:08.675 01:39:21 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 
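The `killprocess` sequence traced above (kill -0 liveness check, `uname`/`ps` guard against killing a sudo wrapper, then kill and reap) can be sketched as a minimal self-contained helper. The function body is an assumed paraphrase of the trace, demonstrated here on a throwaway `sleep` child rather than the nvmf_tgt pid.

```shell
#!/bin/bash
# Sketch (assumed) of the killprocess helper seen in the trace.
killprocess() {
    local pid=$1
    # @930: the target process must still be alive
    kill -0 "$pid" || return 1
    if [ "$(uname)" = Linux ]; then
        # @932/@936: never signal the sudo wrapper itself
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        [ "$process_name" = sudo ] && return 1
    fi
    echo "killing process with pid $pid"
    # @945/@950: terminate, then reap so no zombie is left behind
    kill "$pid"
    wait "$pid" 2>/dev/null
    return 0
}

sleep 30 &
pid=$!
killprocess "$pid"
```

After the helper returns, `kill -0 "$pid"` fails, confirming the child was terminated and reaped.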
00:18:08.675 01:39:21 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:08.675 01:39:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:08.675 01:39:21 -- common/autotest_common.sh@10 -- # set +x 00:18:08.675 ************************************ 00:18:08.675 START TEST nvmf_queue_depth 00:18:08.675 ************************************ 00:18:08.675 01:39:21 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:18:08.675 * Looking for test storage... 00:18:08.675 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:08.675 01:39:21 -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:08.675 01:39:21 -- nvmf/common.sh@7 -- # uname -s 00:18:08.676 01:39:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:08.676 01:39:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:08.676 01:39:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:08.676 01:39:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:08.676 01:39:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:08.676 01:39:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:08.676 01:39:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:08.676 01:39:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:08.676 01:39:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:08.676 01:39:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:08.676 01:39:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:08.676 01:39:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:08.676 01:39:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:08.676 01:39:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:08.676 01:39:21 -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:08.676 01:39:21 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:08.676 01:39:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:08.676 01:39:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:08.676 01:39:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:08.676 01:39:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.676 01:39:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.676 01:39:21 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.676 01:39:21 -- paths/export.sh@5 -- # export PATH 00:18:08.676 01:39:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.676 01:39:21 -- nvmf/common.sh@46 -- # : 0 00:18:08.676 01:39:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:08.676 01:39:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:08.676 01:39:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:08.676 01:39:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:08.676 01:39:21 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:08.676 01:39:21 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:08.676 01:39:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:08.676 01:39:21 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:08.676 01:39:21 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:18:08.676 01:39:21 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:18:08.676 01:39:21 -- 
target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:08.676 01:39:21 -- target/queue_depth.sh@19 -- # nvmftestinit 00:18:08.676 01:39:21 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:08.676 01:39:21 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:08.676 01:39:21 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:08.676 01:39:21 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:08.676 01:39:21 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:08.676 01:39:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:08.676 01:39:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:08.676 01:39:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:08.676 01:39:21 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:08.676 01:39:21 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:08.676 01:39:21 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:08.676 01:39:21 -- common/autotest_common.sh@10 -- # set +x 00:18:10.580 01:39:23 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:10.580 01:39:23 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:10.580 01:39:23 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:10.580 01:39:23 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:10.580 01:39:23 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:10.580 01:39:23 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:10.580 01:39:23 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:10.580 01:39:23 -- nvmf/common.sh@294 -- # net_devs=() 00:18:10.580 01:39:23 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:10.580 01:39:23 -- nvmf/common.sh@295 -- # e810=() 00:18:10.580 01:39:23 -- nvmf/common.sh@295 -- # local -ga e810 00:18:10.580 01:39:23 -- nvmf/common.sh@296 -- # x722=() 00:18:10.580 01:39:23 -- nvmf/common.sh@296 -- # local -ga x722 00:18:10.580 01:39:23 -- nvmf/common.sh@297 -- # mlx=() 00:18:10.580 01:39:23 -- nvmf/common.sh@297 -- # local -ga mlx 
00:18:10.580 01:39:23 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:10.580 01:39:23 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:10.580 01:39:23 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:10.580 01:39:23 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:10.580 01:39:23 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:10.580 01:39:23 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:10.580 01:39:23 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:10.580 01:39:23 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:10.580 01:39:23 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:10.581 01:39:23 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:10.581 01:39:23 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:10.581 01:39:23 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:10.581 01:39:23 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:10.581 01:39:23 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:10.581 01:39:23 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:10.581 01:39:23 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:10.581 01:39:23 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:10.581 01:39:23 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:10.581 01:39:23 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:10.581 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:10.581 01:39:23 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:10.581 01:39:23 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:10.581 01:39:23 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:10.581 01:39:23 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:10.581 01:39:23 -- nvmf/common.sh@351 -- # [[ 
tcp == rdma ]] 00:18:10.581 01:39:23 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:10.581 01:39:23 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:10.581 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:10.581 01:39:23 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:10.581 01:39:23 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:10.581 01:39:23 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:10.581 01:39:23 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:10.581 01:39:23 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:10.581 01:39:23 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:10.581 01:39:23 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:10.581 01:39:23 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:18:10.581 01:39:23 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:10.581 01:39:23 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:10.581 01:39:23 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:10.581 01:39:23 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:10.581 01:39:23 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:10.581 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:10.581 01:39:23 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:10.581 01:39:23 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:10.581 01:39:23 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:10.581 01:39:23 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:10.581 01:39:23 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:10.581 01:39:23 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:10.581 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:10.581 01:39:23 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:10.581 01:39:23 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 
00:18:10.581 01:39:23 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:10.581 01:39:23 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:10.581 01:39:23 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:10.581 01:39:23 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:10.581 01:39:23 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:10.581 01:39:23 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:10.581 01:39:23 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:10.581 01:39:23 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:10.581 01:39:23 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:10.581 01:39:23 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:10.581 01:39:23 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:10.581 01:39:23 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:10.581 01:39:23 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:10.581 01:39:23 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:10.581 01:39:23 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:10.581 01:39:23 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:10.581 01:39:23 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:10.581 01:39:23 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:10.581 01:39:23 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:10.581 01:39:23 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:10.581 01:39:23 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:10.581 01:39:23 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:10.581 01:39:23 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:10.581 01:39:23 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:10.581 PING 10.0.0.2 (10.0.0.2) 56(84) 
bytes of data. 00:18:10.581 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.117 ms 00:18:10.581 00:18:10.581 --- 10.0.0.2 ping statistics --- 00:18:10.581 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:10.581 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:18:10.581 01:39:23 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:10.581 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:10.581 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.065 ms 00:18:10.581 00:18:10.581 --- 10.0.0.1 ping statistics --- 00:18:10.581 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:10.581 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:18:10.581 01:39:23 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:10.581 01:39:23 -- nvmf/common.sh@410 -- # return 0 00:18:10.581 01:39:23 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:10.581 01:39:23 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:10.581 01:39:23 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:10.581 01:39:23 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:10.581 01:39:23 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:10.581 01:39:23 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:10.581 01:39:23 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:10.581 01:39:23 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:18:10.581 01:39:23 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:10.581 01:39:23 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:10.581 01:39:23 -- common/autotest_common.sh@10 -- # set +x 00:18:10.581 01:39:23 -- nvmf/common.sh@469 -- # nvmfpid=3779350 00:18:10.581 01:39:23 -- nvmf/common.sh@470 -- # waitforlisten 3779350 00:18:10.581 01:39:23 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:10.581 01:39:23 -- 
common/autotest_common.sh@819 -- # '[' -z 3779350 ']' 00:18:10.581 01:39:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:10.581 01:39:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:10.581 01:39:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:10.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:10.581 01:39:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:10.581 01:39:23 -- common/autotest_common.sh@10 -- # set +x 00:18:10.581 [2024-07-23 01:39:23.388938] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:18:10.581 [2024-07-23 01:39:23.389035] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:10.581 EAL: No free 2048 kB hugepages reported on node 1 00:18:10.581 [2024-07-23 01:39:23.460397] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:10.581 [2024-07-23 01:39:23.546729] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:10.581 [2024-07-23 01:39:23.546888] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:10.581 [2024-07-23 01:39:23.546919] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:10.581 [2024-07-23 01:39:23.546931] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
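The `nvmf_tcp_init` trace above builds a TCP loopback out of the two E810 ports: one port stays in the root namespace as the initiator, its peer moves into a private network namespace as the target, and an iptables rule opens port 4420. A condensed sketch of that sequence (the `RUN` dry-run hook is an addition here so the command list can be inspected without root; the address flushes from the log are omitted):

```shell
#!/usr/bin/env bash
# Sketch of the netns-based TCP loopback that nvmf_tcp_init sets up.
# RUN is a hypothetical hook: default "eval" executes, RUN=echo dry-runs.
: "${RUN:=eval}"
setup_tcp_loopback() {
    local tgt=$1 ini=$2 ns=${3:-cvl_0_0_ns_spdk}
    $RUN "ip netns add $ns"
    $RUN "ip link set $tgt netns $ns"                  # target port -> namespace
    $RUN "ip addr add 10.0.0.1/24 dev $ini"            # initiator side
    $RUN "ip netns exec $ns ip addr add 10.0.0.2/24 dev $tgt"
    $RUN "ip link set $ini up"
    $RUN "ip netns exec $ns ip link set $tgt up"
    $RUN "ip netns exec $ns ip link set lo up"
    $RUN "iptables -I INPUT 1 -i $ini -p tcp --dport 4420 -j ACCEPT"
}
```

This is why the log then pings 10.0.0.2 from the root namespace and 10.0.0.1 from inside `cvl_0_0_ns_spdk`: success in both directions confirms the loopback before `nvmf_tgt` is launched with `ip netns exec`.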
00:18:10.581 [2024-07-23 01:39:23.546982] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:11.518 01:39:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:11.518 01:39:24 -- common/autotest_common.sh@852 -- # return 0 00:18:11.518 01:39:24 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:11.518 01:39:24 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:11.518 01:39:24 -- common/autotest_common.sh@10 -- # set +x 00:18:11.518 01:39:24 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:11.518 01:39:24 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:11.518 01:39:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:11.518 01:39:24 -- common/autotest_common.sh@10 -- # set +x 00:18:11.518 [2024-07-23 01:39:24.378825] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:11.518 01:39:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:11.518 01:39:24 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:11.518 01:39:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:11.518 01:39:24 -- common/autotest_common.sh@10 -- # set +x 00:18:11.518 Malloc0 00:18:11.518 01:39:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:11.518 01:39:24 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:11.518 01:39:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:11.518 01:39:24 -- common/autotest_common.sh@10 -- # set +x 00:18:11.518 01:39:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:11.518 01:39:24 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:11.518 01:39:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:11.518 01:39:24 -- common/autotest_common.sh@10 -- # set +x 00:18:11.518 01:39:24 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:11.518 01:39:24 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:11.518 01:39:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:11.518 01:39:24 -- common/autotest_common.sh@10 -- # set +x 00:18:11.518 [2024-07-23 01:39:24.439582] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:11.518 01:39:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:11.518 01:39:24 -- target/queue_depth.sh@30 -- # bdevperf_pid=3779483 00:18:11.518 01:39:24 -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:18:11.518 01:39:24 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:11.518 01:39:24 -- target/queue_depth.sh@33 -- # waitforlisten 3779483 /var/tmp/bdevperf.sock 00:18:11.518 01:39:24 -- common/autotest_common.sh@819 -- # '[' -z 3779483 ']' 00:18:11.518 01:39:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:11.518 01:39:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:11.518 01:39:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:11.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:11.518 01:39:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:11.518 01:39:24 -- common/autotest_common.sh@10 -- # set +x 00:18:11.518 [2024-07-23 01:39:24.485247] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:18:11.519 [2024-07-23 01:39:24.485322] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3779483 ] 00:18:11.519 EAL: No free 2048 kB hugepages reported on node 1 00:18:11.519 [2024-07-23 01:39:24.550807] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:11.778 [2024-07-23 01:39:24.639534] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:12.345 01:39:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:12.604 01:39:25 -- common/autotest_common.sh@852 -- # return 0 00:18:12.604 01:39:25 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:12.604 01:39:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:12.604 01:39:25 -- common/autotest_common.sh@10 -- # set +x 00:18:12.604 NVMe0n1 00:18:12.604 01:39:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:12.604 01:39:25 -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:12.604 Running I/O for 10 seconds... 
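The bdevperf run above uses `-q 1024 -o 4096`, so the results table that follows reports both IOPS and MiB/s for 4 KiB I/O. The two columns are related by MiB/s = IOPS × io_size / 2^20; a small awk helper (an illustration added here, not part of the test suite) can cross-check the reported numbers:

```shell
#!/usr/bin/env bash
# Convert an IOPS figure at a given I/O size (bytes) to MiB/s,
# matching the two-decimal formatting of the bdevperf summary table.
iops_to_mibs() {
    awk -v iops="$1" -v size="$2" \
        'BEGIN { printf "%.2f\n", iops * size / 1048576 }'
}
```

For this run, 12381.34 IOPS at 4096-byte I/O works out to 48.36 MiB/s, agreeing with the NVMe0n1 row in the table.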
00:18:24.827 00:18:24.827 Latency(us) 00:18:24.827 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:24.827 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:18:24.827 Verification LBA range: start 0x0 length 0x4000 00:18:24.827 NVMe0n1 : 10.07 12381.34 48.36 0.00 0.00 82374.97 15437.37 62526.20 00:18:24.827 =================================================================================================================== 00:18:24.827 Total : 12381.34 48.36 0.00 0.00 82374.97 15437.37 62526.20 00:18:24.827 0 00:18:24.827 01:39:35 -- target/queue_depth.sh@39 -- # killprocess 3779483 00:18:24.827 01:39:35 -- common/autotest_common.sh@926 -- # '[' -z 3779483 ']' 00:18:24.827 01:39:35 -- common/autotest_common.sh@930 -- # kill -0 3779483 00:18:24.827 01:39:35 -- common/autotest_common.sh@931 -- # uname 00:18:24.827 01:39:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:24.827 01:39:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3779483 00:18:24.827 01:39:35 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:24.827 01:39:35 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:24.827 01:39:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3779483' 00:18:24.827 killing process with pid 3779483 00:18:24.827 01:39:35 -- common/autotest_common.sh@945 -- # kill 3779483 00:18:24.827 Received shutdown signal, test time was about 10.000000 seconds 00:18:24.827 00:18:24.827 Latency(us) 00:18:24.827 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:24.827 =================================================================================================================== 00:18:24.827 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:24.827 01:39:35 -- common/autotest_common.sh@950 -- # wait 3779483 00:18:24.827 01:39:35 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:18:24.827 01:39:35 -- 
target/queue_depth.sh@43 -- # nvmftestfini 00:18:24.827 01:39:35 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:24.827 01:39:35 -- nvmf/common.sh@116 -- # sync 00:18:24.827 01:39:35 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:24.827 01:39:35 -- nvmf/common.sh@119 -- # set +e 00:18:24.827 01:39:35 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:24.827 01:39:35 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:24.827 rmmod nvme_tcp 00:18:24.827 rmmod nvme_fabrics 00:18:24.827 rmmod nvme_keyring 00:18:24.827 01:39:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:24.827 01:39:36 -- nvmf/common.sh@123 -- # set -e 00:18:24.827 01:39:36 -- nvmf/common.sh@124 -- # return 0 00:18:24.827 01:39:36 -- nvmf/common.sh@477 -- # '[' -n 3779350 ']' 00:18:24.827 01:39:36 -- nvmf/common.sh@478 -- # killprocess 3779350 00:18:24.827 01:39:36 -- common/autotest_common.sh@926 -- # '[' -z 3779350 ']' 00:18:24.827 01:39:36 -- common/autotest_common.sh@930 -- # kill -0 3779350 00:18:24.827 01:39:36 -- common/autotest_common.sh@931 -- # uname 00:18:24.827 01:39:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:24.827 01:39:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3779350 00:18:24.827 01:39:36 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:18:24.827 01:39:36 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:18:24.827 01:39:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3779350' 00:18:24.827 killing process with pid 3779350 00:18:24.827 01:39:36 -- common/autotest_common.sh@945 -- # kill 3779350 00:18:24.827 01:39:36 -- common/autotest_common.sh@950 -- # wait 3779350 00:18:24.827 01:39:36 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:24.827 01:39:36 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:24.827 01:39:36 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:24.827 01:39:36 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
00:18:24.827 01:39:36 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:24.827 01:39:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:24.827 01:39:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:24.828 01:39:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:25.395 01:39:38 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:25.395 00:18:25.395 real 0m17.109s 00:18:25.395 user 0m24.650s 00:18:25.395 sys 0m3.055s 00:18:25.395 01:39:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:25.395 01:39:38 -- common/autotest_common.sh@10 -- # set +x 00:18:25.395 ************************************ 00:18:25.395 END TEST nvmf_queue_depth 00:18:25.395 ************************************ 00:18:25.395 01:39:38 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:18:25.395 01:39:38 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:25.395 01:39:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:25.395 01:39:38 -- common/autotest_common.sh@10 -- # set +x 00:18:25.395 ************************************ 00:18:25.395 START TEST nvmf_multipath 00:18:25.395 ************************************ 00:18:25.395 01:39:38 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:18:25.395 * Looking for test storage... 
00:18:25.395 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:25.395 01:39:38 -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:25.395 01:39:38 -- nvmf/common.sh@7 -- # uname -s 00:18:25.395 01:39:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:25.395 01:39:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:25.395 01:39:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:25.395 01:39:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:25.395 01:39:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:25.395 01:39:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:25.395 01:39:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:25.395 01:39:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:25.395 01:39:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:25.395 01:39:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:25.395 01:39:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:25.395 01:39:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:25.395 01:39:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:25.395 01:39:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:25.395 01:39:38 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:25.395 01:39:38 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:25.395 01:39:38 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:25.395 01:39:38 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:25.395 01:39:38 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:25.395 01:39:38 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.395 01:39:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.395 01:39:38 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.395 01:39:38 -- paths/export.sh@5 -- # export PATH 00:18:25.395 01:39:38 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.395 01:39:38 -- nvmf/common.sh@46 -- # : 0 00:18:25.395 01:39:38 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:25.395 01:39:38 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:25.395 01:39:38 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:25.395 01:39:38 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:25.395 01:39:38 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:25.395 01:39:38 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:25.395 01:39:38 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:25.395 01:39:38 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:25.395 01:39:38 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:25.395 01:39:38 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:25.395 01:39:38 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:18:25.395 01:39:38 -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:25.395 01:39:38 -- target/multipath.sh@43 -- # nvmftestinit 00:18:25.395 01:39:38 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:25.395 01:39:38 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:25.395 01:39:38 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:25.395 01:39:38 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:25.395 01:39:38 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:25.395 01:39:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:18:25.395 01:39:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:25.395 01:39:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:25.395 01:39:38 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:25.395 01:39:38 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:25.395 01:39:38 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:25.395 01:39:38 -- common/autotest_common.sh@10 -- # set +x 00:18:27.961 01:39:40 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:27.961 01:39:40 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:27.961 01:39:40 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:27.961 01:39:40 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:27.961 01:39:40 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:27.961 01:39:40 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:27.961 01:39:40 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:27.961 01:39:40 -- nvmf/common.sh@294 -- # net_devs=() 00:18:27.961 01:39:40 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:27.961 01:39:40 -- nvmf/common.sh@295 -- # e810=() 00:18:27.961 01:39:40 -- nvmf/common.sh@295 -- # local -ga e810 00:18:27.961 01:39:40 -- nvmf/common.sh@296 -- # x722=() 00:18:27.961 01:39:40 -- nvmf/common.sh@296 -- # local -ga x722 00:18:27.961 01:39:40 -- nvmf/common.sh@297 -- # mlx=() 00:18:27.961 01:39:40 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:27.961 01:39:40 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:27.961 01:39:40 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:27.961 01:39:40 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:27.961 01:39:40 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:27.961 01:39:40 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:27.961 01:39:40 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:18:27.961 01:39:40 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:27.961 01:39:40 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:27.961 01:39:40 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:27.961 01:39:40 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:27.961 01:39:40 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:27.961 01:39:40 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:27.961 01:39:40 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:27.961 01:39:40 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:27.961 01:39:40 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:27.961 01:39:40 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:27.961 01:39:40 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:27.961 01:39:40 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:27.961 01:39:40 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:27.961 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:27.961 01:39:40 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:27.961 01:39:40 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:27.961 01:39:40 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:27.961 01:39:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:27.961 01:39:40 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:27.961 01:39:40 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:27.961 01:39:40 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:27.961 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:27.961 01:39:40 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:27.961 01:39:40 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:27.961 01:39:40 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:27.961 01:39:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:27.961 01:39:40 
-- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:27.961 01:39:40 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:27.961 01:39:40 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:27.961 01:39:40 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:18:27.961 01:39:40 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:27.961 01:39:40 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:27.961 01:39:40 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:27.961 01:39:40 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:27.961 01:39:40 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:27.961 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:27.961 01:39:40 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:27.961 01:39:40 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:27.961 01:39:40 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:27.961 01:39:40 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:27.961 01:39:40 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:27.961 01:39:40 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:27.961 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:27.961 01:39:40 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:27.961 01:39:40 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:27.961 01:39:40 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:27.961 01:39:40 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:27.961 01:39:40 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:27.961 01:39:40 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:27.961 01:39:40 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:27.961 01:39:40 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:27.961 01:39:40 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:27.961 01:39:40 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 
00:18:27.961 01:39:40 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:27.961 01:39:40 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:27.961 01:39:40 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:27.961 01:39:40 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:27.961 01:39:40 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:27.961 01:39:40 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:27.961 01:39:40 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:27.961 01:39:40 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:27.961 01:39:40 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:27.961 01:39:40 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:27.961 01:39:40 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:27.962 01:39:40 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:27.962 01:39:40 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:27.962 01:39:40 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:27.962 01:39:40 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:27.962 01:39:40 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:27.962 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:27.962 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.267 ms 00:18:27.962 00:18:27.962 --- 10.0.0.2 ping statistics --- 00:18:27.962 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:27.962 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:18:27.962 01:39:40 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:27.962 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:27.962 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:18:27.962 00:18:27.962 --- 10.0.0.1 ping statistics --- 00:18:27.962 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:27.962 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:18:27.962 01:39:40 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:27.962 01:39:40 -- nvmf/common.sh@410 -- # return 0 00:18:27.962 01:39:40 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:27.962 01:39:40 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:27.962 01:39:40 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:27.962 01:39:40 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:27.962 01:39:40 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:27.962 01:39:40 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:27.962 01:39:40 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:27.962 01:39:40 -- target/multipath.sh@45 -- # '[' -z ']' 00:18:27.962 01:39:40 -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:18:27.962 only one NIC for nvmf test 00:18:27.962 01:39:40 -- target/multipath.sh@47 -- # nvmftestfini 00:18:27.962 01:39:40 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:27.962 01:39:40 -- nvmf/common.sh@116 -- # sync 00:18:27.962 01:39:40 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:27.962 01:39:40 -- nvmf/common.sh@119 -- # set +e 00:18:27.962 01:39:40 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:27.962 01:39:40 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:27.962 rmmod nvme_tcp 00:18:27.962 rmmod nvme_fabrics 00:18:27.962 rmmod nvme_keyring 00:18:27.962 01:39:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:27.962 01:39:40 -- nvmf/common.sh@123 -- # set -e 00:18:27.962 01:39:40 -- nvmf/common.sh@124 -- # return 0 00:18:27.962 01:39:40 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:18:27.962 01:39:40 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:27.962 01:39:40 -- 
nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:27.962 01:39:40 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:27.962 01:39:40 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:27.962 01:39:40 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:27.962 01:39:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:27.962 01:39:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:27.962 01:39:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:29.869 01:39:42 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:29.869 01:39:42 -- target/multipath.sh@48 -- # exit 0 00:18:29.869 01:39:42 -- target/multipath.sh@1 -- # nvmftestfini 00:18:29.869 01:39:42 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:29.869 01:39:42 -- nvmf/common.sh@116 -- # sync 00:18:29.869 01:39:42 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:29.869 01:39:42 -- nvmf/common.sh@119 -- # set +e 00:18:29.869 01:39:42 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:29.869 01:39:42 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:29.869 01:39:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:29.869 01:39:42 -- nvmf/common.sh@123 -- # set -e 00:18:29.869 01:39:42 -- nvmf/common.sh@124 -- # return 0 00:18:29.869 01:39:42 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:18:29.869 01:39:42 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:29.869 01:39:42 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:29.869 01:39:42 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:29.869 01:39:42 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:29.869 01:39:42 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:29.869 01:39:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:29.869 01:39:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:29.869 01:39:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:29.869 01:39:42 
-- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:29.869 00:18:29.869 real 0m4.445s 00:18:29.869 user 0m0.835s 00:18:29.869 sys 0m1.608s 00:18:29.869 01:39:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:29.869 01:39:42 -- common/autotest_common.sh@10 -- # set +x 00:18:29.869 ************************************ 00:18:29.869 END TEST nvmf_multipath 00:18:29.869 ************************************ 00:18:29.869 01:39:42 -- nvmf/nvmf.sh@52 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:18:29.869 01:39:42 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:29.869 01:39:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:29.869 01:39:42 -- common/autotest_common.sh@10 -- # set +x 00:18:29.869 ************************************ 00:18:29.869 START TEST nvmf_zcopy 00:18:29.869 ************************************ 00:18:29.869 01:39:42 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:18:29.869 * Looking for test storage... 
00:18:29.869 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:29.869 01:39:42 -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:29.869 01:39:42 -- nvmf/common.sh@7 -- # uname -s 00:18:29.869 01:39:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:29.869 01:39:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:29.869 01:39:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:29.869 01:39:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:29.869 01:39:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:29.869 01:39:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:29.869 01:39:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:29.869 01:39:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:29.869 01:39:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:29.869 01:39:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:29.869 01:39:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:29.869 01:39:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:29.869 01:39:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:29.869 01:39:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:29.869 01:39:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:29.869 01:39:42 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:29.869 01:39:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:29.869 01:39:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:29.869 01:39:42 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:29.869 01:39:42 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:29.869 01:39:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:29.869 01:39:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:29.869 01:39:42 -- paths/export.sh@5 -- # export PATH 00:18:29.869 01:39:42 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:29.869 01:39:42 -- nvmf/common.sh@46 -- # : 0 00:18:29.869 01:39:42 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:29.869 01:39:42 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:29.869 01:39:42 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:29.869 01:39:42 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:29.869 01:39:42 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:29.869 01:39:42 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:29.869 01:39:42 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:29.869 01:39:42 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:29.869 01:39:42 -- target/zcopy.sh@12 -- # nvmftestinit 00:18:29.869 01:39:42 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:29.869 01:39:42 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:29.869 01:39:42 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:29.869 01:39:42 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:29.869 01:39:42 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:29.869 01:39:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:29.869 01:39:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:29.870 01:39:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:29.870 01:39:42 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:29.870 01:39:42 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:29.870 01:39:42 -- 
nvmf/common.sh@284 -- # xtrace_disable 00:18:29.870 01:39:42 -- common/autotest_common.sh@10 -- # set +x 00:18:31.785 01:39:44 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:31.785 01:39:44 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:31.785 01:39:44 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:31.785 01:39:44 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:31.785 01:39:44 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:31.785 01:39:44 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:31.785 01:39:44 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:31.785 01:39:44 -- nvmf/common.sh@294 -- # net_devs=() 00:18:31.785 01:39:44 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:31.785 01:39:44 -- nvmf/common.sh@295 -- # e810=() 00:18:31.785 01:39:44 -- nvmf/common.sh@295 -- # local -ga e810 00:18:31.785 01:39:44 -- nvmf/common.sh@296 -- # x722=() 00:18:31.785 01:39:44 -- nvmf/common.sh@296 -- # local -ga x722 00:18:31.785 01:39:44 -- nvmf/common.sh@297 -- # mlx=() 00:18:31.786 01:39:44 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:31.786 01:39:44 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:31.786 01:39:44 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:31.786 01:39:44 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:31.786 01:39:44 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:31.786 01:39:44 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:31.786 01:39:44 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:31.786 01:39:44 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:31.786 01:39:44 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:31.786 01:39:44 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:31.786 01:39:44 -- nvmf/common.sh@316 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:31.786 01:39:44 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:31.786 01:39:44 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:31.786 01:39:44 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:31.786 01:39:44 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:31.786 01:39:44 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:31.786 01:39:44 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:31.786 01:39:44 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:31.786 01:39:44 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:31.786 01:39:44 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:31.786 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:31.786 01:39:44 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:31.786 01:39:44 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:31.786 01:39:44 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:31.786 01:39:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:31.786 01:39:44 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:31.786 01:39:44 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:31.786 01:39:44 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:31.786 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:31.786 01:39:44 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:31.786 01:39:44 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:31.786 01:39:44 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:31.786 01:39:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:31.786 01:39:44 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:31.786 01:39:44 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:31.786 01:39:44 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:31.786 01:39:44 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:18:31.786 01:39:44 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 
00:18:31.786 01:39:44 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:31.786 01:39:44 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:31.786 01:39:44 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:31.786 01:39:44 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:31.786 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:31.786 01:39:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:31.786 01:39:44 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:31.786 01:39:44 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:31.786 01:39:44 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:31.786 01:39:44 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:31.786 01:39:44 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:31.786 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:31.786 01:39:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:31.786 01:39:44 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:31.786 01:39:44 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:31.786 01:39:44 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:31.786 01:39:44 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:31.786 01:39:44 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:31.786 01:39:44 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:31.786 01:39:44 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:31.786 01:39:44 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:31.786 01:39:44 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:31.786 01:39:44 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:31.786 01:39:44 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:31.786 01:39:44 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:31.786 01:39:44 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
00:18:31.786 01:39:44 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:31.786 01:39:44 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:31.786 01:39:44 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:31.786 01:39:44 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:31.786 01:39:44 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:32.046 01:39:44 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:32.046 01:39:44 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:32.046 01:39:44 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:32.046 01:39:44 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:32.046 01:39:44 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:32.046 01:39:44 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:32.046 01:39:44 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:32.046 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:32.046 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.190 ms 00:18:32.046 00:18:32.046 --- 10.0.0.2 ping statistics --- 00:18:32.046 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:32.046 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:18:32.046 01:39:44 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:32.046 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:32.046 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:18:32.046 00:18:32.046 --- 10.0.0.1 ping statistics --- 00:18:32.046 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:32.046 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:18:32.046 01:39:44 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:32.046 01:39:44 -- nvmf/common.sh@410 -- # return 0 00:18:32.046 01:39:44 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:32.046 01:39:44 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:32.046 01:39:44 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:32.046 01:39:44 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:32.046 01:39:44 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:32.046 01:39:44 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:32.046 01:39:44 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:32.046 01:39:44 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:18:32.047 01:39:44 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:32.047 01:39:44 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:32.047 01:39:44 -- common/autotest_common.sh@10 -- # set +x 00:18:32.047 01:39:44 -- nvmf/common.sh@469 -- # nvmfpid=3784802 00:18:32.047 01:39:44 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:32.047 01:39:44 -- nvmf/common.sh@470 -- # waitforlisten 3784802 00:18:32.047 01:39:44 -- common/autotest_common.sh@819 -- # '[' -z 3784802 ']' 00:18:32.047 01:39:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:32.047 01:39:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:32.047 01:39:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:32.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:32.047 01:39:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:32.047 01:39:44 -- common/autotest_common.sh@10 -- # set +x 00:18:32.047 [2024-07-23 01:39:45.047133] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:18:32.047 [2024-07-23 01:39:45.047221] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:32.047 EAL: No free 2048 kB hugepages reported on node 1 00:18:32.047 [2024-07-23 01:39:45.113527] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:32.306 [2024-07-23 01:39:45.200129] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:32.306 [2024-07-23 01:39:45.200289] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:32.307 [2024-07-23 01:39:45.200306] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:32.307 [2024-07-23 01:39:45.200318] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:32.307 [2024-07-23 01:39:45.200348] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:33.247 01:39:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:33.247 01:39:46 -- common/autotest_common.sh@852 -- # return 0 00:18:33.247 01:39:46 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:33.247 01:39:46 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:33.247 01:39:46 -- common/autotest_common.sh@10 -- # set +x 00:18:33.247 01:39:46 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:33.247 01:39:46 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:18:33.247 01:39:46 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:18:33.247 01:39:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:33.247 01:39:46 -- common/autotest_common.sh@10 -- # set +x 00:18:33.247 [2024-07-23 01:39:46.064145] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:33.247 01:39:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:33.247 01:39:46 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:33.247 01:39:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:33.247 01:39:46 -- common/autotest_common.sh@10 -- # set +x 00:18:33.247 01:39:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:33.247 01:39:46 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:33.247 01:39:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:33.247 01:39:46 -- common/autotest_common.sh@10 -- # set +x 00:18:33.247 [2024-07-23 01:39:46.080301] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:33.247 01:39:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:33.247 01:39:46 -- target/zcopy.sh@27 -- # rpc_cmd 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:33.247 01:39:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:33.247 01:39:46 -- common/autotest_common.sh@10 -- # set +x 00:18:33.247 01:39:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:33.247 01:39:46 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:18:33.247 01:39:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:33.247 01:39:46 -- common/autotest_common.sh@10 -- # set +x 00:18:33.247 malloc0 00:18:33.247 01:39:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:33.247 01:39:46 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:33.247 01:39:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:33.247 01:39:46 -- common/autotest_common.sh@10 -- # set +x 00:18:33.247 01:39:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:33.247 01:39:46 -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:18:33.247 01:39:46 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:18:33.247 01:39:46 -- nvmf/common.sh@520 -- # config=() 00:18:33.247 01:39:46 -- nvmf/common.sh@520 -- # local subsystem config 00:18:33.247 01:39:46 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:33.247 01:39:46 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:33.247 { 00:18:33.247 "params": { 00:18:33.247 "name": "Nvme$subsystem", 00:18:33.247 "trtype": "$TEST_TRANSPORT", 00:18:33.247 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:33.247 "adrfam": "ipv4", 00:18:33.247 "trsvcid": "$NVMF_PORT", 00:18:33.247 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:33.247 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:33.247 "hdgst": ${hdgst:-false}, 00:18:33.247 "ddgst": ${ddgst:-false} 00:18:33.247 }, 00:18:33.247 "method": "bdev_nvme_attach_controller" 00:18:33.247 } 00:18:33.247 
EOF 00:18:33.247 )") 00:18:33.247 01:39:46 -- nvmf/common.sh@542 -- # cat 00:18:33.247 01:39:46 -- nvmf/common.sh@544 -- # jq . 00:18:33.247 01:39:46 -- nvmf/common.sh@545 -- # IFS=, 00:18:33.247 01:39:46 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:33.247 "params": { 00:18:33.247 "name": "Nvme1", 00:18:33.247 "trtype": "tcp", 00:18:33.247 "traddr": "10.0.0.2", 00:18:33.247 "adrfam": "ipv4", 00:18:33.247 "trsvcid": "4420", 00:18:33.247 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:33.247 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:33.247 "hdgst": false, 00:18:33.247 "ddgst": false 00:18:33.247 }, 00:18:33.247 "method": "bdev_nvme_attach_controller" 00:18:33.247 }' 00:18:33.247 [2024-07-23 01:39:46.155257] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:18:33.247 [2024-07-23 01:39:46.155343] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3784958 ] 00:18:33.247 EAL: No free 2048 kB hugepages reported on node 1 00:18:33.247 [2024-07-23 01:39:46.214814] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:33.247 [2024-07-23 01:39:46.305489] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:33.508 Running I/O for 10 seconds... 
00:18:43.501 00:18:43.501 Latency(us) 00:18:43.501 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:43.501 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:18:43.501 Verification LBA range: start 0x0 length 0x1000 00:18:43.501 Nvme1n1 : 10.02 7525.25 58.79 0.00 0.00 16972.47 1686.95 25631.86 00:18:43.501 =================================================================================================================== 00:18:43.501 Total : 7525.25 58.79 0.00 0.00 16972.47 1686.95 25631.86 00:18:43.760 01:39:56 -- target/zcopy.sh@39 -- # perfpid=3786192 00:18:43.760 01:39:56 -- target/zcopy.sh@41 -- # xtrace_disable 00:18:43.760 01:39:56 -- common/autotest_common.sh@10 -- # set +x 00:18:43.760 01:39:56 -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:18:43.760 01:39:56 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:18:43.760 01:39:56 -- nvmf/common.sh@520 -- # config=() 00:18:43.760 01:39:56 -- nvmf/common.sh@520 -- # local subsystem config 00:18:43.760 01:39:56 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:43.760 01:39:56 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:43.760 { 00:18:43.760 "params": { 00:18:43.760 "name": "Nvme$subsystem", 00:18:43.760 "trtype": "$TEST_TRANSPORT", 00:18:43.760 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:43.761 "adrfam": "ipv4", 00:18:43.761 "trsvcid": "$NVMF_PORT", 00:18:43.761 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:43.761 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:43.761 "hdgst": ${hdgst:-false}, 00:18:43.761 "ddgst": ${ddgst:-false} 00:18:43.761 }, 00:18:43.761 "method": "bdev_nvme_attach_controller" 00:18:43.761 } 00:18:43.761 EOF 00:18:43.761 )") 00:18:43.761 01:39:56 -- nvmf/common.sh@542 -- # cat 00:18:43.761 01:39:56 -- nvmf/common.sh@544 -- # jq . 
00:18:43.761 [2024-07-23 01:39:56.785966] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.761 [2024-07-23 01:39:56.786027] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.761 01:39:56 -- nvmf/common.sh@545 -- # IFS=, 00:18:43.761 01:39:56 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:43.761 "params": { 00:18:43.761 "name": "Nvme1", 00:18:43.761 "trtype": "tcp", 00:18:43.761 "traddr": "10.0.0.2", 00:18:43.761 "adrfam": "ipv4", 00:18:43.761 "trsvcid": "4420", 00:18:43.761 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:43.761 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:43.761 "hdgst": false, 00:18:43.761 "ddgst": false 00:18:43.761 }, 00:18:43.761 "method": "bdev_nvme_attach_controller" 00:18:43.761 }' 00:18:43.761 [2024-07-23 01:39:56.793913] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.761 [2024-07-23 01:39:56.793940] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.761 [2024-07-23 01:39:56.801932] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.761 [2024-07-23 01:39:56.801958] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.761 [2024-07-23 01:39:56.809952] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.761 [2024-07-23 01:39:56.809976] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.761 [2024-07-23 01:39:56.817961] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.761 [2024-07-23 01:39:56.817986] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.761 [2024-07-23 01:39:56.821750] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:18:43.761 [2024-07-23 01:39:56.821819] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3786192 ] 00:18:43.761 [2024-07-23 01:39:56.825981] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.761 [2024-07-23 01:39:56.826004] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.761 [2024-07-23 01:39:56.834004] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.761 [2024-07-23 01:39:56.834027] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.761 [2024-07-23 01:39:56.842024] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.761 [2024-07-23 01:39:56.842048] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.761 [2024-07-23 01:39:56.850045] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.761 [2024-07-23 01:39:56.850068] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:43.761 EAL: No free 2048 kB hugepages reported on node 1 00:18:43.761 [2024-07-23 01:39:56.858068] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:43.761 [2024-07-23 01:39:56.858091] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.022 [2024-07-23 01:39:56.866089] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.022 [2024-07-23 01:39:56.866113] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.022 [2024-07-23 01:39:56.874112] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.022 [2024-07-23 01:39:56.874136] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:18:44.022 [2024-07-23 01:39:56.882133] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:18:44.022 [2024-07-23 01:39:56.882156] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:18:44.022 [2024-07-23 01:39:56.887504] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:44.022 [2024-07-23 01:39:56.975902] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:18:44.283 Running I/O for 5 seconds...
00:18:45.587 [2024-07-23 01:39:58.549959] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:18:45.587 [2024-07-23 01:39:58.549987]
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.587 [2024-07-23 01:39:58.560748] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.587 [2024-07-23 01:39:58.560777] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.587 [2024-07-23 01:39:58.571235] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.587 [2024-07-23 01:39:58.571264] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.587 [2024-07-23 01:39:58.581742] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.587 [2024-07-23 01:39:58.581770] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.587 [2024-07-23 01:39:58.592479] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.587 [2024-07-23 01:39:58.592510] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.587 [2024-07-23 01:39:58.604873] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.587 [2024-07-23 01:39:58.604915] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.587 [2024-07-23 01:39:58.614413] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.587 [2024-07-23 01:39:58.614440] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.587 [2024-07-23 01:39:58.625683] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.587 [2024-07-23 01:39:58.625710] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.587 [2024-07-23 01:39:58.636252] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.587 [2024-07-23 01:39:58.636280] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:18:45.587 [2024-07-23 01:39:58.646746] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.587 [2024-07-23 01:39:58.646774] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.587 [2024-07-23 01:39:58.659494] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.587 [2024-07-23 01:39:58.659521] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.587 [2024-07-23 01:39:58.669248] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.587 [2024-07-23 01:39:58.669275] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.587 [2024-07-23 01:39:58.680737] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.587 [2024-07-23 01:39:58.680765] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.846 [2024-07-23 01:39:58.691206] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.846 [2024-07-23 01:39:58.691249] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.846 [2024-07-23 01:39:58.701864] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.846 [2024-07-23 01:39:58.701892] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.846 [2024-07-23 01:39:58.712368] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.846 [2024-07-23 01:39:58.712396] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.846 [2024-07-23 01:39:58.723032] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.846 [2024-07-23 01:39:58.723059] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.846 [2024-07-23 01:39:58.733773] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.846 [2024-07-23 01:39:58.733800] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.846 [2024-07-23 01:39:58.744747] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.846 [2024-07-23 01:39:58.744775] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.846 [2024-07-23 01:39:58.755292] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.846 [2024-07-23 01:39:58.755319] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.846 [2024-07-23 01:39:58.765871] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.846 [2024-07-23 01:39:58.765898] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.846 [2024-07-23 01:39:58.776306] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.846 [2024-07-23 01:39:58.776332] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.846 [2024-07-23 01:39:58.786702] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.846 [2024-07-23 01:39:58.786730] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.846 [2024-07-23 01:39:58.799115] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.846 [2024-07-23 01:39:58.799142] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.846 [2024-07-23 01:39:58.808450] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.846 [2024-07-23 01:39:58.808492] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.846 [2024-07-23 01:39:58.821442] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:18:45.846 [2024-07-23 01:39:58.821470] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.846 [2024-07-23 01:39:58.831678] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.846 [2024-07-23 01:39:58.831706] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.846 [2024-07-23 01:39:58.842667] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.847 [2024-07-23 01:39:58.842695] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.847 [2024-07-23 01:39:58.853189] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.847 [2024-07-23 01:39:58.853215] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.847 [2024-07-23 01:39:58.863654] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.847 [2024-07-23 01:39:58.863682] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.847 [2024-07-23 01:39:58.875851] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.847 [2024-07-23 01:39:58.875879] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.847 [2024-07-23 01:39:58.884750] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.847 [2024-07-23 01:39:58.884779] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.847 [2024-07-23 01:39:58.895968] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.847 [2024-07-23 01:39:58.895995] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.847 [2024-07-23 01:39:58.906355] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.847 
[2024-07-23 01:39:58.906381] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.847 [2024-07-23 01:39:58.917026] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.847 [2024-07-23 01:39:58.917053] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.847 [2024-07-23 01:39:58.929568] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.847 [2024-07-23 01:39:58.929609] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:45.847 [2024-07-23 01:39:58.938439] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:45.847 [2024-07-23 01:39:58.938472] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.107 [2024-07-23 01:39:58.949870] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:46.107 [2024-07-23 01:39:58.949898] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.107 [2024-07-23 01:39:58.962571] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:46.107 [2024-07-23 01:39:58.962598] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.107 [2024-07-23 01:39:58.971844] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:46.107 [2024-07-23 01:39:58.971872] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.107 [2024-07-23 01:39:58.982812] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:46.107 [2024-07-23 01:39:58.982840] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.107 [2024-07-23 01:39:58.994768] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:46.107 [2024-07-23 01:39:58.994804] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.107 [2024-07-23 01:39:59.003819] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:46.107 [2024-07-23 01:39:59.003847] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.107 [2024-07-23 01:39:59.016521] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:46.107 [2024-07-23 01:39:59.016548] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.107 [2024-07-23 01:39:59.026282] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:46.107 [2024-07-23 01:39:59.026309] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.107 [2024-07-23 01:39:59.037394] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:46.107 [2024-07-23 01:39:59.037422] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.107 [2024-07-23 01:39:59.049877] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:46.107 [2024-07-23 01:39:59.049906] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.107 [2024-07-23 01:39:59.059880] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:46.107 [2024-07-23 01:39:59.059922] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.107 [2024-07-23 01:39:59.070933] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:46.107 [2024-07-23 01:39:59.070961] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.107 [2024-07-23 01:39:59.081339] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:46.107 [2024-07-23 01:39:59.081366] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:18:46.107 [2024-07-23 01:39:59.091479] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:46.107 [2024-07-23 01:39:59.091507] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.107 [2024-07-23 01:39:59.102971] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:46.107 [2024-07-23 01:39:59.103014] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.107 [2024-07-23 01:39:59.115828] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:46.107 [2024-07-23 01:39:59.115856] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.107 [2024-07-23 01:39:59.125432] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:46.107 [2024-07-23 01:39:59.125458] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.107 [2024-07-23 01:39:59.137054] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:46.108 [2024-07-23 01:39:59.137085] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.108 [2024-07-23 01:39:59.147558] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:46.108 [2024-07-23 01:39:59.147597] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.108 [2024-07-23 01:39:59.158330] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:46.108 [2024-07-23 01:39:59.158357] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.108 [2024-07-23 01:39:59.168813] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:46.108 [2024-07-23 01:39:59.168840] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.108 [2024-07-23 01:39:59.179338] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:46.108 [2024-07-23 01:39:59.179365] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.108 [2024-07-23 01:39:59.189742] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:46.108 [2024-07-23 01:39:59.189770] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.108 [2024-07-23 01:39:59.200316] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:46.108 [2024-07-23 01:39:59.200344] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.368 [2024-07-23 01:39:59.211176] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:46.368 [2024-07-23 01:39:59.211204] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.368 [2024-07-23 01:39:59.222027] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:46.368 [2024-07-23 01:39:59.222054] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.369 [2024-07-23 01:39:59.233279] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:46.369 [2024-07-23 01:39:59.233306] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.369 [2024-07-23 01:39:59.243836] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:46.369 [2024-07-23 01:39:59.243864] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.369 [2024-07-23 01:39:59.254586] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:46.369 [2024-07-23 01:39:59.254640] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.369 [2024-07-23 01:39:59.267509] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:18:46.369 [2024-07-23 01:39:59.267536] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.369 [2024-07-23 01:39:59.277126] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:46.369 [2024-07-23 01:39:59.277153] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.369 [2024-07-23 01:39:59.288384] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:46.369 [2024-07-23 01:39:59.288412] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.369 [2024-07-23 01:39:59.301038] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:46.369 [2024-07-23 01:39:59.301065] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.369 [2024-07-23 01:39:59.310373] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:46.369 [2024-07-23 01:39:59.310400] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.369 [2024-07-23 01:39:59.321604] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:46.369 [2024-07-23 01:39:59.321659] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.369 [2024-07-23 01:39:59.332200] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:46.369 [2024-07-23 01:39:59.332227] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.369 [2024-07-23 01:39:59.344880] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:46.369 [2024-07-23 01:39:59.344922] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.369 [2024-07-23 01:39:59.354922] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:46.369 
[2024-07-23 01:39:59.354972] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.369 [2024-07-23 01:39:59.366330] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:46.369 [2024-07-23 01:39:59.366358] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.369 [2024-07-23 01:39:59.376588] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:46.369 [2024-07-23 01:39:59.376643] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.369 [2024-07-23 01:39:59.387264] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:46.369 [2024-07-23 01:39:59.387291] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.369 [2024-07-23 01:39:59.398049] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:46.369 [2024-07-23 01:39:59.398076] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.369 [2024-07-23 01:39:59.408859] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:46.369 [2024-07-23 01:39:59.408887] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.369 [2024-07-23 01:39:59.419522] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:46.369 [2024-07-23 01:39:59.419550] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.369 [2024-07-23 01:39:59.430374] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:46.369 [2024-07-23 01:39:59.430401] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.369 [2024-07-23 01:39:59.442807] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:46.369 [2024-07-23 01:39:59.442836] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.369 [2024-07-23 01:39:59.451996] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:46.369 [2024-07-23 01:39:59.452038] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.369 [2024-07-23 01:39:59.463415] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:46.369 [2024-07-23 01:39:59.463443] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.629 [2024-07-23 01:39:59.476025] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:46.629 [2024-07-23 01:39:59.476053] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.629 [2024-07-23 01:39:59.487296] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:46.629 [2024-07-23 01:39:59.487324] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.629 [2024-07-23 01:39:59.496960] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:46.629 [2024-07-23 01:39:59.496987] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.629 [2024-07-23 01:39:59.507868] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:46.629 [2024-07-23 01:39:59.507910] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.629 [2024-07-23 01:39:59.518536] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:46.629 [2024-07-23 01:39:59.518563] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.629 [2024-07-23 01:39:59.529407] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:46.629 [2024-07-23 01:39:59.529435] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:18:46.629 [2024-07-23 01:39:59.540211] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:46.629 [2024-07-23 01:39:59.540240] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.629 [2024-07-23 01:39:59.550953] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:46.629 [2024-07-23 01:39:59.550981] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.629 [2024-07-23 01:39:59.561626] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:46.629 [2024-07-23 01:39:59.561672] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.629 [2024-07-23 01:39:59.572587] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:46.629 [2024-07-23 01:39:59.572627] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.629 [2024-07-23 01:39:59.583210] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:46.629 [2024-07-23 01:39:59.583240] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.629 [2024-07-23 01:39:59.594122] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:46.629 [2024-07-23 01:39:59.594149] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.629 [2024-07-23 01:39:59.604888] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:46.629 [2024-07-23 01:39:59.604939] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.629 [2024-07-23 01:39:59.615869] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:46.629 [2024-07-23 01:39:59.615899] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.629 [2024-07-23 01:39:59.626487] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:46.629 [2024-07-23 01:39:59.626514] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.629 [2024-07-23 01:39:59.637443] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:46.629 [2024-07-23 01:39:59.637470] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.629 [2024-07-23 01:39:59.648040] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:46.629 [2024-07-23 01:39:59.648067] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.629 [2024-07-23 01:39:59.658216] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:46.629 [2024-07-23 01:39:59.658243] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.629 [2024-07-23 01:39:59.669203] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:46.629 [2024-07-23 01:39:59.669230] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.629 [2024-07-23 01:39:59.679643] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:46.629 [2024-07-23 01:39:59.679671] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.629 [2024-07-23 01:39:59.690204] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:46.629 [2024-07-23 01:39:59.690231] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.629 [2024-07-23 01:39:59.702605] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:46.629 [2024-07-23 01:39:59.702642] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.629 [2024-07-23 01:39:59.711980] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:18:46.629 [2024-07-23 01:39:59.712007] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:46.629
[... the same error pair (subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use / nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace) repeats roughly every 10 ms from [2024-07-23 01:39:59.722913] through [2024-07-23 01:40:01.564804] (elapsed 00:18:46.629 to 00:18:48.711) ...]
[2024-07-23 01:40:01.564832] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.711 [2024-07-23 01:40:01.575243] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.711 [2024-07-23 01:40:01.575273] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.711 [2024-07-23 01:40:01.586344] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.711 [2024-07-23 01:40:01.586374] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.711 [2024-07-23 01:40:01.599389] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.711 [2024-07-23 01:40:01.599416] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.711 [2024-07-23 01:40:01.609586] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.711 [2024-07-23 01:40:01.609621] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.711 [2024-07-23 01:40:01.619635] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.711 [2024-07-23 01:40:01.619673] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.711 [2024-07-23 01:40:01.630038] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.711 [2024-07-23 01:40:01.630065] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.711 [2024-07-23 01:40:01.640534] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.711 [2024-07-23 01:40:01.640562] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.711 [2024-07-23 01:40:01.650580] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.711 [2024-07-23 01:40:01.650608] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.711 [2024-07-23 01:40:01.660724] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.711 [2024-07-23 01:40:01.660751] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.711 [2024-07-23 01:40:01.671316] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.711 [2024-07-23 01:40:01.671344] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.711 [2024-07-23 01:40:01.681816] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.711 [2024-07-23 01:40:01.681843] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.711 [2024-07-23 01:40:01.694871] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.711 [2024-07-23 01:40:01.694916] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.711 [2024-07-23 01:40:01.704206] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.711 [2024-07-23 01:40:01.704232] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.711 [2024-07-23 01:40:01.715446] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.711 [2024-07-23 01:40:01.715473] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.711 [2024-07-23 01:40:01.726022] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.711 [2024-07-23 01:40:01.726063] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.711 [2024-07-23 01:40:01.736902] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.711 [2024-07-23 01:40:01.736946] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:18:48.711 [2024-07-23 01:40:01.747740] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.711 [2024-07-23 01:40:01.747768] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.711 [2024-07-23 01:40:01.758207] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.711 [2024-07-23 01:40:01.758234] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.711 [2024-07-23 01:40:01.770982] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.711 [2024-07-23 01:40:01.771010] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.711 [2024-07-23 01:40:01.780513] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.711 [2024-07-23 01:40:01.780555] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.711 [2024-07-23 01:40:01.791582] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.711 [2024-07-23 01:40:01.791633] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.711 [2024-07-23 01:40:01.802541] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.711 [2024-07-23 01:40:01.802569] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.971 [2024-07-23 01:40:01.813050] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.971 [2024-07-23 01:40:01.813080] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.971 [2024-07-23 01:40:01.823408] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.971 [2024-07-23 01:40:01.823447] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.971 [2024-07-23 01:40:01.835393] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.971 [2024-07-23 01:40:01.835431] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.971 [2024-07-23 01:40:01.844971] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.971 [2024-07-23 01:40:01.844998] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.971 [2024-07-23 01:40:01.856438] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.971 [2024-07-23 01:40:01.856466] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.971 [2024-07-23 01:40:01.866228] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.971 [2024-07-23 01:40:01.866255] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.971 [2024-07-23 01:40:01.877522] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.971 [2024-07-23 01:40:01.877550] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.971 [2024-07-23 01:40:01.888281] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.971 [2024-07-23 01:40:01.888308] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.971 [2024-07-23 01:40:01.898922] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.971 [2024-07-23 01:40:01.898950] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.971 [2024-07-23 01:40:01.911133] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.971 [2024-07-23 01:40:01.911161] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.971 [2024-07-23 01:40:01.920339] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:18:48.971 [2024-07-23 01:40:01.920366] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.971 [2024-07-23 01:40:01.931526] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.972 [2024-07-23 01:40:01.931553] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.972 [2024-07-23 01:40:01.944201] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.972 [2024-07-23 01:40:01.944229] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.972 [2024-07-23 01:40:01.953970] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.972 [2024-07-23 01:40:01.953998] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.972 [2024-07-23 01:40:01.965308] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.972 [2024-07-23 01:40:01.965336] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.972 [2024-07-23 01:40:01.978057] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.972 [2024-07-23 01:40:01.978085] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.972 [2024-07-23 01:40:01.987204] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.972 [2024-07-23 01:40:01.987231] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.972 [2024-07-23 01:40:01.998136] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.972 [2024-07-23 01:40:01.998167] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.972 [2024-07-23 01:40:02.008821] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.972 
[2024-07-23 01:40:02.008849] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.972 [2024-07-23 01:40:02.019397] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.972 [2024-07-23 01:40:02.019424] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.972 [2024-07-23 01:40:02.029798] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.972 [2024-07-23 01:40:02.029827] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.972 [2024-07-23 01:40:02.040008] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.972 [2024-07-23 01:40:02.040037] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.972 [2024-07-23 01:40:02.050408] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.972 [2024-07-23 01:40:02.050435] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.972 [2024-07-23 01:40:02.060864] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.972 [2024-07-23 01:40:02.060906] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.240 [2024-07-23 01:40:02.071107] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.240 [2024-07-23 01:40:02.071136] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.240 [2024-07-23 01:40:02.081729] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.240 [2024-07-23 01:40:02.081757] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.240 [2024-07-23 01:40:02.092328] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.240 [2024-07-23 01:40:02.092356] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.240 [2024-07-23 01:40:02.105065] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.240 [2024-07-23 01:40:02.105092] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.240 [2024-07-23 01:40:02.114730] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.240 [2024-07-23 01:40:02.114759] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.240 [2024-07-23 01:40:02.126020] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.240 [2024-07-23 01:40:02.126048] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.240 [2024-07-23 01:40:02.136148] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.240 [2024-07-23 01:40:02.136174] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.240 [2024-07-23 01:40:02.146699] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.240 [2024-07-23 01:40:02.146728] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.240 [2024-07-23 01:40:02.157178] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.240 [2024-07-23 01:40:02.157206] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.240 [2024-07-23 01:40:02.168032] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.240 [2024-07-23 01:40:02.168059] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.240 [2024-07-23 01:40:02.178573] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.240 [2024-07-23 01:40:02.178603] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:18:49.240 [2024-07-23 01:40:02.189354] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.240 [2024-07-23 01:40:02.189381] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.240 [2024-07-23 01:40:02.199962] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.240 [2024-07-23 01:40:02.199997] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.240 [2024-07-23 01:40:02.210502] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.240 [2024-07-23 01:40:02.210529] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.240 [2024-07-23 01:40:02.223327] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.240 [2024-07-23 01:40:02.223355] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.240 [2024-07-23 01:40:02.232795] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.240 [2024-07-23 01:40:02.232823] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.240 [2024-07-23 01:40:02.243991] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.240 [2024-07-23 01:40:02.244018] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.240 [2024-07-23 01:40:02.254264] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.240 [2024-07-23 01:40:02.254292] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.240 [2024-07-23 01:40:02.264748] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.240 [2024-07-23 01:40:02.264776] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.240 [2024-07-23 01:40:02.275314] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.240 [2024-07-23 01:40:02.275341] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.240 [2024-07-23 01:40:02.287913] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.240 [2024-07-23 01:40:02.287940] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.240 [2024-07-23 01:40:02.299456] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.240 [2024-07-23 01:40:02.299483] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.240 [2024-07-23 01:40:02.308640] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.240 [2024-07-23 01:40:02.308668] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.240 [2024-07-23 01:40:02.320347] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.240 [2024-07-23 01:40:02.320375] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.240 [2024-07-23 01:40:02.330672] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.241 [2024-07-23 01:40:02.330701] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.539 [2024-07-23 01:40:02.340035] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.539 [2024-07-23 01:40:02.340063] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.539 00:18:49.539 Latency(us) 00:18:49.539 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:49.539 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:18:49.539 Nvme1n1 : 5.01 12046.75 94.12 0.00 0.00 10611.88 4344.79 21748.24 00:18:49.539 
=================================================================================================================== 00:18:49.539 Total : 12046.75 94.12 0.00 0.00 10611.88 4344.79 21748.24 00:18:49.539 [2024-07-23 01:40:02.345262] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.539 [2024-07-23 01:40:02.345292] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.540 [2024-07-23 01:40:02.561851] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.540 [2024-07-23 01:40:02.561876] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.540 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3786192) - No such process 00:18:49.540 01:40:02 -- target/zcopy.sh@49 -- # wait 3786192 00:18:49.540 01:40:02 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:49.540 01:40:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:49.540 01:40:02 -- common/autotest_common.sh@10 -- # set +x 00:18:49.540 01:40:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:49.540 01:40:02 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:18:49.540 01:40:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:49.540 01:40:02 -- common/autotest_common.sh@10 -- # set +x 00:18:49.540 delay0 00:18:49.540 01:40:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:49.540 01:40:02 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns
nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:18:49.540 01:40:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:49.540 01:40:02 -- common/autotest_common.sh@10 -- # set +x 00:18:49.540 01:40:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:49.540 01:40:02 -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:18:49.540 EAL: No free 2048 kB hugepages reported on node 1 00:18:49.797 [2024-07-23 01:40:02.643387] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:18:57.921 Initializing NVMe Controllers 00:18:57.921 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:57.921 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:57.921 Initialization complete. Launching workers. 
00:18:57.921 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 272, failed: 12028 00:18:57.921 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 12200, failed to submit 100 00:18:57.921 success 12079, unsuccess 121, failed 0 00:18:57.921 01:40:09 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:18:57.921 01:40:09 -- target/zcopy.sh@60 -- # nvmftestfini 00:18:57.921 01:40:09 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:57.921 01:40:09 -- nvmf/common.sh@116 -- # sync 00:18:57.921 01:40:09 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:57.921 01:40:09 -- nvmf/common.sh@119 -- # set +e 00:18:57.921 01:40:09 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:57.921 01:40:09 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:57.921 rmmod nvme_tcp 00:18:57.921 rmmod nvme_fabrics 00:18:57.921 rmmod nvme_keyring 00:18:57.921 01:40:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:57.921 01:40:09 -- nvmf/common.sh@123 -- # set -e 00:18:57.921 01:40:09 -- nvmf/common.sh@124 -- # return 0 00:18:57.921 01:40:09 -- nvmf/common.sh@477 -- # '[' -n 3784802 ']' 00:18:57.921 01:40:09 -- nvmf/common.sh@478 -- # killprocess 3784802 00:18:57.921 01:40:09 -- common/autotest_common.sh@926 -- # '[' -z 3784802 ']' 00:18:57.921 01:40:09 -- common/autotest_common.sh@930 -- # kill -0 3784802 00:18:57.921 01:40:09 -- common/autotest_common.sh@931 -- # uname 00:18:57.921 01:40:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:57.921 01:40:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3784802 00:18:57.921 01:40:09 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:18:57.921 01:40:09 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:18:57.921 01:40:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3784802' 00:18:57.921 killing process with pid 3784802 00:18:57.921 01:40:09 -- common/autotest_common.sh@945 -- # kill 
3784802 00:18:57.921 01:40:09 -- common/autotest_common.sh@950 -- # wait 3784802 00:18:57.921 01:40:10 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:57.921 01:40:10 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:57.921 01:40:10 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:57.921 01:40:10 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:57.921 01:40:10 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:57.921 01:40:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:57.921 01:40:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:57.921 01:40:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:59.301 01:40:12 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:59.301 00:18:59.301 real 0m29.220s 00:18:59.301 user 0m41.420s 00:18:59.301 sys 0m9.802s 00:18:59.301 01:40:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:59.301 01:40:12 -- common/autotest_common.sh@10 -- # set +x 00:18:59.301 ************************************ 00:18:59.301 END TEST nvmf_zcopy 00:18:59.301 ************************************ 00:18:59.302 01:40:12 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:59.302 01:40:12 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:59.302 01:40:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:59.302 01:40:12 -- common/autotest_common.sh@10 -- # set +x 00:18:59.302 ************************************ 00:18:59.302 START TEST nvmf_nmic 00:18:59.302 ************************************ 00:18:59.302 01:40:12 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:59.302 * Looking for test storage... 
00:18:59.302 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:59.302 01:40:12 -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:59.302 01:40:12 -- nvmf/common.sh@7 -- # uname -s 00:18:59.302 01:40:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:59.302 01:40:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:59.302 01:40:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:59.302 01:40:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:59.302 01:40:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:59.302 01:40:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:59.302 01:40:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:59.302 01:40:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:59.302 01:40:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:59.302 01:40:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:59.302 01:40:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:59.302 01:40:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:59.302 01:40:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:59.302 01:40:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:59.302 01:40:12 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:59.302 01:40:12 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:59.302 01:40:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:59.302 01:40:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:59.302 01:40:12 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:59.302 01:40:12 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.302 01:40:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.302 01:40:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.302 01:40:12 -- paths/export.sh@5 -- # export PATH 00:18:59.302 01:40:12 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.302 01:40:12 -- nvmf/common.sh@46 -- # : 0 00:18:59.302 01:40:12 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:59.302 01:40:12 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:59.302 01:40:12 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:59.302 01:40:12 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:59.302 01:40:12 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:59.302 01:40:12 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:59.302 01:40:12 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:59.302 01:40:12 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:59.302 01:40:12 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:59.302 01:40:12 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:59.302 01:40:12 -- target/nmic.sh@14 -- # nvmftestinit 00:18:59.302 01:40:12 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:59.302 01:40:12 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:59.302 01:40:12 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:59.302 01:40:12 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:59.302 01:40:12 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:59.302 01:40:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:59.302 01:40:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:59.302 01:40:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:59.302 01:40:12 -- nvmf/common.sh@402 
-- # [[ phy != virt ]] 00:18:59.302 01:40:12 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:59.302 01:40:12 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:59.302 01:40:12 -- common/autotest_common.sh@10 -- # set +x 00:19:01.211 01:40:14 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:01.211 01:40:14 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:01.211 01:40:14 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:01.211 01:40:14 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:01.211 01:40:14 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:01.211 01:40:14 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:01.211 01:40:14 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:01.211 01:40:14 -- nvmf/common.sh@294 -- # net_devs=() 00:19:01.211 01:40:14 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:01.211 01:40:14 -- nvmf/common.sh@295 -- # e810=() 00:19:01.211 01:40:14 -- nvmf/common.sh@295 -- # local -ga e810 00:19:01.211 01:40:14 -- nvmf/common.sh@296 -- # x722=() 00:19:01.211 01:40:14 -- nvmf/common.sh@296 -- # local -ga x722 00:19:01.211 01:40:14 -- nvmf/common.sh@297 -- # mlx=() 00:19:01.211 01:40:14 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:01.211 01:40:14 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:01.211 01:40:14 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:01.211 01:40:14 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:01.211 01:40:14 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:01.211 01:40:14 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:01.211 01:40:14 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:01.211 01:40:14 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:01.211 01:40:14 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:01.211 01:40:14 -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:01.211 01:40:14 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:01.211 01:40:14 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:01.211 01:40:14 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:01.211 01:40:14 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:19:01.211 01:40:14 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:19:01.211 01:40:14 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:19:01.211 01:40:14 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:19:01.211 01:40:14 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:01.211 01:40:14 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:01.211 01:40:14 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:01.211 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:01.211 01:40:14 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:01.211 01:40:14 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:01.211 01:40:14 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:01.211 01:40:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:01.211 01:40:14 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:01.211 01:40:14 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:01.211 01:40:14 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:01.211 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:01.211 01:40:14 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:01.211 01:40:14 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:01.211 01:40:14 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:01.211 01:40:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:01.211 01:40:14 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:01.211 01:40:14 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:01.211 01:40:14 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:19:01.211 01:40:14 -- nvmf/common.sh@371 -- # [[ tcp == 
rdma ]] 00:19:01.211 01:40:14 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:01.211 01:40:14 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:01.211 01:40:14 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:01.211 01:40:14 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:01.211 01:40:14 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:01.211 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:01.211 01:40:14 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:01.211 01:40:14 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:01.211 01:40:14 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:01.211 01:40:14 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:01.211 01:40:14 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:01.211 01:40:14 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:01.211 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:01.211 01:40:14 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:01.211 01:40:14 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:01.211 01:40:14 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:01.211 01:40:14 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:01.211 01:40:14 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:19:01.211 01:40:14 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:19:01.211 01:40:14 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:01.211 01:40:14 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:01.211 01:40:14 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:01.211 01:40:14 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:19:01.211 01:40:14 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:01.211 01:40:14 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:01.211 01:40:14 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 
00:19:01.211 01:40:14 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:01.211 01:40:14 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:01.211 01:40:14 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:19:01.211 01:40:14 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:19:01.211 01:40:14 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:19:01.211 01:40:14 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:01.211 01:40:14 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:01.211 01:40:14 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:01.211 01:40:14 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:19:01.211 01:40:14 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:01.211 01:40:14 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:01.211 01:40:14 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:01.211 01:40:14 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:19:01.211 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:01.211 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:19:01.211 00:19:01.211 --- 10.0.0.2 ping statistics --- 00:19:01.211 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:01.211 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:19:01.211 01:40:14 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:01.211 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:01.211 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:19:01.211 00:19:01.211 --- 10.0.0.1 ping statistics --- 00:19:01.211 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:01.211 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:19:01.211 01:40:14 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:01.211 01:40:14 -- nvmf/common.sh@410 -- # return 0 00:19:01.211 01:40:14 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:01.211 01:40:14 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:01.211 01:40:14 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:01.211 01:40:14 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:01.211 01:40:14 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:01.211 01:40:14 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:01.211 01:40:14 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:01.211 01:40:14 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:19:01.211 01:40:14 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:01.211 01:40:14 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:01.211 01:40:14 -- common/autotest_common.sh@10 -- # set +x 00:19:01.211 01:40:14 -- nvmf/common.sh@469 -- # nvmfpid=3789648 00:19:01.211 01:40:14 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:01.211 01:40:14 -- nvmf/common.sh@470 -- # waitforlisten 3789648 00:19:01.211 01:40:14 -- common/autotest_common.sh@819 -- # '[' -z 3789648 ']' 00:19:01.211 01:40:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:01.211 01:40:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:01.211 01:40:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:01.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:01.211 01:40:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:01.211 01:40:14 -- common/autotest_common.sh@10 -- # set +x 00:19:01.472 [2024-07-23 01:40:14.327964] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:19:01.472 [2024-07-23 01:40:14.328064] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:01.472 EAL: No free 2048 kB hugepages reported on node 1 00:19:01.472 [2024-07-23 01:40:14.398737] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:01.472 [2024-07-23 01:40:14.494040] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:01.472 [2024-07-23 01:40:14.494226] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:01.472 [2024-07-23 01:40:14.494247] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:01.472 [2024-07-23 01:40:14.494267] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:01.472 [2024-07-23 01:40:14.494370] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:01.472 [2024-07-23 01:40:14.494427] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:01.472 [2024-07-23 01:40:14.494487] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:01.472 [2024-07-23 01:40:14.494489] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:02.411 01:40:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:02.411 01:40:15 -- common/autotest_common.sh@852 -- # return 0 00:19:02.411 01:40:15 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:02.411 01:40:15 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:02.411 01:40:15 -- common/autotest_common.sh@10 -- # set +x 00:19:02.411 01:40:15 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:02.411 01:40:15 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:02.411 01:40:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:02.411 01:40:15 -- common/autotest_common.sh@10 -- # set +x 00:19:02.411 [2024-07-23 01:40:15.287196] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:02.411 01:40:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:02.411 01:40:15 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:02.411 01:40:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:02.411 01:40:15 -- common/autotest_common.sh@10 -- # set +x 00:19:02.411 Malloc0 00:19:02.411 01:40:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:02.411 01:40:15 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:02.411 01:40:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:02.411 01:40:15 -- common/autotest_common.sh@10 -- # set +x 00:19:02.411 01:40:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 
]] 00:19:02.411 01:40:15 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:02.411 01:40:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:02.411 01:40:15 -- common/autotest_common.sh@10 -- # set +x 00:19:02.411 01:40:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:02.411 01:40:15 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:02.411 01:40:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:02.411 01:40:15 -- common/autotest_common.sh@10 -- # set +x 00:19:02.411 [2024-07-23 01:40:15.340468] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:02.411 01:40:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:02.411 01:40:15 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:19:02.411 test case1: single bdev can't be used in multiple subsystems 00:19:02.411 01:40:15 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:19:02.411 01:40:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:02.411 01:40:15 -- common/autotest_common.sh@10 -- # set +x 00:19:02.411 01:40:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:02.411 01:40:15 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:19:02.411 01:40:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:02.411 01:40:15 -- common/autotest_common.sh@10 -- # set +x 00:19:02.411 01:40:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:02.411 01:40:15 -- target/nmic.sh@28 -- # nmic_status=0 00:19:02.411 01:40:15 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:19:02.411 01:40:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:02.411 01:40:15 -- common/autotest_common.sh@10 
-- # set +x 00:19:02.411 [2024-07-23 01:40:15.364344] bdev.c:7940:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:19:02.411 [2024-07-23 01:40:15.364373] subsystem.c:1819:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:19:02.411 [2024-07-23 01:40:15.364388] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:02.411 request: 00:19:02.411 { 00:19:02.411 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:19:02.411 "namespace": { 00:19:02.411 "bdev_name": "Malloc0" 00:19:02.411 }, 00:19:02.411 "method": "nvmf_subsystem_add_ns", 00:19:02.411 "req_id": 1 00:19:02.411 } 00:19:02.411 Got JSON-RPC error response 00:19:02.411 response: 00:19:02.411 { 00:19:02.411 "code": -32602, 00:19:02.411 "message": "Invalid parameters" 00:19:02.411 } 00:19:02.411 01:40:15 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:19:02.411 01:40:15 -- target/nmic.sh@29 -- # nmic_status=1 00:19:02.411 01:40:15 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:19:02.411 01:40:15 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:19:02.411 Adding namespace failed - expected result. 
00:19:02.411 01:40:15 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:19:02.411 test case2: host connect to nvmf target in multiple paths 00:19:02.411 01:40:15 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:19:02.411 01:40:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:02.411 01:40:15 -- common/autotest_common.sh@10 -- # set +x 00:19:02.411 [2024-07-23 01:40:15.372457] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:02.411 01:40:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:02.411 01:40:15 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:02.980 01:40:16 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:19:03.919 01:40:16 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:19:03.919 01:40:16 -- common/autotest_common.sh@1177 -- # local i=0 00:19:03.919 01:40:16 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:19:03.919 01:40:16 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:19:03.919 01:40:16 -- common/autotest_common.sh@1184 -- # sleep 2 00:19:05.833 01:40:18 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:19:05.833 01:40:18 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:19:05.833 01:40:18 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:19:05.833 01:40:18 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:19:05.833 01:40:18 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 
00:19:05.833 01:40:18 -- common/autotest_common.sh@1187 -- # return 0 00:19:05.833 01:40:18 -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:19:05.833 [global] 00:19:05.833 thread=1 00:19:05.833 invalidate=1 00:19:05.833 rw=write 00:19:05.833 time_based=1 00:19:05.833 runtime=1 00:19:05.833 ioengine=libaio 00:19:05.833 direct=1 00:19:05.833 bs=4096 00:19:05.833 iodepth=1 00:19:05.833 norandommap=0 00:19:05.833 numjobs=1 00:19:05.833 00:19:05.833 verify_dump=1 00:19:05.833 verify_backlog=512 00:19:05.833 verify_state_save=0 00:19:05.833 do_verify=1 00:19:05.833 verify=crc32c-intel 00:19:05.833 [job0] 00:19:05.833 filename=/dev/nvme0n1 00:19:05.833 Could not set queue depth (nvme0n1) 00:19:06.095 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:06.095 fio-3.35 00:19:06.095 Starting 1 thread 00:19:07.028 00:19:07.028 job0: (groupid=0, jobs=1): err= 0: pid=3790349: Tue Jul 23 01:40:20 2024 00:19:07.028 read: IOPS=1536, BW=6144KiB/s (6291kB/s)(6144KiB/1000msec) 00:19:07.028 slat (nsec): min=5484, max=60332, avg=13769.98, stdev=5388.12 00:19:07.028 clat (usec): min=270, max=606, avg=339.73, stdev=35.42 00:19:07.028 lat (usec): min=276, max=622, avg=353.50, stdev=37.20 00:19:07.028 clat percentiles (usec): 00:19:07.028 | 1.00th=[ 289], 5.00th=[ 302], 10.00th=[ 310], 20.00th=[ 322], 00:19:07.028 | 30.00th=[ 326], 40.00th=[ 334], 50.00th=[ 338], 60.00th=[ 343], 00:19:07.028 | 70.00th=[ 347], 80.00th=[ 351], 90.00th=[ 359], 95.00th=[ 371], 00:19:07.028 | 99.00th=[ 506], 99.50th=[ 562], 99.90th=[ 603], 99.95th=[ 611], 00:19:07.028 | 99.99th=[ 611] 00:19:07.028 write: IOPS=1818, BW=7272KiB/s (7447kB/s)(7272KiB/1000msec); 0 zone resets 00:19:07.028 slat (nsec): min=6722, max=65474, avg=16411.81, stdev=7821.94 00:19:07.028 clat (usec): min=175, max=2429, avg=226.38, stdev=57.96 00:19:07.028 lat (usec): min=182, max=2477, avg=242.79, 
stdev=60.64 00:19:07.028 clat percentiles (usec): 00:19:07.028 | 1.00th=[ 186], 5.00th=[ 192], 10.00th=[ 196], 20.00th=[ 206], 00:19:07.028 | 30.00th=[ 215], 40.00th=[ 221], 50.00th=[ 225], 60.00th=[ 229], 00:19:07.028 | 70.00th=[ 233], 80.00th=[ 239], 90.00th=[ 251], 95.00th=[ 265], 00:19:07.028 | 99.00th=[ 314], 99.50th=[ 347], 99.90th=[ 486], 99.95th=[ 2442], 00:19:07.029 | 99.99th=[ 2442] 00:19:07.029 bw ( KiB/s): min= 8192, max= 8192, per=100.00%, avg=8192.00, stdev= 0.00, samples=1 00:19:07.029 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:19:07.029 lat (usec) : 250=48.81%, 500=50.66%, 750=0.51% 00:19:07.029 lat (msec) : 4=0.03% 00:19:07.029 cpu : usr=4.60%, sys=6.50%, ctx=3355, majf=0, minf=2 00:19:07.029 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:07.029 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:07.029 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:07.029 issued rwts: total=1536,1818,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:07.029 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:07.029 00:19:07.029 Run status group 0 (all jobs): 00:19:07.029 READ: bw=6144KiB/s (6291kB/s), 6144KiB/s-6144KiB/s (6291kB/s-6291kB/s), io=6144KiB (6291kB), run=1000-1000msec 00:19:07.029 WRITE: bw=7272KiB/s (7447kB/s), 7272KiB/s-7272KiB/s (7447kB/s-7447kB/s), io=7272KiB (7447kB), run=1000-1000msec 00:19:07.029 00:19:07.029 Disk stats (read/write): 00:19:07.029 nvme0n1: ios=1507/1536, merge=0/0, ticks=518/294, in_queue=812, util=92.18% 00:19:07.029 01:40:20 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:07.286 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:19:07.286 01:40:20 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:07.286 01:40:20 -- common/autotest_common.sh@1198 -- # local i=0 00:19:07.286 01:40:20 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 
00:19:07.286 01:40:20 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:07.286 01:40:20 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:07.286 01:40:20 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:07.286 01:40:20 -- common/autotest_common.sh@1210 -- # return 0 00:19:07.286 01:40:20 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:19:07.286 01:40:20 -- target/nmic.sh@53 -- # nvmftestfini 00:19:07.286 01:40:20 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:07.286 01:40:20 -- nvmf/common.sh@116 -- # sync 00:19:07.286 01:40:20 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:07.286 01:40:20 -- nvmf/common.sh@119 -- # set +e 00:19:07.286 01:40:20 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:07.286 01:40:20 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:07.286 rmmod nvme_tcp 00:19:07.286 rmmod nvme_fabrics 00:19:07.286 rmmod nvme_keyring 00:19:07.286 01:40:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:07.286 01:40:20 -- nvmf/common.sh@123 -- # set -e 00:19:07.286 01:40:20 -- nvmf/common.sh@124 -- # return 0 00:19:07.286 01:40:20 -- nvmf/common.sh@477 -- # '[' -n 3789648 ']' 00:19:07.286 01:40:20 -- nvmf/common.sh@478 -- # killprocess 3789648 00:19:07.286 01:40:20 -- common/autotest_common.sh@926 -- # '[' -z 3789648 ']' 00:19:07.287 01:40:20 -- common/autotest_common.sh@930 -- # kill -0 3789648 00:19:07.287 01:40:20 -- common/autotest_common.sh@931 -- # uname 00:19:07.287 01:40:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:07.287 01:40:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3789648 00:19:07.287 01:40:20 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:07.287 01:40:20 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:07.287 01:40:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3789648' 00:19:07.287 killing process with pid 3789648 00:19:07.287 
01:40:20 -- common/autotest_common.sh@945 -- # kill 3789648 00:19:07.287 01:40:20 -- common/autotest_common.sh@950 -- # wait 3789648 00:19:07.547 01:40:20 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:07.547 01:40:20 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:07.547 01:40:20 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:07.547 01:40:20 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:07.547 01:40:20 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:07.547 01:40:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:07.547 01:40:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:07.547 01:40:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:10.084 01:40:22 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:10.085 00:19:10.085 real 0m10.487s 00:19:10.085 user 0m25.295s 00:19:10.085 sys 0m2.376s 00:19:10.085 01:40:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:10.085 01:40:22 -- common/autotest_common.sh@10 -- # set +x 00:19:10.085 ************************************ 00:19:10.085 END TEST nvmf_nmic 00:19:10.085 ************************************ 00:19:10.085 01:40:22 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:19:10.085 01:40:22 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:10.085 01:40:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:10.085 01:40:22 -- common/autotest_common.sh@10 -- # set +x 00:19:10.085 ************************************ 00:19:10.085 START TEST nvmf_fio_target 00:19:10.085 ************************************ 00:19:10.085 01:40:22 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:19:10.085 * Looking for test storage... 
00:19:10.085 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:10.085 01:40:22 -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:10.085 01:40:22 -- nvmf/common.sh@7 -- # uname -s 00:19:10.085 01:40:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:10.085 01:40:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:10.085 01:40:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:10.085 01:40:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:10.085 01:40:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:10.085 01:40:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:10.085 01:40:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:10.085 01:40:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:10.085 01:40:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:10.085 01:40:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:10.085 01:40:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:10.085 01:40:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:10.085 01:40:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:10.085 01:40:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:10.085 01:40:22 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:10.085 01:40:22 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:10.085 01:40:22 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:10.085 01:40:22 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:10.085 01:40:22 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:10.085 01:40:22 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.085 01:40:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.085 01:40:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.085 01:40:22 -- paths/export.sh@5 -- # export PATH 00:19:10.085 01:40:22 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.085 01:40:22 -- nvmf/common.sh@46 -- # : 0 00:19:10.085 01:40:22 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:10.085 01:40:22 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:10.085 01:40:22 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:10.085 01:40:22 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:10.085 01:40:22 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:10.085 01:40:22 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:10.085 01:40:22 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:10.085 01:40:22 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:10.085 01:40:22 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:10.085 01:40:22 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:10.085 01:40:22 -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:10.085 01:40:22 -- target/fio.sh@16 -- # nvmftestinit 00:19:10.085 01:40:22 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:10.085 01:40:22 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:10.085 01:40:22 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:10.085 01:40:22 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:10.085 01:40:22 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:10.085 01:40:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:10.085 01:40:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
00:19:10.085 01:40:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:10.085 01:40:22 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:10.085 01:40:22 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:10.085 01:40:22 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:10.085 01:40:22 -- common/autotest_common.sh@10 -- # set +x 00:19:11.987 01:40:24 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:11.987 01:40:24 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:11.987 01:40:24 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:11.987 01:40:24 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:11.987 01:40:24 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:11.987 01:40:24 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:11.987 01:40:24 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:11.987 01:40:24 -- nvmf/common.sh@294 -- # net_devs=() 00:19:11.987 01:40:24 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:11.987 01:40:24 -- nvmf/common.sh@295 -- # e810=() 00:19:11.987 01:40:24 -- nvmf/common.sh@295 -- # local -ga e810 00:19:11.987 01:40:24 -- nvmf/common.sh@296 -- # x722=() 00:19:11.987 01:40:24 -- nvmf/common.sh@296 -- # local -ga x722 00:19:11.987 01:40:24 -- nvmf/common.sh@297 -- # mlx=() 00:19:11.987 01:40:24 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:11.987 01:40:24 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:11.987 01:40:24 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:11.987 01:40:24 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:11.987 01:40:24 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:11.987 01:40:24 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:11.987 01:40:24 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:11.987 01:40:24 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:11.987 01:40:24 -- 
nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:11.987 01:40:24 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:11.987 01:40:24 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:11.987 01:40:24 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:11.987 01:40:24 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:11.987 01:40:24 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:19:11.987 01:40:24 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:19:11.987 01:40:24 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:19:11.987 01:40:24 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:19:11.987 01:40:24 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:11.987 01:40:24 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:11.987 01:40:24 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:11.987 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:11.987 01:40:24 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:11.987 01:40:24 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:11.987 01:40:24 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:11.987 01:40:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:11.987 01:40:24 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:11.987 01:40:24 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:11.987 01:40:24 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:11.987 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:11.987 01:40:24 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:11.987 01:40:24 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:11.987 01:40:24 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:11.987 01:40:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:11.987 01:40:24 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:11.987 01:40:24 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:11.987 
01:40:24 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:19:11.987 01:40:24 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:19:11.987 01:40:24 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:11.987 01:40:24 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:11.987 01:40:24 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:11.987 01:40:24 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:11.987 01:40:24 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:11.987 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:11.987 01:40:24 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:11.987 01:40:24 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:11.987 01:40:24 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:11.987 01:40:24 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:11.987 01:40:24 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:11.987 01:40:24 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:11.987 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:11.987 01:40:24 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:11.987 01:40:24 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:11.987 01:40:24 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:11.987 01:40:24 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:11.987 01:40:24 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:19:11.987 01:40:24 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:19:11.987 01:40:24 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:11.988 01:40:24 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:11.988 01:40:24 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:11.988 01:40:24 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:19:11.988 01:40:24 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:11.988 01:40:24 -- 
nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:11.988 01:40:24 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:19:11.988 01:40:24 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:11.988 01:40:24 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:11.988 01:40:24 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:19:11.988 01:40:24 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:19:11.988 01:40:24 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:19:11.988 01:40:24 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:11.988 01:40:24 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:11.988 01:40:24 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:11.988 01:40:24 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:19:11.988 01:40:24 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:11.988 01:40:24 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:11.988 01:40:24 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:11.988 01:40:24 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:19:11.988 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:11.988 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.158 ms 00:19:11.988 00:19:11.988 --- 10.0.0.2 ping statistics --- 00:19:11.988 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:11.988 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:19:11.988 01:40:24 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:11.988 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:11.988 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:19:11.988 00:19:11.988 --- 10.0.0.1 ping statistics --- 00:19:11.988 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:11.988 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:19:11.988 01:40:24 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:11.988 01:40:24 -- nvmf/common.sh@410 -- # return 0 00:19:11.988 01:40:24 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:11.988 01:40:24 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:11.988 01:40:24 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:11.988 01:40:24 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:11.988 01:40:24 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:11.988 01:40:24 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:11.988 01:40:24 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:11.988 01:40:24 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:19:11.988 01:40:24 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:11.988 01:40:24 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:11.988 01:40:24 -- common/autotest_common.sh@10 -- # set +x 00:19:11.988 01:40:24 -- nvmf/common.sh@469 -- # nvmfpid=3792521 00:19:11.988 01:40:24 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:11.988 01:40:24 -- nvmf/common.sh@470 -- # waitforlisten 3792521 00:19:11.988 01:40:24 -- common/autotest_common.sh@819 -- # '[' -z 3792521 ']' 00:19:11.988 01:40:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:11.988 01:40:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:11.988 01:40:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:11.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:11.988 01:40:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:11.988 01:40:24 -- common/autotest_common.sh@10 -- # set +x 00:19:11.988 [2024-07-23 01:40:24.894453] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:19:11.988 [2024-07-23 01:40:24.894521] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:11.988 EAL: No free 2048 kB hugepages reported on node 1 00:19:11.988 [2024-07-23 01:40:24.957992] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:11.988 [2024-07-23 01:40:25.041678] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:11.988 [2024-07-23 01:40:25.041817] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:11.988 [2024-07-23 01:40:25.041834] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:11.988 [2024-07-23 01:40:25.041847] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:11.988 [2024-07-23 01:40:25.041992] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:11.988 [2024-07-23 01:40:25.042058] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:11.988 [2024-07-23 01:40:25.042124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:11.988 [2024-07-23 01:40:25.042127] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:12.966 01:40:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:12.966 01:40:25 -- common/autotest_common.sh@852 -- # return 0 00:19:12.966 01:40:25 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:12.966 01:40:25 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:12.966 01:40:25 -- common/autotest_common.sh@10 -- # set +x 00:19:12.966 01:40:25 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:12.966 01:40:25 -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:13.224 [2024-07-23 01:40:26.080013] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:13.224 01:40:26 -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:13.482 01:40:26 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:19:13.482 01:40:26 -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:13.741 01:40:26 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:19:13.741 01:40:26 -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:14.000 01:40:26 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:19:14.000 01:40:26 -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:14.257 01:40:27 -- target/fio.sh@25 -- # 
raid_malloc_bdevs+=Malloc3 00:19:14.257 01:40:27 -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:19:14.257 01:40:27 -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:14.515 01:40:27 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:19:14.515 01:40:27 -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:14.773 01:40:27 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:19:14.773 01:40:27 -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:15.032 01:40:28 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:19:15.032 01:40:28 -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:19:15.289 01:40:28 -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:15.547 01:40:28 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:19:15.547 01:40:28 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:15.805 01:40:28 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:19:15.805 01:40:28 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:16.063 01:40:29 -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:16.321 [2024-07-23 01:40:29.265264] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:16.321 01:40:29 -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:19:16.579 01:40:29 -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:19:16.836 01:40:29 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:17.402 01:40:30 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:19:17.402 01:40:30 -- common/autotest_common.sh@1177 -- # local i=0 00:19:17.402 01:40:30 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:19:17.402 01:40:30 -- common/autotest_common.sh@1179 -- # [[ -n 4 ]] 00:19:17.402 01:40:30 -- common/autotest_common.sh@1180 -- # nvme_device_counter=4 00:19:17.402 01:40:30 -- common/autotest_common.sh@1184 -- # sleep 2 00:19:19.929 01:40:32 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:19:19.929 01:40:32 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:19:19.929 01:40:32 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:19:19.929 01:40:32 -- common/autotest_common.sh@1186 -- # nvme_devices=4 00:19:19.929 01:40:32 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:19:19.929 01:40:32 -- common/autotest_common.sh@1187 -- # return 0 00:19:19.929 01:40:32 -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:19:19.929 [global] 00:19:19.929 thread=1 00:19:19.929 invalidate=1 00:19:19.929 rw=write 00:19:19.929 time_based=1 00:19:19.929 runtime=1 00:19:19.929 ioengine=libaio 00:19:19.929 direct=1 00:19:19.929 bs=4096 00:19:19.929 
iodepth=1 00:19:19.929 norandommap=0 00:19:19.929 numjobs=1 00:19:19.929 00:19:19.929 verify_dump=1 00:19:19.929 verify_backlog=512 00:19:19.929 verify_state_save=0 00:19:19.929 do_verify=1 00:19:19.929 verify=crc32c-intel 00:19:19.929 [job0] 00:19:19.929 filename=/dev/nvme0n1 00:19:19.929 [job1] 00:19:19.929 filename=/dev/nvme0n2 00:19:19.929 [job2] 00:19:19.929 filename=/dev/nvme0n3 00:19:19.929 [job3] 00:19:19.929 filename=/dev/nvme0n4 00:19:19.929 Could not set queue depth (nvme0n1) 00:19:19.929 Could not set queue depth (nvme0n2) 00:19:19.929 Could not set queue depth (nvme0n3) 00:19:19.929 Could not set queue depth (nvme0n4) 00:19:19.929 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:19.929 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:19.929 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:19.929 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:19.929 fio-3.35 00:19:19.929 Starting 4 threads 00:19:20.862 00:19:20.862 job0: (groupid=0, jobs=1): err= 0: pid=3793628: Tue Jul 23 01:40:33 2024 00:19:20.862 read: IOPS=1171, BW=4687KiB/s (4800kB/s)(4692KiB/1001msec) 00:19:20.862 slat (nsec): min=5347, max=91821, avg=23706.35, stdev=10180.22 00:19:20.862 clat (usec): min=339, max=592, avg=423.81, stdev=46.68 00:19:20.862 lat (usec): min=348, max=609, avg=447.51, stdev=49.60 00:19:20.862 clat percentiles (usec): 00:19:20.862 | 1.00th=[ 347], 5.00th=[ 359], 10.00th=[ 367], 20.00th=[ 383], 00:19:20.862 | 30.00th=[ 404], 40.00th=[ 412], 50.00th=[ 420], 60.00th=[ 429], 00:19:20.862 | 70.00th=[ 437], 80.00th=[ 449], 90.00th=[ 494], 95.00th=[ 529], 00:19:20.862 | 99.00th=[ 553], 99.50th=[ 562], 99.90th=[ 578], 99.95th=[ 594], 00:19:20.862 | 99.99th=[ 594] 00:19:20.862 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 
0 zone resets 00:19:20.862 slat (nsec): min=7427, max=61718, avg=20020.52, stdev=9705.92 00:19:20.862 clat (usec): min=204, max=598, avg=278.43, stdev=49.47 00:19:20.862 lat (usec): min=217, max=635, avg=298.45, stdev=53.44 00:19:20.862 clat percentiles (usec): 00:19:20.862 | 1.00th=[ 221], 5.00th=[ 233], 10.00th=[ 237], 20.00th=[ 243], 00:19:20.862 | 30.00th=[ 247], 40.00th=[ 255], 50.00th=[ 269], 60.00th=[ 273], 00:19:20.862 | 70.00th=[ 285], 80.00th=[ 306], 90.00th=[ 338], 95.00th=[ 392], 00:19:20.862 | 99.00th=[ 453], 99.50th=[ 482], 99.90th=[ 562], 99.95th=[ 603], 00:19:20.862 | 99.99th=[ 603] 00:19:20.862 bw ( KiB/s): min= 6632, max= 6632, per=40.92%, avg=6632.00, stdev= 0.00, samples=1 00:19:20.862 iops : min= 1658, max= 1658, avg=1658.00, stdev= 0.00, samples=1 00:19:20.862 lat (usec) : 250=19.64%, 500=76.67%, 750=3.69% 00:19:20.862 cpu : usr=2.80%, sys=6.60%, ctx=2710, majf=0, minf=1 00:19:20.862 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:20.862 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:20.862 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:20.862 issued rwts: total=1173,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:20.862 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:20.862 job1: (groupid=0, jobs=1): err= 0: pid=3793629: Tue Jul 23 01:40:33 2024 00:19:20.862 read: IOPS=22, BW=91.0KiB/s (93.2kB/s)(92.0KiB/1011msec) 00:19:20.862 slat (nsec): min=15185, max=33783, avg=23512.04, stdev=8208.08 00:19:20.863 clat (usec): min=421, max=41434, avg=37428.00, stdev=11670.91 00:19:20.863 lat (usec): min=439, max=41452, avg=37451.51, stdev=11672.27 00:19:20.863 clat percentiles (usec): 00:19:20.863 | 1.00th=[ 420], 5.00th=[ 465], 10.00th=[40633], 20.00th=[41157], 00:19:20.863 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:20.863 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:19:20.863 | 
99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:19:20.863 | 99.99th=[41681] 00:19:20.863 write: IOPS=506, BW=2026KiB/s (2074kB/s)(2048KiB/1011msec); 0 zone resets 00:19:20.863 slat (nsec): min=8351, max=60094, avg=21561.37, stdev=10737.74 00:19:20.863 clat (usec): min=183, max=543, avg=264.13, stdev=54.65 00:19:20.863 lat (usec): min=197, max=584, avg=285.69, stdev=60.15 00:19:20.863 clat percentiles (usec): 00:19:20.863 | 1.00th=[ 192], 5.00th=[ 202], 10.00th=[ 208], 20.00th=[ 215], 00:19:20.863 | 30.00th=[ 223], 40.00th=[ 241], 50.00th=[ 249], 60.00th=[ 265], 00:19:20.863 | 70.00th=[ 285], 80.00th=[ 322], 90.00th=[ 347], 95.00th=[ 359], 00:19:20.863 | 99.00th=[ 408], 99.50th=[ 441], 99.90th=[ 545], 99.95th=[ 545], 00:19:20.863 | 99.99th=[ 545] 00:19:20.863 bw ( KiB/s): min= 4096, max= 4096, per=25.28%, avg=4096.00, stdev= 0.00, samples=1 00:19:20.863 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:20.863 lat (usec) : 250=50.47%, 500=45.42%, 750=0.19% 00:19:20.863 lat (msec) : 50=3.93% 00:19:20.863 cpu : usr=0.50%, sys=1.09%, ctx=536, majf=0, minf=1 00:19:20.863 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:20.863 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:20.863 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:20.863 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:20.863 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:20.863 job2: (groupid=0, jobs=1): err= 0: pid=3793630: Tue Jul 23 01:40:33 2024 00:19:20.863 read: IOPS=19, BW=79.7KiB/s (81.6kB/s)(80.0KiB/1004msec) 00:19:20.863 slat (nsec): min=15431, max=34100, avg=25080.40, stdev=8165.45 00:19:20.863 clat (usec): min=40796, max=41120, avg=40957.77, stdev=81.00 00:19:20.863 lat (usec): min=40830, max=41138, avg=40982.85, stdev=79.13 00:19:20.863 clat percentiles (usec): 00:19:20.863 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 
20.00th=[41157], 00:19:20.863 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:20.863 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:19:20.863 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:19:20.863 | 99.99th=[41157] 00:19:20.863 write: IOPS=509, BW=2040KiB/s (2089kB/s)(2048KiB/1004msec); 0 zone resets 00:19:20.863 slat (usec): min=8, max=13584, avg=54.20, stdev=599.29 00:19:20.863 clat (usec): min=199, max=481, avg=298.00, stdev=71.07 00:19:20.863 lat (usec): min=221, max=13962, avg=352.20, stdev=607.01 00:19:20.863 clat percentiles (usec): 00:19:20.863 | 1.00th=[ 204], 5.00th=[ 212], 10.00th=[ 217], 20.00th=[ 233], 00:19:20.863 | 30.00th=[ 247], 40.00th=[ 262], 50.00th=[ 273], 60.00th=[ 310], 00:19:20.863 | 70.00th=[ 334], 80.00th=[ 375], 90.00th=[ 404], 95.00th=[ 433], 00:19:20.863 | 99.00th=[ 469], 99.50th=[ 482], 99.90th=[ 482], 99.95th=[ 482], 00:19:20.863 | 99.99th=[ 482] 00:19:20.863 bw ( KiB/s): min= 4096, max= 4096, per=25.28%, avg=4096.00, stdev= 0.00, samples=1 00:19:20.863 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:20.863 lat (usec) : 250=32.33%, 500=63.91% 00:19:20.863 lat (msec) : 50=3.76% 00:19:20.863 cpu : usr=0.90%, sys=1.10%, ctx=535, majf=0, minf=2 00:19:20.863 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:20.863 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:20.863 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:20.863 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:20.863 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:20.863 job3: (groupid=0, jobs=1): err= 0: pid=3793631: Tue Jul 23 01:40:33 2024 00:19:20.863 read: IOPS=1364, BW=5459KiB/s (5590kB/s)(5464KiB/1001msec) 00:19:20.863 slat (nsec): min=6871, max=42808, avg=13828.37, stdev=5136.67 00:19:20.863 clat (usec): min=325, max=481, avg=370.42, stdev=20.78 
00:19:20.863 lat (usec): min=334, max=489, avg=384.25, stdev=24.01 00:19:20.863 clat percentiles (usec): 00:19:20.863 | 1.00th=[ 334], 5.00th=[ 338], 10.00th=[ 343], 20.00th=[ 351], 00:19:20.863 | 30.00th=[ 359], 40.00th=[ 367], 50.00th=[ 371], 60.00th=[ 375], 00:19:20.863 | 70.00th=[ 379], 80.00th=[ 388], 90.00th=[ 396], 95.00th=[ 404], 00:19:20.863 | 99.00th=[ 441], 99.50th=[ 453], 99.90th=[ 461], 99.95th=[ 482], 00:19:20.863 | 99.99th=[ 482] 00:19:20.863 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:19:20.863 slat (nsec): min=10323, max=66111, avg=21035.21, stdev=8496.78 00:19:20.863 clat (usec): min=212, max=531, avg=279.24, stdev=52.56 00:19:20.863 lat (usec): min=223, max=549, avg=300.27, stdev=57.28 00:19:20.863 clat percentiles (usec): 00:19:20.863 | 1.00th=[ 219], 5.00th=[ 225], 10.00th=[ 229], 20.00th=[ 239], 00:19:20.863 | 30.00th=[ 249], 40.00th=[ 255], 50.00th=[ 262], 60.00th=[ 273], 00:19:20.863 | 70.00th=[ 289], 80.00th=[ 314], 90.00th=[ 363], 95.00th=[ 396], 00:19:20.863 | 99.00th=[ 445], 99.50th=[ 457], 99.90th=[ 515], 99.95th=[ 529], 00:19:20.863 | 99.99th=[ 529] 00:19:20.863 bw ( KiB/s): min= 7064, max= 7064, per=43.59%, avg=7064.00, stdev= 0.00, samples=1 00:19:20.863 iops : min= 1766, max= 1766, avg=1766.00, stdev= 0.00, samples=1 00:19:20.863 lat (usec) : 250=16.99%, 500=82.91%, 750=0.10% 00:19:20.863 cpu : usr=4.00%, sys=6.80%, ctx=2903, majf=0, minf=1 00:19:20.863 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:20.863 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:20.863 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:20.863 issued rwts: total=1366,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:20.863 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:20.863 00:19:20.863 Run status group 0 (all jobs): 00:19:20.863 READ: bw=9.98MiB/s (10.5MB/s), 79.7KiB/s-5459KiB/s (81.6kB/s-5590kB/s), io=10.1MiB (10.6MB), 
run=1001-1011msec 00:19:20.863 WRITE: bw=15.8MiB/s (16.6MB/s), 2026KiB/s-6138KiB/s (2074kB/s-6285kB/s), io=16.0MiB (16.8MB), run=1001-1011msec 00:19:20.863 00:19:20.863 Disk stats (read/write): 00:19:20.863 nvme0n1: ios=1074/1231, merge=0/0, ticks=420/325, in_queue=745, util=86.77% 00:19:20.863 nvme0n2: ios=33/512, merge=0/0, ticks=671/125, in_queue=796, util=86.24% 00:19:20.863 nvme0n3: ios=40/512, merge=0/0, ticks=1641/129, in_queue=1770, util=97.48% 00:19:20.863 nvme0n4: ios=1047/1423, merge=0/0, ticks=1303/376, in_queue=1679, util=97.56% 00:19:20.863 01:40:33 -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:19:20.863 [global] 00:19:20.863 thread=1 00:19:20.863 invalidate=1 00:19:20.863 rw=randwrite 00:19:20.863 time_based=1 00:19:20.863 runtime=1 00:19:20.863 ioengine=libaio 00:19:20.863 direct=1 00:19:20.863 bs=4096 00:19:20.863 iodepth=1 00:19:20.863 norandommap=0 00:19:20.863 numjobs=1 00:19:20.863 00:19:20.863 verify_dump=1 00:19:20.863 verify_backlog=512 00:19:20.863 verify_state_save=0 00:19:20.863 do_verify=1 00:19:20.863 verify=crc32c-intel 00:19:20.863 [job0] 00:19:20.863 filename=/dev/nvme0n1 00:19:20.863 [job1] 00:19:20.863 filename=/dev/nvme0n2 00:19:20.863 [job2] 00:19:20.863 filename=/dev/nvme0n3 00:19:20.863 [job3] 00:19:20.863 filename=/dev/nvme0n4 00:19:20.863 Could not set queue depth (nvme0n1) 00:19:20.863 Could not set queue depth (nvme0n2) 00:19:20.863 Could not set queue depth (nvme0n3) 00:19:20.863 Could not set queue depth (nvme0n4) 00:19:21.121 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:21.121 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:21.121 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:21.121 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, 
(W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:21.121 fio-3.35 00:19:21.121 Starting 4 threads 00:19:22.495 00:19:22.495 job0: (groupid=0, jobs=1): err= 0: pid=3793865: Tue Jul 23 01:40:35 2024 00:19:22.495 read: IOPS=1426, BW=5706KiB/s (5843kB/s)(5712KiB/1001msec) 00:19:22.495 slat (nsec): min=5767, max=45204, avg=13820.25, stdev=5258.46 00:19:22.495 clat (usec): min=281, max=40825, avg=392.81, stdev=1072.03 00:19:22.495 lat (usec): min=288, max=40835, avg=406.63, stdev=1071.89 00:19:22.495 clat percentiles (usec): 00:19:22.495 | 1.00th=[ 306], 5.00th=[ 314], 10.00th=[ 322], 20.00th=[ 326], 00:19:22.495 | 30.00th=[ 334], 40.00th=[ 338], 50.00th=[ 347], 60.00th=[ 363], 00:19:22.495 | 70.00th=[ 375], 80.00th=[ 400], 90.00th=[ 437], 95.00th=[ 457], 00:19:22.495 | 99.00th=[ 529], 99.50th=[ 627], 99.90th=[ 930], 99.95th=[40633], 00:19:22.495 | 99.99th=[40633] 00:19:22.495 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:19:22.495 slat (nsec): min=6780, max=69379, avg=16069.55, stdev=7408.06 00:19:22.495 clat (usec): min=189, max=478, avg=247.80, stdev=37.14 00:19:22.495 lat (usec): min=197, max=515, avg=263.87, stdev=39.13 00:19:22.495 clat percentiles (usec): 00:19:22.495 | 1.00th=[ 198], 5.00th=[ 206], 10.00th=[ 215], 20.00th=[ 225], 00:19:22.495 | 30.00th=[ 229], 40.00th=[ 235], 50.00th=[ 239], 60.00th=[ 247], 00:19:22.495 | 70.00th=[ 253], 80.00th=[ 265], 90.00th=[ 285], 95.00th=[ 314], 00:19:22.495 | 99.00th=[ 404], 99.50th=[ 424], 99.90th=[ 465], 99.95th=[ 478], 00:19:22.495 | 99.99th=[ 478] 00:19:22.495 bw ( KiB/s): min= 8192, max= 8192, per=51.30%, avg=8192.00, stdev= 0.00, samples=1 00:19:22.495 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:19:22.495 lat (usec) : 250=34.18%, 500=65.22%, 750=0.51%, 1000=0.07% 00:19:22.495 lat (msec) : 50=0.03% 00:19:22.495 cpu : usr=3.50%, sys=6.20%, ctx=2964, majf=0, minf=2 00:19:22.495 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
>=64=0.0% 00:19:22.495 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:22.495 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:22.495 issued rwts: total=1428,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:22.495 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:22.495 job1: (groupid=0, jobs=1): err= 0: pid=3793866: Tue Jul 23 01:40:35 2024 00:19:22.495 read: IOPS=425, BW=1702KiB/s (1743kB/s)(1724KiB/1013msec) 00:19:22.495 slat (nsec): min=5271, max=70133, avg=22735.55, stdev=10986.22 00:19:22.495 clat (usec): min=319, max=41487, avg=1953.49, stdev=7417.88 00:19:22.495 lat (usec): min=349, max=41522, avg=1976.23, stdev=7419.05 00:19:22.495 clat percentiles (usec): 00:19:22.495 | 1.00th=[ 347], 5.00th=[ 379], 10.00th=[ 400], 20.00th=[ 420], 00:19:22.495 | 30.00th=[ 457], 40.00th=[ 498], 50.00th=[ 553], 60.00th=[ 619], 00:19:22.495 | 70.00th=[ 644], 80.00th=[ 676], 90.00th=[ 709], 95.00th=[ 766], 00:19:22.495 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:19:22.495 | 99.99th=[41681] 00:19:22.495 write: IOPS=505, BW=2022KiB/s (2070kB/s)(2048KiB/1013msec); 0 zone resets 00:19:22.495 slat (nsec): min=6513, max=74817, avg=17123.08, stdev=10337.76 00:19:22.495 clat (usec): min=188, max=496, avg=287.07, stdev=66.67 00:19:22.495 lat (usec): min=195, max=538, avg=304.19, stdev=68.75 00:19:22.495 clat percentiles (usec): 00:19:22.495 | 1.00th=[ 200], 5.00th=[ 212], 10.00th=[ 223], 20.00th=[ 235], 00:19:22.495 | 30.00th=[ 245], 40.00th=[ 251], 50.00th=[ 265], 60.00th=[ 277], 00:19:22.495 | 70.00th=[ 306], 80.00th=[ 363], 90.00th=[ 396], 95.00th=[ 416], 00:19:22.495 | 99.00th=[ 453], 99.50th=[ 474], 99.90th=[ 498], 99.95th=[ 498], 00:19:22.495 | 99.99th=[ 498] 00:19:22.495 bw ( KiB/s): min= 4096, max= 4096, per=25.65%, avg=4096.00, stdev= 0.00, samples=1 00:19:22.495 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:22.495 lat (usec) : 250=20.47%, 500=52.28%, 
750=24.60%, 1000=1.06% 00:19:22.495 lat (msec) : 50=1.59% 00:19:22.495 cpu : usr=1.09%, sys=1.78%, ctx=944, majf=0, minf=1 00:19:22.495 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:22.495 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:22.495 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:22.495 issued rwts: total=431,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:22.495 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:22.495 job2: (groupid=0, jobs=1): err= 0: pid=3793867: Tue Jul 23 01:40:35 2024 00:19:22.495 read: IOPS=1322, BW=5291KiB/s (5418kB/s)(5296KiB/1001msec) 00:19:22.495 slat (nsec): min=5740, max=38565, avg=13149.08, stdev=5144.09 00:19:22.495 clat (usec): min=353, max=525, avg=409.21, stdev=24.69 00:19:22.495 lat (usec): min=362, max=534, avg=422.36, stdev=26.80 00:19:22.495 clat percentiles (usec): 00:19:22.495 | 1.00th=[ 367], 5.00th=[ 375], 10.00th=[ 379], 20.00th=[ 388], 00:19:22.495 | 30.00th=[ 396], 40.00th=[ 404], 50.00th=[ 408], 60.00th=[ 412], 00:19:22.495 | 70.00th=[ 420], 80.00th=[ 429], 90.00th=[ 441], 95.00th=[ 453], 00:19:22.495 | 99.00th=[ 482], 99.50th=[ 494], 99.90th=[ 519], 99.95th=[ 529], 00:19:22.495 | 99.99th=[ 529] 00:19:22.495 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:19:22.495 slat (nsec): min=7038, max=55048, avg=16735.30, stdev=8405.30 00:19:22.495 clat (usec): min=197, max=460, avg=262.13, stdev=41.74 00:19:22.495 lat (usec): min=204, max=497, avg=278.87, stdev=47.33 00:19:22.495 clat percentiles (usec): 00:19:22.495 | 1.00th=[ 202], 5.00th=[ 210], 10.00th=[ 217], 20.00th=[ 225], 00:19:22.495 | 30.00th=[ 235], 40.00th=[ 245], 50.00th=[ 255], 60.00th=[ 269], 00:19:22.495 | 70.00th=[ 281], 80.00th=[ 293], 90.00th=[ 306], 95.00th=[ 334], 00:19:22.496 | 99.00th=[ 408], 99.50th=[ 424], 99.90th=[ 437], 99.95th=[ 461], 00:19:22.496 | 99.99th=[ 461] 00:19:22.496 bw ( KiB/s): min= 8192, max= 
8192, per=51.30%, avg=8192.00, stdev= 0.00, samples=1 00:19:22.496 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:19:22.496 lat (usec) : 250=24.23%, 500=75.63%, 750=0.14% 00:19:22.496 cpu : usr=4.10%, sys=5.20%, ctx=2860, majf=0, minf=1 00:19:22.496 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:22.496 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:22.496 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:22.496 issued rwts: total=1324,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:22.496 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:22.496 job3: (groupid=0, jobs=1): err= 0: pid=3793868: Tue Jul 23 01:40:35 2024 00:19:22.496 read: IOPS=50, BW=203KiB/s (208kB/s)(208KiB/1026msec) 00:19:22.496 slat (nsec): min=7646, max=37913, avg=14699.94, stdev=8875.94 00:19:22.496 clat (usec): min=347, max=42011, avg=16937.91, stdev=20260.83 00:19:22.496 lat (usec): min=356, max=42031, avg=16952.61, stdev=20266.15 00:19:22.496 clat percentiles (usec): 00:19:22.496 | 1.00th=[ 347], 5.00th=[ 363], 10.00th=[ 371], 20.00th=[ 375], 00:19:22.496 | 30.00th=[ 388], 40.00th=[ 404], 50.00th=[ 461], 60.00th=[40633], 00:19:22.496 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:19:22.496 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:22.496 | 99.99th=[42206] 00:19:22.496 write: IOPS=499, BW=1996KiB/s (2044kB/s)(2048KiB/1026msec); 0 zone resets 00:19:22.496 slat (nsec): min=6795, max=48572, avg=14098.27, stdev=7260.87 00:19:22.496 clat (usec): min=215, max=410, avg=263.85, stdev=33.36 00:19:22.496 lat (usec): min=225, max=443, avg=277.94, stdev=35.95 00:19:22.496 clat percentiles (usec): 00:19:22.496 | 1.00th=[ 225], 5.00th=[ 231], 10.00th=[ 237], 20.00th=[ 243], 00:19:22.496 | 30.00th=[ 245], 40.00th=[ 247], 50.00th=[ 253], 60.00th=[ 260], 00:19:22.496 | 70.00th=[ 273], 80.00th=[ 285], 90.00th=[ 302], 95.00th=[ 
326], 00:19:22.496 | 99.00th=[ 396], 99.50th=[ 400], 99.90th=[ 412], 99.95th=[ 412], 00:19:22.496 | 99.99th=[ 412] 00:19:22.496 bw ( KiB/s): min= 4096, max= 4096, per=25.65%, avg=4096.00, stdev= 0.00, samples=1 00:19:22.496 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:22.496 lat (usec) : 250=41.49%, 500=54.08%, 750=0.53%, 1000=0.18% 00:19:22.496 lat (msec) : 50=3.72% 00:19:22.496 cpu : usr=0.39%, sys=0.78%, ctx=567, majf=0, minf=1 00:19:22.496 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:22.496 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:22.496 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:22.496 issued rwts: total=52,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:22.496 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:22.496 00:19:22.496 Run status group 0 (all jobs): 00:19:22.496 READ: bw=12.3MiB/s (12.9MB/s), 203KiB/s-5706KiB/s (208kB/s-5843kB/s), io=12.6MiB (13.2MB), run=1001-1026msec 00:19:22.496 WRITE: bw=15.6MiB/s (16.4MB/s), 1996KiB/s-6138KiB/s (2044kB/s-6285kB/s), io=16.0MiB (16.8MB), run=1001-1026msec 00:19:22.496 00:19:22.496 Disk stats (read/write): 00:19:22.496 nvme0n1: ios=1074/1501, merge=0/0, ticks=451/361, in_queue=812, util=87.07% 00:19:22.496 nvme0n2: ios=476/512, merge=0/0, ticks=1023/133, in_queue=1156, util=98.07% 00:19:22.496 nvme0n3: ios=1024/1467, merge=0/0, ticks=409/365, in_queue=774, util=88.91% 00:19:22.496 nvme0n4: ios=69/512, merge=0/0, ticks=1595/131, in_queue=1726, util=98.31% 00:19:22.496 01:40:35 -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:19:22.496 [global] 00:19:22.496 thread=1 00:19:22.496 invalidate=1 00:19:22.496 rw=write 00:19:22.496 time_based=1 00:19:22.496 runtime=1 00:19:22.496 ioengine=libaio 00:19:22.496 direct=1 00:19:22.496 bs=4096 00:19:22.496 iodepth=128 00:19:22.496 norandommap=0 
00:19:22.496 numjobs=1 00:19:22.496 00:19:22.496 verify_dump=1 00:19:22.496 verify_backlog=512 00:19:22.496 verify_state_save=0 00:19:22.496 do_verify=1 00:19:22.496 verify=crc32c-intel 00:19:22.496 [job0] 00:19:22.496 filename=/dev/nvme0n1 00:19:22.496 [job1] 00:19:22.496 filename=/dev/nvme0n2 00:19:22.496 [job2] 00:19:22.496 filename=/dev/nvme0n3 00:19:22.496 [job3] 00:19:22.496 filename=/dev/nvme0n4 00:19:22.496 Could not set queue depth (nvme0n1) 00:19:22.496 Could not set queue depth (nvme0n2) 00:19:22.496 Could not set queue depth (nvme0n3) 00:19:22.496 Could not set queue depth (nvme0n4) 00:19:22.496 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:22.496 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:22.496 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:22.496 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:22.496 fio-3.35 00:19:22.496 Starting 4 threads 00:19:23.873 00:19:23.873 job0: (groupid=0, jobs=1): err= 0: pid=3794098: Tue Jul 23 01:40:36 2024 00:19:23.873 read: IOPS=2757, BW=10.8MiB/s (11.3MB/s)(10.8MiB/1005msec) 00:19:23.873 slat (usec): min=3, max=46075, avg=180.97, stdev=1288.74 00:19:23.873 clat (usec): min=3092, max=68858, avg=23440.63, stdev=11997.31 00:19:23.873 lat (usec): min=5536, max=68866, avg=23621.60, stdev=12055.85 00:19:23.873 clat percentiles (usec): 00:19:23.873 | 1.00th=[ 6128], 5.00th=[14091], 10.00th=[15926], 20.00th=[16712], 00:19:23.873 | 30.00th=[17171], 40.00th=[18220], 50.00th=[19530], 60.00th=[20841], 00:19:23.873 | 70.00th=[23987], 80.00th=[26870], 90.00th=[33162], 95.00th=[52691], 00:19:23.873 | 99.00th=[68682], 99.50th=[68682], 99.90th=[68682], 99.95th=[68682], 00:19:23.873 | 99.99th=[68682] 00:19:23.873 write: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec); 0 
zone resets 00:19:23.873 slat (usec): min=4, max=32242, avg=149.72, stdev=835.12 00:19:23.873 clat (usec): min=9808, max=50387, avg=18699.22, stdev=5768.96 00:19:23.873 lat (usec): min=9819, max=50411, avg=18848.94, stdev=5820.20 00:19:23.873 clat percentiles (usec): 00:19:23.873 | 1.00th=[11207], 5.00th=[11600], 10.00th=[12649], 20.00th=[13435], 00:19:23.873 | 30.00th=[14091], 40.00th=[15533], 50.00th=[18220], 60.00th=[19792], 00:19:23.873 | 70.00th=[22152], 80.00th=[23200], 90.00th=[24773], 95.00th=[30016], 00:19:23.873 | 99.00th=[36439], 99.50th=[36963], 99.90th=[37487], 99.95th=[37487], 00:19:23.873 | 99.99th=[50594] 00:19:23.873 bw ( KiB/s): min=10456, max=14120, per=20.46%, avg=12288.00, stdev=2590.84, samples=2 00:19:23.873 iops : min= 2614, max= 3530, avg=3072.00, stdev=647.71, samples=2 00:19:23.873 lat (msec) : 4=0.02%, 10=0.84%, 20=56.75%, 50=39.45%, 100=2.94% 00:19:23.873 cpu : usr=3.98%, sys=9.06%, ctx=311, majf=0, minf=11 00:19:23.873 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:19:23.873 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.873 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:23.873 issued rwts: total=2771,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:23.873 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:23.874 job1: (groupid=0, jobs=1): err= 0: pid=3794099: Tue Jul 23 01:40:36 2024 00:19:23.874 read: IOPS=4548, BW=17.8MiB/s (18.6MB/s)(18.0MiB/1013msec) 00:19:23.874 slat (usec): min=3, max=12099, avg=104.97, stdev=639.73 00:19:23.874 clat (usec): min=5003, max=46808, avg=14189.94, stdev=6120.82 00:19:23.874 lat (usec): min=5248, max=51128, avg=14294.90, stdev=6165.51 00:19:23.874 clat percentiles (usec): 00:19:23.874 | 1.00th=[ 7046], 5.00th=[ 8160], 10.00th=[ 8848], 20.00th=[10028], 00:19:23.874 | 30.00th=[10552], 40.00th=[11600], 50.00th=[12780], 60.00th=[13435], 00:19:23.874 | 70.00th=[15008], 80.00th=[17433], 
90.00th=[20579], 95.00th=[24511], 00:19:23.874 | 99.00th=[40633], 99.50th=[45351], 99.90th=[46924], 99.95th=[46924], 00:19:23.874 | 99.99th=[46924] 00:19:23.874 write: IOPS=4807, BW=18.8MiB/s (19.7MB/s)(19.0MiB/1013msec); 0 zone resets 00:19:23.874 slat (usec): min=3, max=10892, avg=93.66, stdev=581.40 00:19:23.874 clat (usec): min=2570, max=45167, avg=12860.23, stdev=7503.62 00:19:23.874 lat (usec): min=2591, max=45189, avg=12953.89, stdev=7558.57 00:19:23.874 clat percentiles (usec): 00:19:23.874 | 1.00th=[ 3916], 5.00th=[ 5538], 10.00th=[ 6325], 20.00th=[ 7963], 00:19:23.874 | 30.00th=[ 8979], 40.00th=[10159], 50.00th=[10814], 60.00th=[11863], 00:19:23.874 | 70.00th=[13304], 80.00th=[15139], 90.00th=[21627], 95.00th=[30278], 00:19:23.874 | 99.00th=[40633], 99.50th=[41157], 99.90th=[45351], 99.95th=[45351], 00:19:23.874 | 99.99th=[45351] 00:19:23.874 bw ( KiB/s): min=17168, max=20776, per=31.60%, avg=18972.00, stdev=2551.24, samples=2 00:19:23.874 iops : min= 4292, max= 5194, avg=4743.00, stdev=637.81, samples=2 00:19:23.874 lat (msec) : 4=0.66%, 10=28.52%, 20=59.20%, 50=11.62% 00:19:23.874 cpu : usr=7.61%, sys=11.56%, ctx=348, majf=0, minf=13 00:19:23.874 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:19:23.874 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.874 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:23.874 issued rwts: total=4608,4870,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:23.874 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:23.874 job2: (groupid=0, jobs=1): err= 0: pid=3794100: Tue Jul 23 01:40:36 2024 00:19:23.874 read: IOPS=3044, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1009msec) 00:19:23.874 slat (usec): min=2, max=23123, avg=139.31, stdev=921.77 00:19:23.874 clat (usec): min=6907, max=50920, avg=18155.52, stdev=7423.40 00:19:23.874 lat (usec): min=6911, max=50939, avg=18294.82, stdev=7485.59 00:19:23.874 clat percentiles (usec): 00:19:23.874 | 
1.00th=[ 8979], 5.00th=[10421], 10.00th=[12125], 20.00th=[12518], 00:19:23.874 | 30.00th=[13829], 40.00th=[15926], 50.00th=[16188], 60.00th=[17171], 00:19:23.874 | 70.00th=[19792], 80.00th=[20841], 90.00th=[25035], 95.00th=[38536], 00:19:23.874 | 99.00th=[43779], 99.50th=[49021], 99.90th=[49021], 99.95th=[49021], 00:19:23.874 | 99.99th=[51119] 00:19:23.874 write: IOPS=3535, BW=13.8MiB/s (14.5MB/s)(13.9MiB/1009msec); 0 zone resets 00:19:23.874 slat (usec): min=3, max=14620, avg=150.04, stdev=842.08 00:19:23.874 clat (usec): min=6175, max=87999, avg=20166.94, stdev=14856.23 00:19:23.874 lat (usec): min=6180, max=88011, avg=20316.98, stdev=14950.78 00:19:23.874 clat percentiles (usec): 00:19:23.874 | 1.00th=[ 6390], 5.00th=[ 8455], 10.00th=[11600], 20.00th=[12125], 00:19:23.874 | 30.00th=[12518], 40.00th=[12911], 50.00th=[15008], 60.00th=[17695], 00:19:23.874 | 70.00th=[21627], 80.00th=[22938], 90.00th=[31589], 95.00th=[48497], 00:19:23.874 | 99.00th=[86508], 99.50th=[87557], 99.90th=[87557], 99.95th=[87557], 00:19:23.874 | 99.99th=[87557] 00:19:23.874 bw ( KiB/s): min=11136, max=16384, per=22.92%, avg=13760.00, stdev=3710.90, samples=2 00:19:23.874 iops : min= 2784, max= 4096, avg=3440.00, stdev=927.72, samples=2 00:19:23.874 lat (msec) : 10=5.38%, 20=61.85%, 50=30.14%, 100=2.64% 00:19:23.874 cpu : usr=4.66%, sys=5.65%, ctx=332, majf=0, minf=13 00:19:23.874 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:19:23.874 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.874 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:23.874 issued rwts: total=3072,3567,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:23.874 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:23.874 job3: (groupid=0, jobs=1): err= 0: pid=3794101: Tue Jul 23 01:40:36 2024 00:19:23.874 read: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec) 00:19:23.874 slat (usec): min=3, max=7825, avg=134.94, stdev=686.29 
00:19:23.874 clat (usec): min=10776, max=29537, avg=17654.95, stdev=3784.64 00:19:23.874 lat (usec): min=10786, max=30463, avg=17789.89, stdev=3843.63 00:19:23.874 clat percentiles (usec): 00:19:23.874 | 1.00th=[11207], 5.00th=[11863], 10.00th=[13435], 20.00th=[13960], 00:19:23.874 | 30.00th=[14877], 40.00th=[16909], 50.00th=[17433], 60.00th=[17957], 00:19:23.874 | 70.00th=[18744], 80.00th=[21365], 90.00th=[23200], 95.00th=[24773], 00:19:23.874 | 99.00th=[26084], 99.50th=[27657], 99.90th=[28967], 99.95th=[29230], 00:19:23.874 | 99.99th=[29492] 00:19:23.874 write: IOPS=3679, BW=14.4MiB/s (15.1MB/s)(14.4MiB/1005msec); 0 zone resets 00:19:23.874 slat (usec): min=4, max=11512, avg=127.54, stdev=762.18 00:19:23.874 clat (usec): min=3148, max=33274, avg=17233.23, stdev=3569.45 00:19:23.874 lat (usec): min=5623, max=33295, avg=17360.77, stdev=3633.21 00:19:23.874 clat percentiles (usec): 00:19:23.874 | 1.00th=[ 8586], 5.00th=[11600], 10.00th=[13173], 20.00th=[14353], 00:19:23.874 | 30.00th=[15533], 40.00th=[16188], 50.00th=[16450], 60.00th=[17433], 00:19:23.874 | 70.00th=[19006], 80.00th=[20317], 90.00th=[22414], 95.00th=[23462], 00:19:23.874 | 99.00th=[24773], 99.50th=[26346], 99.90th=[28967], 99.95th=[31327], 00:19:23.874 | 99.99th=[33162] 00:19:23.874 bw ( KiB/s): min=12288, max=16384, per=23.87%, avg=14336.00, stdev=2896.31, samples=2 00:19:23.874 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:19:23.874 lat (msec) : 4=0.01%, 10=1.13%, 20=74.33%, 50=24.53% 00:19:23.874 cpu : usr=7.37%, sys=7.47%, ctx=299, majf=0, minf=13 00:19:23.874 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:19:23.874 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.874 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:23.874 issued rwts: total=3584,3698,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:23.874 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:23.874 00:19:23.874 
Run status group 0 (all jobs): 00:19:23.874 READ: bw=54.1MiB/s (56.7MB/s), 10.8MiB/s-17.8MiB/s (11.3MB/s-18.6MB/s), io=54.8MiB (57.5MB), run=1005-1013msec 00:19:23.874 WRITE: bw=58.6MiB/s (61.5MB/s), 11.9MiB/s-18.8MiB/s (12.5MB/s-19.7MB/s), io=59.4MiB (62.3MB), run=1005-1013msec 00:19:23.874 00:19:23.874 Disk stats (read/write): 00:19:23.874 nvme0n1: ios=2161/2560, merge=0/0, ticks=16322/15076, in_queue=31398, util=98.10% 00:19:23.874 nvme0n2: ios=3634/4095, merge=0/0, ticks=30957/36978, in_queue=67935, util=98.07% 00:19:23.874 nvme0n3: ios=3072/3159, merge=0/0, ticks=30080/24654, in_queue=54734, util=88.82% 00:19:23.874 nvme0n4: ios=3072/3103, merge=0/0, ticks=20858/21707, in_queue=42565, util=89.57% 00:19:23.874 01:40:36 -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:19:23.874 [global] 00:19:23.874 thread=1 00:19:23.874 invalidate=1 00:19:23.874 rw=randwrite 00:19:23.874 time_based=1 00:19:23.874 runtime=1 00:19:23.874 ioengine=libaio 00:19:23.874 direct=1 00:19:23.874 bs=4096 00:19:23.874 iodepth=128 00:19:23.874 norandommap=0 00:19:23.874 numjobs=1 00:19:23.874 00:19:23.874 verify_dump=1 00:19:23.874 verify_backlog=512 00:19:23.874 verify_state_save=0 00:19:23.874 do_verify=1 00:19:23.874 verify=crc32c-intel 00:19:23.874 [job0] 00:19:23.874 filename=/dev/nvme0n1 00:19:23.874 [job1] 00:19:23.874 filename=/dev/nvme0n2 00:19:23.874 [job2] 00:19:23.874 filename=/dev/nvme0n3 00:19:23.875 [job3] 00:19:23.875 filename=/dev/nvme0n4 00:19:23.875 Could not set queue depth (nvme0n1) 00:19:23.875 Could not set queue depth (nvme0n2) 00:19:23.875 Could not set queue depth (nvme0n3) 00:19:23.875 Could not set queue depth (nvme0n4) 00:19:24.134 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:24.134 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 
00:19:24.134 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:24.134 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:24.134 fio-3.35 00:19:24.134 Starting 4 threads 00:19:25.517 00:19:25.517 job0: (groupid=0, jobs=1): err= 0: pid=3794340: Tue Jul 23 01:40:38 2024 00:19:25.517 read: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec) 00:19:25.517 slat (usec): min=3, max=8796, avg=162.67, stdev=768.83 00:19:25.517 clat (usec): min=8158, max=38730, avg=21456.22, stdev=6230.31 00:19:25.517 lat (usec): min=8711, max=38745, avg=21618.90, stdev=6234.46 00:19:25.517 clat percentiles (usec): 00:19:25.517 | 1.00th=[ 9503], 5.00th=[11338], 10.00th=[15533], 20.00th=[17957], 00:19:25.517 | 30.00th=[19006], 40.00th=[19006], 50.00th=[19268], 60.00th=[20055], 00:19:25.517 | 70.00th=[22676], 80.00th=[26084], 90.00th=[31851], 95.00th=[33162], 00:19:25.517 | 99.00th=[38011], 99.50th=[38536], 99.90th=[38536], 99.95th=[38536], 00:19:25.517 | 99.99th=[38536] 00:19:25.518 write: IOPS=3456, BW=13.5MiB/s (14.2MB/s)(13.5MiB/1003msec); 0 zone resets 00:19:25.518 slat (usec): min=4, max=5836, avg=135.55, stdev=649.09 00:19:25.518 clat (usec): min=665, max=37085, avg=17502.59, stdev=5338.85 00:19:25.518 lat (usec): min=5253, max=37094, avg=17638.14, stdev=5339.37 00:19:25.518 clat percentiles (usec): 00:19:25.518 | 1.00th=[ 7963], 5.00th=[11469], 10.00th=[13304], 20.00th=[14615], 00:19:25.518 | 30.00th=[15139], 40.00th=[15401], 50.00th=[15664], 60.00th=[16581], 00:19:25.518 | 70.00th=[17957], 80.00th=[21365], 90.00th=[23462], 95.00th=[28705], 00:19:25.518 | 99.00th=[36439], 99.50th=[36963], 99.90th=[36963], 99.95th=[36963], 00:19:25.518 | 99.99th=[36963] 00:19:25.518 bw ( KiB/s): min=12288, max=14424, per=23.23%, avg=13356.00, stdev=1510.38, samples=2 00:19:25.518 iops : min= 3072, max= 3606, avg=3339.00, stdev=377.60, samples=2 00:19:25.518 lat (usec) : 
750=0.02% 00:19:25.518 lat (msec) : 10=2.10%, 20=66.72%, 50=31.17% 00:19:25.518 cpu : usr=3.59%, sys=6.19%, ctx=337, majf=0, minf=9 00:19:25.518 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:19:25.518 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:25.518 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:25.518 issued rwts: total=3072,3467,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:25.518 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:25.518 job1: (groupid=0, jobs=1): err= 0: pid=3794345: Tue Jul 23 01:40:38 2024 00:19:25.518 read: IOPS=3458, BW=13.5MiB/s (14.2MB/s)(13.6MiB/1007msec) 00:19:25.518 slat (usec): min=2, max=8678, avg=134.06, stdev=784.29 00:19:25.518 clat (usec): min=692, max=34776, avg=16504.55, stdev=4967.76 00:19:25.518 lat (usec): min=6415, max=34789, avg=16638.61, stdev=5017.43 00:19:25.518 clat percentiles (usec): 00:19:25.518 | 1.00th=[ 7177], 5.00th=[ 9896], 10.00th=[10028], 20.00th=[11863], 00:19:25.518 | 30.00th=[13173], 40.00th=[15008], 50.00th=[16319], 60.00th=[17695], 00:19:25.518 | 70.00th=[19006], 80.00th=[20055], 90.00th=[22938], 95.00th=[25822], 00:19:25.518 | 99.00th=[28967], 99.50th=[29230], 99.90th=[34341], 99.95th=[34866], 00:19:25.518 | 99.99th=[34866] 00:19:25.518 write: IOPS=3559, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1007msec); 0 zone resets 00:19:25.518 slat (usec): min=3, max=10027, avg=140.51, stdev=766.09 00:19:25.518 clat (usec): min=5747, max=47786, avg=19517.12, stdev=6937.60 00:19:25.518 lat (usec): min=5756, max=47800, avg=19657.63, stdev=6990.52 00:19:25.518 clat percentiles (usec): 00:19:25.518 | 1.00th=[ 8848], 5.00th=[10552], 10.00th=[12125], 20.00th=[14091], 00:19:25.518 | 30.00th=[16450], 40.00th=[18220], 50.00th=[18482], 60.00th=[19268], 00:19:25.518 | 70.00th=[19792], 80.00th=[22676], 90.00th=[27395], 95.00th=[33424], 00:19:25.518 | 99.00th=[46400], 99.50th=[46924], 99.90th=[47973], 99.95th=[47973], 00:19:25.518 
| 99.99th=[47973] 00:19:25.518 bw ( KiB/s): min=13176, max=15496, per=24.93%, avg=14336.00, stdev=1640.49, samples=2 00:19:25.518 iops : min= 3294, max= 3874, avg=3584.00, stdev=410.12, samples=2 00:19:25.518 lat (usec) : 750=0.01% 00:19:25.518 lat (msec) : 10=5.36%, 20=70.67%, 50=23.96% 00:19:25.518 cpu : usr=4.47%, sys=6.56%, ctx=364, majf=0, minf=11 00:19:25.518 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:19:25.518 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:25.518 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:25.518 issued rwts: total=3483,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:25.518 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:25.518 job2: (groupid=0, jobs=1): err= 0: pid=3794364: Tue Jul 23 01:40:38 2024 00:19:25.518 read: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec) 00:19:25.518 slat (usec): min=2, max=9635, avg=133.93, stdev=767.79 00:19:25.518 clat (usec): min=9020, max=34291, avg=16933.27, stdev=3823.12 00:19:25.518 lat (usec): min=9041, max=34323, avg=17067.20, stdev=3866.58 00:19:25.518 clat percentiles (usec): 00:19:25.518 | 1.00th=[10290], 5.00th=[11994], 10.00th=[12780], 20.00th=[13173], 00:19:25.518 | 30.00th=[14222], 40.00th=[15533], 50.00th=[16581], 60.00th=[17695], 00:19:25.518 | 70.00th=[18482], 80.00th=[19530], 90.00th=[22414], 95.00th=[23462], 00:19:25.518 | 99.00th=[28705], 99.50th=[28967], 99.90th=[29230], 99.95th=[29230], 00:19:25.518 | 99.99th=[34341] 00:19:25.518 write: IOPS=3818, BW=14.9MiB/s (15.6MB/s)(15.0MiB/1006msec); 0 zone resets 00:19:25.518 slat (usec): min=3, max=9920, avg=126.23, stdev=748.32 00:19:25.518 clat (usec): min=1710, max=34043, avg=17177.32, stdev=4737.92 00:19:25.518 lat (usec): min=6213, max=34051, avg=17303.56, stdev=4799.24 00:19:25.518 clat percentiles (usec): 00:19:25.518 | 1.00th=[ 6652], 5.00th=[11207], 10.00th=[11731], 20.00th=[12518], 00:19:25.518 | 30.00th=[14091], 
40.00th=[16581], 50.00th=[17695], 60.00th=[18482], 00:19:25.518 | 70.00th=[19006], 80.00th=[19530], 90.00th=[22414], 95.00th=[26084], 00:19:25.518 | 99.00th=[31327], 99.50th=[33162], 99.90th=[33817], 99.95th=[33817], 00:19:25.518 | 99.99th=[33817] 00:19:25.518 bw ( KiB/s): min=13840, max=15864, per=25.83%, avg=14852.00, stdev=1431.18, samples=2 00:19:25.518 iops : min= 3460, max= 3966, avg=3713.00, stdev=357.80, samples=2 00:19:25.518 lat (msec) : 2=0.01%, 10=2.15%, 20=79.97%, 50=17.86% 00:19:25.518 cpu : usr=5.47%, sys=5.87%, ctx=330, majf=0, minf=9 00:19:25.518 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:19:25.518 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:25.518 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:25.518 issued rwts: total=3584,3841,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:25.518 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:25.518 job3: (groupid=0, jobs=1): err= 0: pid=3794375: Tue Jul 23 01:40:38 2024 00:19:25.518 read: IOPS=3235, BW=12.6MiB/s (13.3MB/s)(12.7MiB/1003msec) 00:19:25.518 slat (usec): min=3, max=12258, avg=164.77, stdev=833.73 00:19:25.518 clat (usec): min=734, max=39768, avg=20248.91, stdev=4776.02 00:19:25.518 lat (usec): min=3530, max=39779, avg=20413.68, stdev=4825.55 00:19:25.518 clat percentiles (usec): 00:19:25.518 | 1.00th=[ 8225], 5.00th=[14484], 10.00th=[15664], 20.00th=[16581], 00:19:25.518 | 30.00th=[17171], 40.00th=[17957], 50.00th=[19530], 60.00th=[21627], 00:19:25.518 | 70.00th=[22938], 80.00th=[23462], 90.00th=[25035], 95.00th=[27919], 00:19:25.518 | 99.00th=[33817], 99.50th=[39584], 99.90th=[39584], 99.95th=[39584], 00:19:25.518 | 99.99th=[39584] 00:19:25.518 write: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec); 0 zone resets 00:19:25.518 slat (usec): min=4, max=5111, avg=119.45, stdev=521.13 00:19:25.518 clat (usec): min=9265, max=32977, avg=16911.88, stdev=4037.78 00:19:25.518 lat (usec): 
min=10031, max=32984, avg=17031.33, stdev=4070.46 00:19:25.518 clat percentiles (usec): 00:19:25.518 | 1.00th=[11600], 5.00th=[12256], 10.00th=[12911], 20.00th=[13566], 00:19:25.518 | 30.00th=[13960], 40.00th=[14615], 50.00th=[15795], 60.00th=[18220], 00:19:25.518 | 70.00th=[18744], 80.00th=[19268], 90.00th=[21627], 95.00th=[25560], 00:19:25.518 | 99.00th=[30016], 99.50th=[31065], 99.90th=[32113], 99.95th=[32900], 00:19:25.518 | 99.99th=[32900] 00:19:25.518 bw ( KiB/s): min=13552, max=15120, per=24.93%, avg=14336.00, stdev=1108.74, samples=2 00:19:25.518 iops : min= 3388, max= 3780, avg=3584.00, stdev=277.19, samples=2 00:19:25.518 lat (usec) : 750=0.01% 00:19:25.518 lat (msec) : 4=0.06%, 10=0.57%, 20=69.20%, 50=30.15% 00:19:25.518 cpu : usr=4.59%, sys=6.99%, ctx=393, majf=0, minf=21 00:19:25.518 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:19:25.518 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:25.518 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:25.518 issued rwts: total=3245,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:25.518 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:25.518 00:19:25.518 Run status group 0 (all jobs): 00:19:25.518 READ: bw=51.9MiB/s (54.4MB/s), 12.0MiB/s-13.9MiB/s (12.5MB/s-14.6MB/s), io=52.3MiB (54.8MB), run=1003-1007msec 00:19:25.518 WRITE: bw=56.2MiB/s (58.9MB/s), 13.5MiB/s-14.9MiB/s (14.2MB/s-15.6MB/s), io=56.5MiB (59.3MB), run=1003-1007msec 00:19:25.518 00:19:25.518 Disk stats (read/write): 00:19:25.518 nvme0n1: ios=2580/2797, merge=0/0, ticks=14258/11522, in_queue=25780, util=87.37% 00:19:25.518 nvme0n2: ios=3016/3072, merge=0/0, ticks=22863/28851, in_queue=51714, util=93.09% 00:19:25.518 nvme0n3: ios=3047/3072, merge=0/0, ticks=25252/24665, in_queue=49917, util=98.53% 00:19:25.518 nvme0n4: ios=2762/3072, merge=0/0, ticks=18524/16029, in_queue=34553, util=98.52% 00:19:25.518 01:40:38 -- target/fio.sh@55 -- # sync 
00:19:25.518 01:40:38 -- target/fio.sh@59 -- # fio_pid=3794572 00:19:25.518 01:40:38 -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:19:25.518 01:40:38 -- target/fio.sh@61 -- # sleep 3 00:19:25.518 [global] 00:19:25.518 thread=1 00:19:25.518 invalidate=1 00:19:25.518 rw=read 00:19:25.518 time_based=1 00:19:25.518 runtime=10 00:19:25.518 ioengine=libaio 00:19:25.518 direct=1 00:19:25.518 bs=4096 00:19:25.518 iodepth=1 00:19:25.518 norandommap=1 00:19:25.518 numjobs=1 00:19:25.518 00:19:25.518 [job0] 00:19:25.518 filename=/dev/nvme0n1 00:19:25.518 [job1] 00:19:25.518 filename=/dev/nvme0n2 00:19:25.518 [job2] 00:19:25.518 filename=/dev/nvme0n3 00:19:25.518 [job3] 00:19:25.518 filename=/dev/nvme0n4 00:19:25.518 Could not set queue depth (nvme0n1) 00:19:25.518 Could not set queue depth (nvme0n2) 00:19:25.518 Could not set queue depth (nvme0n3) 00:19:25.518 Could not set queue depth (nvme0n4) 00:19:25.518 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:25.518 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:25.518 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:25.518 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:25.518 fio-3.35 00:19:25.518 Starting 4 threads 00:19:28.799 01:40:41 -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:19:28.799 01:40:41 -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:19:28.799 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=323584, buflen=4096 00:19:28.799 fio: pid=3794698, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:28.799 01:40:41 -- 
target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:28.799 01:40:41 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:19:28.799 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=684032, buflen=4096 00:19:28.799 fio: pid=3794697, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:29.057 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=3571712, buflen=4096 00:19:29.057 fio: pid=3794695, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:29.057 01:40:42 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:29.057 01:40:42 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:19:29.316 01:40:42 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:29.316 01:40:42 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:19:29.316 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=29487104, buflen=4096 00:19:29.316 fio: pid=3794696, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:29.316 00:19:29.316 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3794695: Tue Jul 23 01:40:42 2024 00:19:29.316 read: IOPS=255, BW=1022KiB/s (1047kB/s)(3488KiB/3412msec) 00:19:29.316 slat (nsec): min=6102, max=38781, avg=8778.90, stdev=5958.84 00:19:29.316 clat (usec): min=338, max=42014, avg=3875.92, stdev=11344.35 00:19:29.316 lat (usec): min=345, max=42029, avg=3884.70, stdev=11348.63 00:19:29.316 clat percentiles (usec): 00:19:29.316 | 1.00th=[ 367], 5.00th=[ 371], 10.00th=[ 379], 20.00th=[ 388], 00:19:29.316 | 30.00th=[ 396], 40.00th=[ 400], 50.00th=[ 404], 60.00th=[ 408], 
00:19:29.316 | 70.00th=[ 412], 80.00th=[ 424], 90.00th=[ 457], 95.00th=[41157], 00:19:29.316 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:19:29.316 | 99.99th=[42206] 00:19:29.316 bw ( KiB/s): min= 96, max= 136, per=1.18%, avg=106.67, stdev=17.28, samples=6 00:19:29.316 iops : min= 24, max= 34, avg=26.67, stdev= 4.32, samples=6 00:19:29.316 lat (usec) : 500=90.49%, 750=0.69% 00:19:29.316 lat (msec) : 4=0.11%, 20=0.11%, 50=8.48% 00:19:29.316 cpu : usr=0.06%, sys=0.41%, ctx=874, majf=0, minf=1 00:19:29.316 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:29.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:29.316 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:29.316 issued rwts: total=873,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:29.316 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:29.316 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3794696: Tue Jul 23 01:40:42 2024 00:19:29.316 read: IOPS=1942, BW=7768KiB/s (7954kB/s)(28.1MiB/3707msec) 00:19:29.316 slat (usec): min=4, max=18520, avg=17.81, stdev=271.17 00:19:29.316 clat (usec): min=270, max=45767, avg=490.64, stdev=2177.86 00:19:29.316 lat (usec): min=276, max=45773, avg=508.44, stdev=2195.24 00:19:29.316 clat percentiles (usec): 00:19:29.316 | 1.00th=[ 281], 5.00th=[ 289], 10.00th=[ 302], 20.00th=[ 326], 00:19:29.316 | 30.00th=[ 347], 40.00th=[ 363], 50.00th=[ 375], 60.00th=[ 388], 00:19:29.316 | 70.00th=[ 400], 80.00th=[ 412], 90.00th=[ 429], 95.00th=[ 461], 00:19:29.316 | 99.00th=[ 603], 99.50th=[ 824], 99.90th=[42206], 99.95th=[42206], 00:19:29.316 | 99.99th=[45876] 00:19:29.316 bw ( KiB/s): min= 3384, max=11096, per=89.71%, avg=8051.29, stdev=2844.45, samples=7 00:19:29.316 iops : min= 846, max= 2774, avg=2012.71, stdev=711.20, samples=7 00:19:29.316 lat (usec) : 500=96.38%, 750=2.97%, 1000=0.28% 00:19:29.316 lat (msec) 
: 2=0.06%, 4=0.03%, 50=0.28% 00:19:29.316 cpu : usr=1.54%, sys=3.18%, ctx=7204, majf=0, minf=1 00:19:29.316 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:29.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:29.316 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:29.316 issued rwts: total=7200,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:29.316 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:29.316 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3794697: Tue Jul 23 01:40:42 2024 00:19:29.316 read: IOPS=53, BW=211KiB/s (216kB/s)(668KiB/3165msec) 00:19:29.316 slat (usec): min=5, max=4838, avg=46.20, stdev=372.11 00:19:29.316 clat (usec): min=364, max=42185, avg=18764.48, stdev=20405.99 00:19:29.316 lat (usec): min=371, max=46014, avg=18810.87, stdev=20447.64 00:19:29.316 clat percentiles (usec): 00:19:29.316 | 1.00th=[ 371], 5.00th=[ 379], 10.00th=[ 383], 20.00th=[ 392], 00:19:29.316 | 30.00th=[ 396], 40.00th=[ 404], 50.00th=[ 420], 60.00th=[41157], 00:19:29.316 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:19:29.316 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:29.316 | 99.99th=[42206] 00:19:29.316 bw ( KiB/s): min= 96, max= 824, per=2.42%, avg=217.33, stdev=297.20, samples=6 00:19:29.316 iops : min= 24, max= 206, avg=54.33, stdev=74.30, samples=6 00:19:29.316 lat (usec) : 500=54.76% 00:19:29.316 lat (msec) : 50=44.64% 00:19:29.316 cpu : usr=0.19%, sys=0.00%, ctx=169, majf=0, minf=1 00:19:29.316 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:29.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:29.316 complete : 0=0.6%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:29.316 issued rwts: total=168,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:29.316 latency : target=0, window=0, 
percentile=100.00%, depth=1 00:19:29.316 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3794698: Tue Jul 23 01:40:42 2024 00:19:29.316 read: IOPS=27, BW=109KiB/s (111kB/s)(316KiB/2903msec) 00:19:29.316 slat (nsec): min=10163, max=39824, avg=22909.81, stdev=9666.14 00:19:29.316 clat (usec): min=444, max=42005, avg=36427.15, stdev=12958.06 00:19:29.316 lat (usec): min=460, max=42020, avg=36450.15, stdev=12959.66 00:19:29.316 clat percentiles (usec): 00:19:29.316 | 1.00th=[ 445], 5.00th=[ 498], 10.00th=[ 570], 20.00th=[41157], 00:19:29.316 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:29.316 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:19:29.316 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:29.316 | 99.99th=[42206] 00:19:29.316 bw ( KiB/s): min= 96, max= 136, per=1.23%, avg=110.40, stdev=17.34, samples=5 00:19:29.316 iops : min= 24, max= 34, avg=27.60, stdev= 4.34, samples=5 00:19:29.316 lat (usec) : 500=5.00%, 750=6.25% 00:19:29.316 lat (msec) : 50=87.50% 00:19:29.316 cpu : usr=0.00%, sys=0.10%, ctx=81, majf=0, minf=1 00:19:29.316 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:29.316 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:29.316 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:29.316 issued rwts: total=80,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:29.316 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:29.316 00:19:29.316 Run status group 0 (all jobs): 00:19:29.316 READ: bw=8974KiB/s (9190kB/s), 109KiB/s-7768KiB/s (111kB/s-7954kB/s), io=32.5MiB (34.1MB), run=2903-3707msec 00:19:29.316 00:19:29.316 Disk stats (read/write): 00:19:29.316 nvme0n1: ios=821/0, merge=0/0, ticks=4468/0, in_queue=4468, util=99.89% 00:19:29.316 nvme0n2: ios=7242/0, merge=0/0, ticks=3530/0, in_queue=3530, util=99.20% 00:19:29.316 nvme0n3: 
ios=166/0, merge=0/0, ticks=3095/0, in_queue=3095, util=96.63% 00:19:29.316 nvme0n4: ios=124/0, merge=0/0, ticks=4015/0, in_queue=4015, util=99.90% 00:19:29.575 01:40:42 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:29.575 01:40:42 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:19:29.833 01:40:42 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:29.833 01:40:42 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:19:30.095 01:40:43 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:30.095 01:40:43 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:19:30.362 01:40:43 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:30.362 01:40:43 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:19:30.620 01:40:43 -- target/fio.sh@69 -- # fio_status=0 00:19:30.620 01:40:43 -- target/fio.sh@70 -- # wait 3794572 00:19:30.620 01:40:43 -- target/fio.sh@70 -- # fio_status=4 00:19:30.620 01:40:43 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:30.620 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:30.620 01:40:43 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:30.620 01:40:43 -- common/autotest_common.sh@1198 -- # local i=0 00:19:30.620 01:40:43 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:19:30.620 01:40:43 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:30.620 01:40:43 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:30.620 01:40:43 -- 
common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:30.620 01:40:43 -- common/autotest_common.sh@1210 -- # return 0 00:19:30.620 01:40:43 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:19:30.620 01:40:43 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:19:30.620 nvmf hotplug test: fio failed as expected 00:19:30.620 01:40:43 -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:30.877 01:40:43 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:19:30.877 01:40:43 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:19:30.877 01:40:43 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:19:30.877 01:40:43 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:19:30.877 01:40:43 -- target/fio.sh@91 -- # nvmftestfini 00:19:30.877 01:40:43 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:30.877 01:40:43 -- nvmf/common.sh@116 -- # sync 00:19:30.877 01:40:43 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:30.877 01:40:43 -- nvmf/common.sh@119 -- # set +e 00:19:30.877 01:40:43 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:30.877 01:40:43 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:30.877 rmmod nvme_tcp 00:19:30.877 rmmod nvme_fabrics 00:19:30.877 rmmod nvme_keyring 00:19:30.877 01:40:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:30.877 01:40:43 -- nvmf/common.sh@123 -- # set -e 00:19:30.877 01:40:43 -- nvmf/common.sh@124 -- # return 0 00:19:30.877 01:40:43 -- nvmf/common.sh@477 -- # '[' -n 3792521 ']' 00:19:30.877 01:40:43 -- nvmf/common.sh@478 -- # killprocess 3792521 00:19:30.877 01:40:43 -- common/autotest_common.sh@926 -- # '[' -z 3792521 ']' 00:19:30.877 01:40:43 -- common/autotest_common.sh@930 -- # kill -0 3792521 00:19:30.877 01:40:43 -- common/autotest_common.sh@931 -- # uname 00:19:30.877 01:40:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 
00:19:30.877 01:40:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3792521 00:19:31.135 01:40:43 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:31.135 01:40:43 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:31.135 01:40:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3792521' 00:19:31.135 killing process with pid 3792521 00:19:31.135 01:40:43 -- common/autotest_common.sh@945 -- # kill 3792521 00:19:31.135 01:40:43 -- common/autotest_common.sh@950 -- # wait 3792521 00:19:31.135 01:40:44 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:31.135 01:40:44 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:31.135 01:40:44 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:31.135 01:40:44 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:31.135 01:40:44 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:31.135 01:40:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:31.135 01:40:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:31.135 01:40:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:33.668 01:40:46 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:33.668 00:19:33.668 real 0m23.648s 00:19:33.668 user 1m22.286s 00:19:33.668 sys 0m6.724s 00:19:33.668 01:40:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:33.668 01:40:46 -- common/autotest_common.sh@10 -- # set +x 00:19:33.668 ************************************ 00:19:33.668 END TEST nvmf_fio_target 00:19:33.668 ************************************ 00:19:33.668 01:40:46 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:19:33.668 01:40:46 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:33.668 01:40:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:33.668 01:40:46 -- common/autotest_common.sh@10 -- # set +x 
00:19:33.668 ************************************ 00:19:33.668 START TEST nvmf_bdevio 00:19:33.668 ************************************ 00:19:33.668 01:40:46 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:19:33.668 * Looking for test storage... 00:19:33.668 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:33.668 01:40:46 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:33.668 01:40:46 -- nvmf/common.sh@7 -- # uname -s 00:19:33.668 01:40:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:33.668 01:40:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:33.668 01:40:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:33.668 01:40:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:33.668 01:40:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:33.668 01:40:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:33.668 01:40:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:33.668 01:40:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:33.668 01:40:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:33.668 01:40:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:33.668 01:40:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:33.668 01:40:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:33.668 01:40:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:33.668 01:40:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:33.668 01:40:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:33.668 01:40:46 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:33.668 01:40:46 -- scripts/common.sh@433 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:19:33.668 01:40:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:33.668 01:40:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:33.668 01:40:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.668 01:40:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.668 01:40:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.668 01:40:46 -- paths/export.sh@5 -- # export PATH 00:19:33.668 
01:40:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.668 01:40:46 -- nvmf/common.sh@46 -- # : 0 00:19:33.668 01:40:46 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:33.668 01:40:46 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:33.668 01:40:46 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:33.668 01:40:46 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:33.668 01:40:46 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:33.668 01:40:46 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:33.668 01:40:46 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:33.668 01:40:46 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:33.668 01:40:46 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:33.668 01:40:46 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:33.669 01:40:46 -- target/bdevio.sh@14 -- # nvmftestinit 00:19:33.669 01:40:46 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:33.669 01:40:46 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:33.669 01:40:46 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:33.669 01:40:46 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:33.669 01:40:46 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:33.669 01:40:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:33.669 01:40:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:33.669 01:40:46 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:19:33.669 01:40:46 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:33.669 01:40:46 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:33.669 01:40:46 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:33.669 01:40:46 -- common/autotest_common.sh@10 -- # set +x 00:19:35.571 01:40:48 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:35.571 01:40:48 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:35.571 01:40:48 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:35.571 01:40:48 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:35.571 01:40:48 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:35.571 01:40:48 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:35.571 01:40:48 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:35.571 01:40:48 -- nvmf/common.sh@294 -- # net_devs=() 00:19:35.571 01:40:48 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:35.571 01:40:48 -- nvmf/common.sh@295 -- # e810=() 00:19:35.571 01:40:48 -- nvmf/common.sh@295 -- # local -ga e810 00:19:35.571 01:40:48 -- nvmf/common.sh@296 -- # x722=() 00:19:35.571 01:40:48 -- nvmf/common.sh@296 -- # local -ga x722 00:19:35.571 01:40:48 -- nvmf/common.sh@297 -- # mlx=() 00:19:35.571 01:40:48 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:35.571 01:40:48 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:35.571 01:40:48 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:35.571 01:40:48 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:35.571 01:40:48 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:35.571 01:40:48 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:35.571 01:40:48 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:35.571 01:40:48 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:35.571 01:40:48 -- nvmf/common.sh@313 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:35.571 01:40:48 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:35.571 01:40:48 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:35.571 01:40:48 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:35.571 01:40:48 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:35.571 01:40:48 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:19:35.571 01:40:48 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:19:35.571 01:40:48 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:19:35.571 01:40:48 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:19:35.571 01:40:48 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:35.571 01:40:48 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:35.571 01:40:48 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:35.571 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:35.571 01:40:48 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:35.571 01:40:48 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:35.571 01:40:48 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:35.571 01:40:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:35.571 01:40:48 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:35.571 01:40:48 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:35.571 01:40:48 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:35.571 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:35.571 01:40:48 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:35.571 01:40:48 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:35.571 01:40:48 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:35.571 01:40:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:35.571 01:40:48 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:35.571 01:40:48 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:35.571 01:40:48 -- 
nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:19:35.571 01:40:48 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:19:35.571 01:40:48 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:35.571 01:40:48 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:35.571 01:40:48 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:35.571 01:40:48 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:35.571 01:40:48 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:35.571 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:35.571 01:40:48 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:35.571 01:40:48 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:35.571 01:40:48 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:35.571 01:40:48 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:35.571 01:40:48 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:35.571 01:40:48 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:35.571 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:35.571 01:40:48 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:35.571 01:40:48 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:35.571 01:40:48 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:35.571 01:40:48 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:35.571 01:40:48 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:19:35.571 01:40:48 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:19:35.571 01:40:48 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:35.571 01:40:48 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:35.571 01:40:48 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:35.571 01:40:48 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:19:35.571 01:40:48 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:35.571 01:40:48 -- nvmf/common.sh@236 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:35.571 01:40:48 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:19:35.571 01:40:48 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:35.571 01:40:48 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:35.571 01:40:48 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:19:35.571 01:40:48 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:19:35.571 01:40:48 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:19:35.571 01:40:48 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:35.571 01:40:48 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:35.571 01:40:48 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:35.571 01:40:48 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:19:35.571 01:40:48 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:35.571 01:40:48 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:35.571 01:40:48 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:35.571 01:40:48 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:19:35.571 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:35.571 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.119 ms 00:19:35.571 00:19:35.571 --- 10.0.0.2 ping statistics --- 00:19:35.571 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:35.571 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:19:35.571 01:40:48 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:35.571 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:35.571 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:19:35.571 00:19:35.571 --- 10.0.0.1 ping statistics --- 00:19:35.571 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:35.571 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:19:35.571 01:40:48 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:35.571 01:40:48 -- nvmf/common.sh@410 -- # return 0 00:19:35.571 01:40:48 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:35.571 01:40:48 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:35.571 01:40:48 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:35.571 01:40:48 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:35.571 01:40:48 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:35.571 01:40:48 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:35.571 01:40:48 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:35.571 01:40:48 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:35.571 01:40:48 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:35.571 01:40:48 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:35.571 01:40:48 -- common/autotest_common.sh@10 -- # set +x 00:19:35.571 01:40:48 -- nvmf/common.sh@469 -- # nvmfpid=3797299 00:19:35.571 01:40:48 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:19:35.571 01:40:48 -- nvmf/common.sh@470 -- # waitforlisten 3797299 00:19:35.571 01:40:48 -- common/autotest_common.sh@819 -- # '[' -z 3797299 ']' 00:19:35.571 01:40:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:35.571 01:40:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:35.571 01:40:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:35.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:35.571 01:40:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:35.571 01:40:48 -- common/autotest_common.sh@10 -- # set +x 00:19:35.571 [2024-07-23 01:40:48.495814] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:19:35.571 [2024-07-23 01:40:48.495894] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:35.571 EAL: No free 2048 kB hugepages reported on node 1 00:19:35.571 [2024-07-23 01:40:48.563729] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:35.571 [2024-07-23 01:40:48.656636] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:35.571 [2024-07-23 01:40:48.656798] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:35.571 [2024-07-23 01:40:48.656817] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:35.571 [2024-07-23 01:40:48.656831] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:35.572 [2024-07-23 01:40:48.656926] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:19:35.572 [2024-07-23 01:40:48.656985] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:19:35.572 [2024-07-23 01:40:48.657055] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:19:35.572 [2024-07-23 01:40:48.657059] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:36.507 01:40:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:36.507 01:40:49 -- common/autotest_common.sh@852 -- # return 0 00:19:36.507 01:40:49 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:36.507 01:40:49 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:36.507 01:40:49 -- common/autotest_common.sh@10 -- # set +x 00:19:36.507 01:40:49 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:36.507 01:40:49 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:36.507 01:40:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:36.507 01:40:49 -- common/autotest_common.sh@10 -- # set +x 00:19:36.507 [2024-07-23 01:40:49.464167] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:36.507 01:40:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:36.507 01:40:49 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:36.507 01:40:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:36.507 01:40:49 -- common/autotest_common.sh@10 -- # set +x 00:19:36.507 Malloc0 00:19:36.507 01:40:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:36.507 01:40:49 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:36.507 01:40:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:36.507 01:40:49 -- common/autotest_common.sh@10 -- # set +x 00:19:36.507 01:40:49 -- common/autotest_common.sh@579 -- # [[ 0 
== 0 ]] 00:19:36.507 01:40:49 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:36.507 01:40:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:36.507 01:40:49 -- common/autotest_common.sh@10 -- # set +x 00:19:36.507 01:40:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:36.507 01:40:49 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:36.507 01:40:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:36.507 01:40:49 -- common/autotest_common.sh@10 -- # set +x 00:19:36.507 [2024-07-23 01:40:49.515429] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:36.507 01:40:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:36.507 01:40:49 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:19:36.507 01:40:49 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:36.507 01:40:49 -- nvmf/common.sh@520 -- # config=() 00:19:36.507 01:40:49 -- nvmf/common.sh@520 -- # local subsystem config 00:19:36.507 01:40:49 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:36.507 01:40:49 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:36.507 { 00:19:36.507 "params": { 00:19:36.507 "name": "Nvme$subsystem", 00:19:36.507 "trtype": "$TEST_TRANSPORT", 00:19:36.507 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:36.507 "adrfam": "ipv4", 00:19:36.507 "trsvcid": "$NVMF_PORT", 00:19:36.507 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:36.507 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:36.507 "hdgst": ${hdgst:-false}, 00:19:36.507 "ddgst": ${ddgst:-false} 00:19:36.507 }, 00:19:36.507 "method": "bdev_nvme_attach_controller" 00:19:36.507 } 00:19:36.507 EOF 00:19:36.507 )") 00:19:36.507 01:40:49 -- nvmf/common.sh@542 -- # cat 00:19:36.507 01:40:49 -- nvmf/common.sh@544 -- # jq . 
00:19:36.507 01:40:49 -- nvmf/common.sh@545 -- # IFS=, 00:19:36.507 01:40:49 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:36.507 "params": { 00:19:36.507 "name": "Nvme1", 00:19:36.507 "trtype": "tcp", 00:19:36.507 "traddr": "10.0.0.2", 00:19:36.507 "adrfam": "ipv4", 00:19:36.507 "trsvcid": "4420", 00:19:36.507 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:36.507 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:36.507 "hdgst": false, 00:19:36.507 "ddgst": false 00:19:36.507 }, 00:19:36.507 "method": "bdev_nvme_attach_controller" 00:19:36.507 }' 00:19:36.507 [2024-07-23 01:40:49.558269] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:19:36.507 [2024-07-23 01:40:49.558365] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3797401 ] 00:19:36.507 EAL: No free 2048 kB hugepages reported on node 1 00:19:36.764 [2024-07-23 01:40:49.624261] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:36.765 [2024-07-23 01:40:49.711149] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:36.765 [2024-07-23 01:40:49.711199] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:36.765 [2024-07-23 01:40:49.711202] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:37.023 [2024-07-23 01:40:49.875959] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:19:37.023 [2024-07-23 01:40:49.876011] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:19:37.023 I/O targets: 00:19:37.023 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:37.023 00:19:37.023 00:19:37.023 CUnit - A unit testing framework for C - Version 2.1-3 00:19:37.023 http://cunit.sourceforge.net/ 00:19:37.023 00:19:37.023 00:19:37.023 Suite: bdevio tests on: Nvme1n1 00:19:37.023 Test: blockdev write read block ...passed 00:19:37.023 Test: blockdev write zeroes read block ...passed 00:19:37.023 Test: blockdev write zeroes read no split ...passed 00:19:37.023 Test: blockdev write zeroes read split ...passed 00:19:37.023 Test: blockdev write zeroes read split partial ...passed 00:19:37.023 Test: blockdev reset ...[2024-07-23 01:40:50.091575] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:37.023 [2024-07-23 01:40:50.091709] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c03e00 (9): Bad file descriptor 00:19:37.023 [2024-07-23 01:40:50.107423] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:37.023 passed 00:19:37.023 Test: blockdev write read 8 blocks ...passed 00:19:37.023 Test: blockdev write read size > 128k ...passed 00:19:37.023 Test: blockdev write read invalid size ...passed 00:19:37.283 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:37.283 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:37.283 Test: blockdev write read max offset ...passed 00:19:37.283 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:37.283 Test: blockdev writev readv 8 blocks ...passed 00:19:37.283 Test: blockdev writev readv 30 x 1block ...passed 00:19:37.283 Test: blockdev writev readv block ...passed 00:19:37.283 Test: blockdev writev readv size > 128k ...passed 00:19:37.283 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:37.283 Test: blockdev comparev and writev ...[2024-07-23 01:40:50.281916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:37.283 [2024-07-23 01:40:50.281951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:37.283 [2024-07-23 01:40:50.281975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:37.283 [2024-07-23 01:40:50.281992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:37.283 [2024-07-23 01:40:50.282382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:37.283 [2024-07-23 01:40:50.282407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:37.283 [2024-07-23 01:40:50.282429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:37.283 [2024-07-23 01:40:50.282445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:37.283 [2024-07-23 01:40:50.282829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:37.283 [2024-07-23 01:40:50.282853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:37.283 [2024-07-23 01:40:50.282875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:37.283 [2024-07-23 01:40:50.282890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:37.283 [2024-07-23 01:40:50.283279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:37.283 [2024-07-23 01:40:50.283303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:37.283 [2024-07-23 01:40:50.283324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:37.283 [2024-07-23 01:40:50.283339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:37.283 passed 00:19:37.283 Test: blockdev nvme passthru rw ...passed 00:19:37.283 Test: blockdev nvme passthru vendor specific ...[2024-07-23 01:40:50.365984] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:37.283 [2024-07-23 01:40:50.366012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:37.283 [2024-07-23 01:40:50.366213] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:37.283 [2024-07-23 01:40:50.366237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:37.283 [2024-07-23 01:40:50.366437] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:37.283 [2024-07-23 01:40:50.366460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:37.283 [2024-07-23 01:40:50.366648] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:37.283 [2024-07-23 01:40:50.366672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:37.283 passed 00:19:37.542 Test: blockdev nvme admin passthru ...passed 00:19:37.542 Test: blockdev copy ...passed 00:19:37.542 00:19:37.543 Run Summary: Type Total Ran Passed Failed Inactive 00:19:37.543 suites 1 1 n/a 0 0 00:19:37.543 tests 23 23 23 0 0 00:19:37.543 asserts 152 152 152 0 n/a 00:19:37.543 00:19:37.543 Elapsed time = 1.094 seconds 00:19:37.543 01:40:50 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:37.543 01:40:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:37.543 01:40:50 -- common/autotest_common.sh@10 -- # set +x 00:19:37.543 01:40:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:37.543 01:40:50 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:37.543 01:40:50 -- target/bdevio.sh@30 -- # nvmftestfini 00:19:37.543 01:40:50 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:37.543 01:40:50 -- nvmf/common.sh@116 -- # sync 00:19:37.543 
01:40:50 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:37.543 01:40:50 -- nvmf/common.sh@119 -- # set +e 00:19:37.543 01:40:50 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:37.543 01:40:50 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:37.543 rmmod nvme_tcp 00:19:37.802 rmmod nvme_fabrics 00:19:37.802 rmmod nvme_keyring 00:19:37.802 01:40:50 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:37.802 01:40:50 -- nvmf/common.sh@123 -- # set -e 00:19:37.802 01:40:50 -- nvmf/common.sh@124 -- # return 0 00:19:37.802 01:40:50 -- nvmf/common.sh@477 -- # '[' -n 3797299 ']' 00:19:37.802 01:40:50 -- nvmf/common.sh@478 -- # killprocess 3797299 00:19:37.802 01:40:50 -- common/autotest_common.sh@926 -- # '[' -z 3797299 ']' 00:19:37.802 01:40:50 -- common/autotest_common.sh@930 -- # kill -0 3797299 00:19:37.802 01:40:50 -- common/autotest_common.sh@931 -- # uname 00:19:37.802 01:40:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:37.802 01:40:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3797299 00:19:37.802 01:40:50 -- common/autotest_common.sh@932 -- # process_name=reactor_3 00:19:37.802 01:40:50 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:19:37.802 01:40:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3797299' 00:19:37.802 killing process with pid 3797299 00:19:37.802 01:40:50 -- common/autotest_common.sh@945 -- # kill 3797299 00:19:37.802 01:40:50 -- common/autotest_common.sh@950 -- # wait 3797299 00:19:38.062 01:40:50 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:38.062 01:40:50 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:38.062 01:40:50 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:38.062 01:40:50 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:38.062 01:40:50 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:38.062 01:40:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:38.062 01:40:50 -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:38.062 01:40:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:39.967 01:40:52 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:39.967 00:19:39.967 real 0m6.702s 00:19:39.967 user 0m11.984s 00:19:39.967 sys 0m2.023s 00:19:39.967 01:40:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:39.967 01:40:53 -- common/autotest_common.sh@10 -- # set +x 00:19:39.967 ************************************ 00:19:39.967 END TEST nvmf_bdevio 00:19:39.967 ************************************ 00:19:39.967 01:40:53 -- nvmf/nvmf.sh@57 -- # '[' tcp = tcp ']' 00:19:39.967 01:40:53 -- nvmf/nvmf.sh@58 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:39.967 01:40:53 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:19:39.967 01:40:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:39.967 01:40:53 -- common/autotest_common.sh@10 -- # set +x 00:19:39.967 ************************************ 00:19:39.967 START TEST nvmf_bdevio_no_huge 00:19:39.967 ************************************ 00:19:39.967 01:40:53 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:39.967 * Looking for test storage... 
00:19:40.227 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:40.227 01:40:53 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:40.227 01:40:53 -- nvmf/common.sh@7 -- # uname -s 00:19:40.227 01:40:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:40.227 01:40:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:40.227 01:40:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:40.227 01:40:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:40.227 01:40:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:40.227 01:40:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:40.227 01:40:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:40.227 01:40:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:40.227 01:40:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:40.227 01:40:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:40.227 01:40:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:40.227 01:40:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:40.227 01:40:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:40.227 01:40:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:40.227 01:40:53 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:40.227 01:40:53 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:40.227 01:40:53 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:40.227 01:40:53 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:40.227 01:40:53 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:40.227 01:40:53 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:40.227 01:40:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:40.227 01:40:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:40.227 01:40:53 -- paths/export.sh@5 -- # export PATH 00:19:40.227 01:40:53 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:40.228 01:40:53 -- nvmf/common.sh@46 -- # : 0 00:19:40.228 01:40:53 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:40.228 01:40:53 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:40.228 01:40:53 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:40.228 01:40:53 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:40.228 01:40:53 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:40.228 01:40:53 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:40.228 01:40:53 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:40.228 01:40:53 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:40.228 01:40:53 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:40.228 01:40:53 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:40.228 01:40:53 -- target/bdevio.sh@14 -- # nvmftestinit 00:19:40.228 01:40:53 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:40.228 01:40:53 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:40.228 01:40:53 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:40.228 01:40:53 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:40.228 01:40:53 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:40.228 01:40:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:40.228 01:40:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:40.228 01:40:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:40.228 01:40:53 -- 
nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:40.228 01:40:53 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:40.228 01:40:53 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:40.228 01:40:53 -- common/autotest_common.sh@10 -- # set +x 00:19:42.136 01:40:54 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:42.136 01:40:54 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:42.136 01:40:54 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:42.136 01:40:54 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:42.136 01:40:54 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:42.136 01:40:54 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:42.136 01:40:54 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:42.136 01:40:54 -- nvmf/common.sh@294 -- # net_devs=() 00:19:42.136 01:40:54 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:42.136 01:40:54 -- nvmf/common.sh@295 -- # e810=() 00:19:42.136 01:40:54 -- nvmf/common.sh@295 -- # local -ga e810 00:19:42.136 01:40:54 -- nvmf/common.sh@296 -- # x722=() 00:19:42.136 01:40:54 -- nvmf/common.sh@296 -- # local -ga x722 00:19:42.136 01:40:54 -- nvmf/common.sh@297 -- # mlx=() 00:19:42.136 01:40:54 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:42.136 01:40:54 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:42.136 01:40:54 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:42.136 01:40:54 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:42.136 01:40:54 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:42.136 01:40:54 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:42.136 01:40:54 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:42.136 01:40:54 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:42.136 01:40:54 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:42.136 01:40:54 -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:42.136 01:40:54 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:42.136 01:40:54 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:42.136 01:40:54 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:42.136 01:40:54 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:19:42.136 01:40:54 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:19:42.136 01:40:54 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:19:42.136 01:40:54 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:19:42.136 01:40:54 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:42.136 01:40:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:42.136 01:40:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:42.136 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:42.136 01:40:54 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:42.136 01:40:54 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:42.136 01:40:54 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:42.136 01:40:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:42.136 01:40:54 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:42.136 01:40:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:42.136 01:40:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:42.136 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:42.136 01:40:54 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:42.136 01:40:54 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:42.136 01:40:54 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:42.136 01:40:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:42.136 01:40:54 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:42.136 01:40:54 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:42.136 01:40:54 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:19:42.136 01:40:54 -- 
nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:19:42.136 01:40:54 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:42.136 01:40:54 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:42.136 01:40:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:42.136 01:40:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:42.136 01:40:54 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:42.136 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:42.136 01:40:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:42.136 01:40:54 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:42.136 01:40:54 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:42.136 01:40:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:42.136 01:40:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:42.136 01:40:54 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:42.136 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:42.136 01:40:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:42.136 01:40:54 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:42.136 01:40:54 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:42.136 01:40:54 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:42.136 01:40:54 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:19:42.136 01:40:54 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:19:42.136 01:40:54 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:42.136 01:40:54 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:42.136 01:40:54 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:42.136 01:40:54 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:19:42.136 01:40:54 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:42.136 01:40:54 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:42.136 01:40:54 -- 
nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:19:42.136 01:40:54 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:42.136 01:40:54 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:42.136 01:40:54 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:19:42.136 01:40:54 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:19:42.136 01:40:54 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:19:42.136 01:40:54 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:42.136 01:40:55 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:42.136 01:40:55 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:42.136 01:40:55 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:19:42.136 01:40:55 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:42.136 01:40:55 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:42.136 01:40:55 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:42.136 01:40:55 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:19:42.136 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:42.136 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:19:42.136 00:19:42.136 --- 10.0.0.2 ping statistics --- 00:19:42.136 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:42.136 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:19:42.136 01:40:55 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:42.136 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:42.136 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.094 ms 00:19:42.136 00:19:42.136 --- 10.0.0.1 ping statistics --- 00:19:42.136 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:42.136 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:19:42.136 01:40:55 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:42.136 01:40:55 -- nvmf/common.sh@410 -- # return 0 00:19:42.136 01:40:55 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:42.136 01:40:55 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:42.136 01:40:55 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:42.136 01:40:55 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:42.136 01:40:55 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:42.136 01:40:55 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:42.136 01:40:55 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:42.137 01:40:55 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:42.137 01:40:55 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:42.137 01:40:55 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:42.137 01:40:55 -- common/autotest_common.sh@10 -- # set +x 00:19:42.137 01:40:55 -- nvmf/common.sh@469 -- # nvmfpid=3799457 00:19:42.137 01:40:55 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:19:42.137 01:40:55 -- nvmf/common.sh@470 -- # waitforlisten 3799457 00:19:42.137 01:40:55 -- common/autotest_common.sh@819 -- # '[' -z 3799457 ']' 00:19:42.137 01:40:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:42.137 01:40:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:42.137 01:40:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:42.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:42.137 01:40:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:42.137 01:40:55 -- common/autotest_common.sh@10 -- # set +x 00:19:42.137 [2024-07-23 01:40:55.143141] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:19:42.137 [2024-07-23 01:40:55.143227] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:19:42.137 [2024-07-23 01:40:55.221181] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:42.396 [2024-07-23 01:40:55.310179] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:42.396 [2024-07-23 01:40:55.310341] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:42.396 [2024-07-23 01:40:55.310361] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:42.396 [2024-07-23 01:40:55.310383] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:42.396 [2024-07-23 01:40:55.310463] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:19:42.396 [2024-07-23 01:40:55.310493] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:19:42.396 [2024-07-23 01:40:55.310547] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:19:42.396 [2024-07-23 01:40:55.310549] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:43.331 01:40:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:43.331 01:40:56 -- common/autotest_common.sh@852 -- # return 0 00:19:43.331 01:40:56 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:43.331 01:40:56 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:43.331 01:40:56 -- common/autotest_common.sh@10 -- # set +x 00:19:43.331 01:40:56 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:43.331 01:40:56 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:43.331 01:40:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:43.331 01:40:56 -- common/autotest_common.sh@10 -- # set +x 00:19:43.331 [2024-07-23 01:40:56.125802] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:43.331 01:40:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:43.331 01:40:56 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:43.331 01:40:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:43.331 01:40:56 -- common/autotest_common.sh@10 -- # set +x 00:19:43.331 Malloc0 00:19:43.331 01:40:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:43.331 01:40:56 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:43.331 01:40:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:43.331 01:40:56 -- common/autotest_common.sh@10 -- # set +x 00:19:43.331 01:40:56 -- common/autotest_common.sh@579 -- # [[ 0 
== 0 ]] 00:19:43.331 01:40:56 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:43.331 01:40:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:43.331 01:40:56 -- common/autotest_common.sh@10 -- # set +x 00:19:43.331 01:40:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:43.331 01:40:56 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:43.331 01:40:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:43.331 01:40:56 -- common/autotest_common.sh@10 -- # set +x 00:19:43.331 [2024-07-23 01:40:56.163638] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:43.331 01:40:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:43.331 01:40:56 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:19:43.331 01:40:56 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:43.331 01:40:56 -- nvmf/common.sh@520 -- # config=() 00:19:43.331 01:40:56 -- nvmf/common.sh@520 -- # local subsystem config 00:19:43.331 01:40:56 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:43.331 01:40:56 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:43.331 { 00:19:43.331 "params": { 00:19:43.331 "name": "Nvme$subsystem", 00:19:43.331 "trtype": "$TEST_TRANSPORT", 00:19:43.331 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:43.331 "adrfam": "ipv4", 00:19:43.331 "trsvcid": "$NVMF_PORT", 00:19:43.331 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:43.331 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:43.331 "hdgst": ${hdgst:-false}, 00:19:43.331 "ddgst": ${ddgst:-false} 00:19:43.331 }, 00:19:43.331 "method": "bdev_nvme_attach_controller" 00:19:43.331 } 00:19:43.331 EOF 00:19:43.331 )") 00:19:43.331 01:40:56 -- nvmf/common.sh@542 -- # cat 00:19:43.331 01:40:56 -- nvmf/common.sh@544 -- # jq 
. 00:19:43.331 01:40:56 -- nvmf/common.sh@545 -- # IFS=, 00:19:43.331 01:40:56 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:43.331 "params": { 00:19:43.331 "name": "Nvme1", 00:19:43.331 "trtype": "tcp", 00:19:43.331 "traddr": "10.0.0.2", 00:19:43.331 "adrfam": "ipv4", 00:19:43.331 "trsvcid": "4420", 00:19:43.331 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:43.331 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:43.331 "hdgst": false, 00:19:43.331 "ddgst": false 00:19:43.331 }, 00:19:43.331 "method": "bdev_nvme_attach_controller" 00:19:43.331 }' 00:19:43.331 [2024-07-23 01:40:56.207448] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:19:43.331 [2024-07-23 01:40:56.207527] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3799619 ] 00:19:43.331 [2024-07-23 01:40:56.268870] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:43.331 [2024-07-23 01:40:56.351663] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:43.331 [2024-07-23 01:40:56.351714] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:43.331 [2024-07-23 01:40:56.351717] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:43.592 [2024-07-23 01:40:56.659187] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:19:43.592 [2024-07-23 01:40:56.659247] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock
00:19:43.592 I/O targets:
00:19:43.592 Nvme1n1: 131072 blocks of 512 bytes (64 MiB)
00:19:43.592
00:19:43.592
00:19:43.592 CUnit - A unit testing framework for C - Version 2.1-3
00:19:43.592 http://cunit.sourceforge.net/
00:19:43.592
00:19:43.592
00:19:43.592 Suite: bdevio tests on: Nvme1n1
00:19:43.852 Test: blockdev write read block ...passed
00:19:43.852 Test: blockdev write zeroes read block ...passed
00:19:43.852 Test: blockdev write zeroes read no split ...passed
00:19:43.852 Test: blockdev write zeroes read split ...passed
00:19:43.852 Test: blockdev write zeroes read split partial ...passed
00:19:43.852 Test: blockdev reset ...[2024-07-23 01:40:56.880079] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:19:43.852 [2024-07-23 01:40:56.880191] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1233720 (9): Bad file descriptor
00:19:43.852 [2024-07-23 01:40:56.948802] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:19:43.852 passed
00:19:43.852 Test: blockdev write read 8 blocks ...passed
00:19:44.111 Test: blockdev write read size > 128k ...passed
00:19:44.111 Test: blockdev write read invalid size ...passed
00:19:44.111 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:19:44.111 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:19:44.111 Test: blockdev write read max offset ...passed
00:19:44.111 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:19:44.111 Test: blockdev writev readv 8 blocks ...passed
00:19:44.111 Test: blockdev writev readv 30 x 1block ...passed
00:19:44.111 Test: blockdev writev readv block ...passed
00:19:44.111 Test: blockdev writev readv size > 128k ...passed
00:19:44.111 Test: blockdev writev readv size > 128k in two iovs ...passed
00:19:44.111 Test: blockdev comparev and writev ...[2024-07-23 01:40:57.166148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:19:44.111 [2024-07-23 01:40:57.166183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:19:44.111 [2024-07-23 01:40:57.166207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:19:44.111 [2024-07-23 01:40:57.166224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:19:44.111 [2024-07-23 01:40:57.166592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:19:44.111 [2024-07-23 01:40:57.166624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:19:44.111 [2024-07-23 01:40:57.166647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:19:44.111 [2024-07-23 01:40:57.166664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:19:44.111 [2024-07-23 01:40:57.167033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:19:44.111 [2024-07-23 01:40:57.167057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:19:44.111 [2024-07-23 01:40:57.167078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:19:44.111 [2024-07-23 01:40:57.167100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:19:44.111 [2024-07-23 01:40:57.167464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:19:44.111 [2024-07-23 01:40:57.167487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:19:44.111 [2024-07-23 01:40:57.167508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:19:44.111 [2024-07-23 01:40:57.167523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:19:44.111 passed
00:19:44.370 Test: blockdev nvme passthru rw ...passed
00:19:44.370 Test: blockdev nvme passthru vendor specific ...[2024-07-23 01:40:57.250987] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:19:44.370 [2024-07-23 01:40:57.251015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*:
INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:19:44.370 [2024-07-23 01:40:57.251236] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:19:44.370 [2024-07-23 01:40:57.251258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:19:44.370 [2024-07-23 01:40:57.251471] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:19:44.370 [2024-07-23 01:40:57.251495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:19:44.370 [2024-07-23 01:40:57.251707] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:19:44.370 [2024-07-23 01:40:57.251730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:19:44.370 passed
00:19:44.370 Test: blockdev nvme admin passthru ...passed
00:19:44.370 Test: blockdev copy ...passed
00:19:44.370
00:19:44.370 Run Summary: Type Total Ran Passed Failed Inactive
00:19:44.370 suites 1 1 n/a 0 0
00:19:44.370 tests 23 23 23 0 0
00:19:44.370 asserts 152 152 152 0 n/a
00:19:44.370
00:19:44.370 Elapsed time = 1.306 seconds
00:19:44.628 01:40:57 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:19:44.628 01:40:57 -- common/autotest_common.sh@551 -- # xtrace_disable
00:19:44.628 01:40:57 -- common/autotest_common.sh@10 -- # set +x
00:19:44.628 01:40:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:19:44.628 01:40:57 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT
00:19:44.628 01:40:57 -- target/bdevio.sh@30 -- # nvmftestfini
00:19:44.628 01:40:57 -- nvmf/common.sh@476 -- # nvmfcleanup
00:19:44.628 01:40:57 -- nvmf/common.sh@116 -- # sync
00:19:44.628
01:40:57 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:44.628 01:40:57 -- nvmf/common.sh@119 -- # set +e 00:19:44.628 01:40:57 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:44.628 01:40:57 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:44.628 rmmod nvme_tcp 00:19:44.628 rmmod nvme_fabrics 00:19:44.628 rmmod nvme_keyring 00:19:44.628 01:40:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:44.628 01:40:57 -- nvmf/common.sh@123 -- # set -e 00:19:44.628 01:40:57 -- nvmf/common.sh@124 -- # return 0 00:19:44.628 01:40:57 -- nvmf/common.sh@477 -- # '[' -n 3799457 ']' 00:19:44.628 01:40:57 -- nvmf/common.sh@478 -- # killprocess 3799457 00:19:44.628 01:40:57 -- common/autotest_common.sh@926 -- # '[' -z 3799457 ']' 00:19:44.628 01:40:57 -- common/autotest_common.sh@930 -- # kill -0 3799457 00:19:44.628 01:40:57 -- common/autotest_common.sh@931 -- # uname 00:19:44.628 01:40:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:44.628 01:40:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3799457 00:19:44.888 01:40:57 -- common/autotest_common.sh@932 -- # process_name=reactor_3 00:19:44.888 01:40:57 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:19:44.888 01:40:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3799457' 00:19:44.888 killing process with pid 3799457 00:19:44.888 01:40:57 -- common/autotest_common.sh@945 -- # kill 3799457 00:19:44.888 01:40:57 -- common/autotest_common.sh@950 -- # wait 3799457 00:19:45.147 01:40:58 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:45.148 01:40:58 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:45.148 01:40:58 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:45.148 01:40:58 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:45.148 01:40:58 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:45.148 01:40:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:45.148 01:40:58 -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:19:45.148 01:40:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:19:47.055 01:41:00 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1
00:19:47.055
00:19:47.055 real 0m7.125s
00:19:47.055 user 0m14.221s
00:19:47.055 sys 0m2.469s
00:19:47.055 01:41:00 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:19:47.055 01:41:00 -- common/autotest_common.sh@10 -- # set +x
00:19:47.055 ************************************
00:19:47.055 END TEST nvmf_bdevio_no_huge
00:19:47.055 ************************************
00:19:47.314 01:41:00 -- nvmf/nvmf.sh@59 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp
00:19:47.314 01:41:00 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']'
00:19:47.314 01:41:00 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:19:47.314 01:41:00 -- common/autotest_common.sh@10 -- # set +x
00:19:47.314 ************************************
00:19:47.314 START TEST nvmf_tls
00:19:47.314 ************************************
00:19:47.314 01:41:00 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp
00:19:47.314 * Looking for test storage...
00:19:47.314 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:47.314 01:41:00 -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:47.314 01:41:00 -- nvmf/common.sh@7 -- # uname -s 00:19:47.314 01:41:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:47.314 01:41:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:47.314 01:41:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:47.314 01:41:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:47.314 01:41:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:47.314 01:41:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:47.314 01:41:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:47.314 01:41:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:47.314 01:41:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:47.314 01:41:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:47.314 01:41:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:47.314 01:41:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:47.314 01:41:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:47.314 01:41:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:47.314 01:41:00 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:47.314 01:41:00 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:47.314 01:41:00 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:47.314 01:41:00 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:47.314 01:41:00 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:47.314 01:41:00 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.314 01:41:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.314 01:41:00 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.314 01:41:00 -- paths/export.sh@5 -- # export PATH 00:19:47.314 01:41:00 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.314 01:41:00 -- nvmf/common.sh@46 -- # : 0 00:19:47.314 01:41:00 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:47.314 01:41:00 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:47.314 01:41:00 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:47.314 01:41:00 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:47.314 01:41:00 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:47.314 01:41:00 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:47.314 01:41:00 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:47.314 01:41:00 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:47.314 01:41:00 -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:47.314 01:41:00 -- target/tls.sh@71 -- # nvmftestinit 00:19:47.314 01:41:00 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:47.314 01:41:00 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:47.314 01:41:00 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:47.314 01:41:00 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:47.314 01:41:00 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:47.314 01:41:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:47.314 01:41:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:47.314 01:41:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:47.314 01:41:00 -- nvmf/common.sh@402 -- # [[ phy != virt 
]] 00:19:47.314 01:41:00 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:47.314 01:41:00 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:47.314 01:41:00 -- common/autotest_common.sh@10 -- # set +x 00:19:49.254 01:41:02 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:49.254 01:41:02 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:49.254 01:41:02 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:49.254 01:41:02 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:49.254 01:41:02 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:49.254 01:41:02 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:49.254 01:41:02 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:49.254 01:41:02 -- nvmf/common.sh@294 -- # net_devs=() 00:19:49.254 01:41:02 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:49.254 01:41:02 -- nvmf/common.sh@295 -- # e810=() 00:19:49.254 01:41:02 -- nvmf/common.sh@295 -- # local -ga e810 00:19:49.254 01:41:02 -- nvmf/common.sh@296 -- # x722=() 00:19:49.254 01:41:02 -- nvmf/common.sh@296 -- # local -ga x722 00:19:49.254 01:41:02 -- nvmf/common.sh@297 -- # mlx=() 00:19:49.254 01:41:02 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:49.254 01:41:02 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:49.254 01:41:02 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:49.254 01:41:02 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:49.254 01:41:02 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:49.254 01:41:02 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:49.254 01:41:02 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:49.254 01:41:02 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:49.254 01:41:02 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:49.254 01:41:02 -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:49.255 01:41:02 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:49.255 01:41:02 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:49.255 01:41:02 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:49.255 01:41:02 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:19:49.255 01:41:02 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:19:49.255 01:41:02 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:19:49.255 01:41:02 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:19:49.255 01:41:02 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:49.255 01:41:02 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:49.255 01:41:02 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:49.255 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:49.255 01:41:02 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:49.255 01:41:02 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:49.255 01:41:02 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:49.255 01:41:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:49.255 01:41:02 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:49.255 01:41:02 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:49.255 01:41:02 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:49.255 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:49.255 01:41:02 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:49.255 01:41:02 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:49.255 01:41:02 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:49.255 01:41:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:49.255 01:41:02 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:49.255 01:41:02 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:49.255 01:41:02 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:19:49.255 01:41:02 -- nvmf/common.sh@371 -- # [[ tcp == 
rdma ]] 00:19:49.255 01:41:02 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:49.255 01:41:02 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:49.255 01:41:02 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:49.255 01:41:02 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:49.255 01:41:02 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:49.255 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:49.255 01:41:02 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:49.255 01:41:02 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:49.255 01:41:02 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:49.255 01:41:02 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:49.255 01:41:02 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:49.255 01:41:02 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:49.255 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:49.255 01:41:02 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:49.255 01:41:02 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:49.255 01:41:02 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:49.255 01:41:02 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:49.255 01:41:02 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:19:49.255 01:41:02 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:19:49.255 01:41:02 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:49.255 01:41:02 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:49.255 01:41:02 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:49.255 01:41:02 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:19:49.255 01:41:02 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:49.255 01:41:02 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:49.255 01:41:02 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 
00:19:49.255 01:41:02 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:19:49.255 01:41:02 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:19:49.255 01:41:02 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0
00:19:49.255 01:41:02 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1
00:19:49.255 01:41:02 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk
00:19:49.255 01:41:02 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:19:49.255 01:41:02 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:19:49.255 01:41:02 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:19:49.255 01:41:02 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up
00:19:49.255 01:41:02 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:19:49.255 01:41:02 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:19:49.255 01:41:02 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:19:49.255 01:41:02 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2
00:19:49.255 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:19:49.255 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.117 ms
00:19:49.255
00:19:49.255 --- 10.0.0.2 ping statistics ---
00:19:49.255 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:19:49.255 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms
00:19:49.255 01:41:02 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:19:49.255 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:19:49.255 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms
00:19:49.255
00:19:49.255 --- 10.0.0.1 ping statistics ---
00:19:49.255 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:19:49.255 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms
00:19:49.255 01:41:02 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:19:49.255 01:41:02 -- nvmf/common.sh@410 -- # return 0
00:19:49.255 01:41:02 -- nvmf/common.sh@438 -- # '[' '' == iso ']'
00:19:49.255 01:41:02 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:19:49.255 01:41:02 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:19:49.255 01:41:02 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:19:49.255 01:41:02 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:19:49.255 01:41:02 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:19:49.255 01:41:02 -- nvmf/common.sh@462 -- # modprobe nvme-tcp
00:19:49.255 01:41:02 -- target/tls.sh@72 -- # nvmfappstart -m 0x2 --wait-for-rpc
00:19:49.255 01:41:02 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:19:49.255 01:41:02 -- common/autotest_common.sh@712 -- # xtrace_disable
00:19:49.255 01:41:02 -- common/autotest_common.sh@10 -- # set +x
00:19:49.255 01:41:02 -- nvmf/common.sh@469 -- # nvmfpid=3801832
00:19:49.255 01:41:02 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc
00:19:49.255 01:41:02 -- nvmf/common.sh@470 -- # waitforlisten 3801832
00:19:49.255 01:41:02 -- common/autotest_common.sh@819 -- # '[' -z 3801832 ']'
00:19:49.255 01:41:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:19:49.255 01:41:02 -- common/autotest_common.sh@824 -- # local max_retries=100
00:19:49.255 01:41:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:19:49.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:49.255 01:41:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:49.255 01:41:02 -- common/autotest_common.sh@10 -- # set +x 00:19:49.515 [2024-07-23 01:41:02.360480] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:19:49.515 [2024-07-23 01:41:02.360557] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:49.515 EAL: No free 2048 kB hugepages reported on node 1 00:19:49.515 [2024-07-23 01:41:02.432269] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:49.515 [2024-07-23 01:41:02.523358] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:49.515 [2024-07-23 01:41:02.523550] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:49.515 [2024-07-23 01:41:02.523579] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:49.515 [2024-07-23 01:41:02.523601] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:49.515 [2024-07-23 01:41:02.523654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:49.515 01:41:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:49.515 01:41:02 -- common/autotest_common.sh@852 -- # return 0 00:19:49.515 01:41:02 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:49.515 01:41:02 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:49.515 01:41:02 -- common/autotest_common.sh@10 -- # set +x 00:19:49.773 01:41:02 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:49.773 01:41:02 -- target/tls.sh@74 -- # '[' tcp '!=' tcp ']' 00:19:49.773 01:41:02 -- target/tls.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:19:50.031 true 00:19:50.031 01:41:02 -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:50.031 01:41:02 -- target/tls.sh@82 -- # jq -r .tls_version 00:19:50.291 01:41:03 -- target/tls.sh@82 -- # version=0 00:19:50.291 01:41:03 -- target/tls.sh@83 -- # [[ 0 != \0 ]] 00:19:50.291 01:41:03 -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:50.552 01:41:03 -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:50.552 01:41:03 -- target/tls.sh@90 -- # jq -r .tls_version 00:19:50.811 01:41:03 -- target/tls.sh@90 -- # version=13 00:19:50.811 01:41:03 -- target/tls.sh@91 -- # [[ 13 != \1\3 ]] 00:19:50.811 01:41:03 -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:19:51.068 01:41:03 -- target/tls.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:51.068 01:41:03 -- target/tls.sh@98 -- # jq -r .tls_version 
00:19:51.326 01:41:04 -- target/tls.sh@98 -- # version=7 00:19:51.326 01:41:04 -- target/tls.sh@99 -- # [[ 7 != \7 ]] 00:19:51.326 01:41:04 -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:51.326 01:41:04 -- target/tls.sh@105 -- # jq -r .enable_ktls 00:19:51.585 01:41:04 -- target/tls.sh@105 -- # ktls=false 00:19:51.585 01:41:04 -- target/tls.sh@106 -- # [[ false != \f\a\l\s\e ]] 00:19:51.585 01:41:04 -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:19:51.585 01:41:04 -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:51.585 01:41:04 -- target/tls.sh@113 -- # jq -r .enable_ktls 00:19:51.845 01:41:04 -- target/tls.sh@113 -- # ktls=true 00:19:51.845 01:41:04 -- target/tls.sh@114 -- # [[ true != \t\r\u\e ]] 00:19:51.845 01:41:04 -- target/tls.sh@120 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:19:52.103 01:41:05 -- target/tls.sh@121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:52.103 01:41:05 -- target/tls.sh@121 -- # jq -r .enable_ktls 00:19:52.361 01:41:05 -- target/tls.sh@121 -- # ktls=false 00:19:52.361 01:41:05 -- target/tls.sh@122 -- # [[ false != \f\a\l\s\e ]] 00:19:52.361 01:41:05 -- target/tls.sh@127 -- # format_interchange_psk 00112233445566778899aabbccddeeff 00:19:52.361 01:41:05 -- target/tls.sh@49 -- # local key hash crc 00:19:52.361 01:41:05 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff 00:19:52.361 01:41:05 -- target/tls.sh@51 -- # hash=01 00:19:52.361 01:41:05 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff 00:19:52.361 01:41:05 -- target/tls.sh@52 -- # gzip -1 -c 00:19:52.361 01:41:05 -- target/tls.sh@52 -- # tail -c8 00:19:52.361 01:41:05 -- 
target/tls.sh@52 -- # head -c 4 00:19:52.361 01:41:05 -- target/tls.sh@52 -- # crc=$'p$H\220' 00:19:52.361 01:41:05 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:19:52.361 01:41:05 -- target/tls.sh@54 -- # echo -n $'00112233445566778899aabbccddeeffp$H\220' 00:19:52.361 01:41:05 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:52.361 01:41:05 -- target/tls.sh@127 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:52.361 01:41:05 -- target/tls.sh@128 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 00:19:52.361 01:41:05 -- target/tls.sh@49 -- # local key hash crc 00:19:52.361 01:41:05 -- target/tls.sh@51 -- # key=ffeeddccbbaa99887766554433221100 00:19:52.361 01:41:05 -- target/tls.sh@51 -- # hash=01 00:19:52.361 01:41:05 -- target/tls.sh@52 -- # echo -n ffeeddccbbaa99887766554433221100 00:19:52.361 01:41:05 -- target/tls.sh@52 -- # gzip -1 -c 00:19:52.361 01:41:05 -- target/tls.sh@52 -- # tail -c8 00:19:52.361 01:41:05 -- target/tls.sh@52 -- # head -c 4 00:19:52.361 01:41:05 -- target/tls.sh@52 -- # crc=$'_\006o\330' 00:19:52.361 01:41:05 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:19:52.361 01:41:05 -- target/tls.sh@54 -- # echo -n $'ffeeddccbbaa99887766554433221100_\006o\330' 00:19:52.361 01:41:05 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:52.361 01:41:05 -- target/tls.sh@128 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:52.361 01:41:05 -- target/tls.sh@130 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:52.361 01:41:05 -- target/tls.sh@131 -- # key_2_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:19:52.361 01:41:05 -- target/tls.sh@133 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:52.361 01:41:05 -- target/tls.sh@134 -- # echo -n 
NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:52.361 01:41:05 -- target/tls.sh@136 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:52.361 01:41:05 -- target/tls.sh@137 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:19:52.361 01:41:05 -- target/tls.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:52.620 01:41:05 -- target/tls.sh@140 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:19:52.879 01:41:05 -- target/tls.sh@142 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:52.880 01:41:05 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:52.880 01:41:05 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:53.140 [2024-07-23 01:41:06.199066] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:53.140 01:41:06 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:53.399 01:41:06 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:53.657 [2024-07-23 01:41:06.672380] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:53.657 [2024-07-23 01:41:06.672697] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:53.657 01:41:06 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:53.914 malloc0 00:19:53.914 01:41:06 -- 
target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:54.174 01:41:07 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:54.434 01:41:07 -- target/tls.sh@146 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:54.434 EAL: No free 2048 kB hugepages reported on node 1 00:20:04.419 Initializing NVMe Controllers 00:20:04.419 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:04.419 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:04.419 Initialization complete. Launching workers. 
00:20:04.419 ======================================================== 00:20:04.419 Latency(us) 00:20:04.419 Device Information : IOPS MiB/s Average min max 00:20:04.419 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7812.18 30.52 8194.88 1281.27 8939.46 00:20:04.419 ======================================================== 00:20:04.419 Total : 7812.18 30.52 8194.88 1281.27 8939.46 00:20:04.419 00:20:04.419 01:41:17 -- target/tls.sh@152 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:04.419 01:41:17 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:04.419 01:41:17 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:04.419 01:41:17 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:04.419 01:41:17 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:20:04.419 01:41:17 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:04.419 01:41:17 -- target/tls.sh@28 -- # bdevperf_pid=3803666 00:20:04.419 01:41:17 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:04.419 01:41:17 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:04.419 01:41:17 -- target/tls.sh@31 -- # waitforlisten 3803666 /var/tmp/bdevperf.sock 00:20:04.419 01:41:17 -- common/autotest_common.sh@819 -- # '[' -z 3803666 ']' 00:20:04.419 01:41:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:04.419 01:41:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:04.419 01:41:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
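The `format_interchange_psk` trace earlier in this run (`gzip -1 -c | tail -c8 | head -c4`, then `base64`) builds the NVMe TLS PSK interchange string. Below is a minimal standalone sketch of what that pipeline computes, assuming GNU `gzip`/`base64`; it simplifies the byte-safe `/dev/fd` plumbing the real helper uses, and works here only because this key's CRC bytes contain no trailing newline for `$( )` to strip.

```shell
#!/usr/bin/env bash
# Sketch of the format_interchange_psk pipeline traced above (assumption:
# GNU coreutils; the real tls.sh wires these steps through /dev/fd instead).
key=00112233445566778899aabbccddeeff   # configured key material (hex string)
hash=01                                # hash identifier seen in the trace

# gzip's 8-byte trailer is CRC-32 of the input followed by the input length,
# both little-endian, so `tail -c8 | head -c4` extracts CRC-32($key) cheaply.
crc=$(printf '%s' "$key" | gzip -1 -c | tail -c8 | head -c4)

# Interchange format: "NVMeTLSkey-1:<hash>:" + base64(key || crc32) + ":"
psk="NVMeTLSkey-1:${hash}:$(printf '%s%s' "$key" "$crc" | base64):"
echo "$psk"
# -> NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:
```

The result matches the key1 value logged above, which is then written to `key1.txt`, `chmod 0600`'d, and handed to the target via `nvmf_subsystem_add_host --psk`.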
00:20:04.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:04.419 01:41:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:04.419 01:41:17 -- common/autotest_common.sh@10 -- # set +x 00:20:04.678 [2024-07-23 01:41:17.552777] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:20:04.678 [2024-07-23 01:41:17.552851] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3803666 ] 00:20:04.678 EAL: No free 2048 kB hugepages reported on node 1 00:20:04.678 [2024-07-23 01:41:17.610081] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:04.678 [2024-07-23 01:41:17.691803] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:04.936 01:41:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:04.936 01:41:17 -- common/autotest_common.sh@852 -- # return 0 00:20:04.936 01:41:17 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:04.936 [2024-07-23 01:41:18.028719] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:05.194 TLSTESTn1 00:20:05.194 01:41:18 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:05.194 Running I/O for 10 seconds... 
00:20:15.178 00:20:15.178 Latency(us) 00:20:15.178 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:15.178 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:15.178 Verification LBA range: start 0x0 length 0x2000 00:20:15.178 TLSTESTn1 : 10.03 2282.37 8.92 0.00 0.00 56007.14 9320.68 60196.03 00:20:15.178 =================================================================================================================== 00:20:15.178 Total : 2282.37 8.92 0.00 0.00 56007.14 9320.68 60196.03 00:20:15.178 0 00:20:15.438 01:41:28 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:15.438 01:41:28 -- target/tls.sh@45 -- # killprocess 3803666 00:20:15.438 01:41:28 -- common/autotest_common.sh@926 -- # '[' -z 3803666 ']' 00:20:15.438 01:41:28 -- common/autotest_common.sh@930 -- # kill -0 3803666 00:20:15.438 01:41:28 -- common/autotest_common.sh@931 -- # uname 00:20:15.438 01:41:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:15.438 01:41:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3803666 00:20:15.438 01:41:28 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:15.438 01:41:28 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:15.438 01:41:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3803666' 00:20:15.438 killing process with pid 3803666 00:20:15.438 01:41:28 -- common/autotest_common.sh@945 -- # kill 3803666 00:20:15.438 Received shutdown signal, test time was about 10.000000 seconds 00:20:15.438 00:20:15.438 Latency(us) 00:20:15.438 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:15.438 =================================================================================================================== 00:20:15.438 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:15.438 01:41:28 -- common/autotest_common.sh@950 -- # wait 3803666 00:20:15.438 01:41:28 -- 
target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:20:15.438 01:41:28 -- common/autotest_common.sh@640 -- # local es=0 00:20:15.438 01:41:28 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:20:15.699 01:41:28 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:20:15.699 01:41:28 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:15.699 01:41:28 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:20:15.699 01:41:28 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:15.699 01:41:28 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:20:15.699 01:41:28 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:15.699 01:41:28 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:15.699 01:41:28 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:15.699 01:41:28 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt' 00:20:15.699 01:41:28 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:15.699 01:41:28 -- target/tls.sh@28 -- # bdevperf_pid=3805012 00:20:15.699 01:41:28 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:15.699 01:41:28 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:15.699 01:41:28 -- target/tls.sh@31 -- # waitforlisten 3805012 /var/tmp/bdevperf.sock 00:20:15.699 01:41:28 -- common/autotest_common.sh@819 -- # '[' -z 3805012 ']' 00:20:15.699 01:41:28 -- 
common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:15.699 01:41:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:15.699 01:41:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:15.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:15.699 01:41:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:15.699 01:41:28 -- common/autotest_common.sh@10 -- # set +x 00:20:15.699 [2024-07-23 01:41:28.582725] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:20:15.699 [2024-07-23 01:41:28.582807] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3805012 ] 00:20:15.699 EAL: No free 2048 kB hugepages reported on node 1 00:20:15.699 [2024-07-23 01:41:28.641062] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:15.699 [2024-07-23 01:41:28.720331] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:16.665 01:41:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:16.665 01:41:29 -- common/autotest_common.sh@852 -- # return 0 00:20:16.665 01:41:29 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:20:16.923 [2024-07-23 01:41:29.814371] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:16.923 [2024-07-23 01:41:29.819696] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 
428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:16.923 [2024-07-23 01:41:29.820260] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9d17f0 (107): Transport endpoint is not connected 00:20:16.923 [2024-07-23 01:41:29.821249] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9d17f0 (9): Bad file descriptor 00:20:16.923 [2024-07-23 01:41:29.822247] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:16.923 [2024-07-23 01:41:29.822267] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:16.923 [2024-07-23 01:41:29.822280] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:16.923 request: 00:20:16.923 { 00:20:16.923 "name": "TLSTEST", 00:20:16.923 "trtype": "tcp", 00:20:16.923 "traddr": "10.0.0.2", 00:20:16.923 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:16.923 "adrfam": "ipv4", 00:20:16.923 "trsvcid": "4420", 00:20:16.923 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:16.923 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt", 00:20:16.923 "method": "bdev_nvme_attach_controller", 00:20:16.923 "req_id": 1 00:20:16.923 } 00:20:16.923 Got JSON-RPC error response 00:20:16.923 response: 00:20:16.923 { 00:20:16.924 "code": -32602, 00:20:16.924 "message": "Invalid parameters" 00:20:16.924 } 00:20:16.924 01:41:29 -- target/tls.sh@36 -- # killprocess 3805012 00:20:16.924 01:41:29 -- common/autotest_common.sh@926 -- # '[' -z 3805012 ']' 00:20:16.924 01:41:29 -- common/autotest_common.sh@930 -- # kill -0 3805012 00:20:16.924 01:41:29 -- common/autotest_common.sh@931 -- # uname 00:20:16.924 01:41:29 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:16.924 01:41:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3805012 00:20:16.924 01:41:29 -- 
common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:16.924 01:41:29 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:16.924 01:41:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3805012' 00:20:16.924 killing process with pid 3805012 00:20:16.924 01:41:29 -- common/autotest_common.sh@945 -- # kill 3805012 00:20:16.924 Received shutdown signal, test time was about 10.000000 seconds 00:20:16.924 00:20:16.924 Latency(us) 00:20:16.924 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:16.924 =================================================================================================================== 00:20:16.924 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:16.924 01:41:29 -- common/autotest_common.sh@950 -- # wait 3805012 00:20:17.184 01:41:30 -- target/tls.sh@37 -- # return 1 00:20:17.184 01:41:30 -- common/autotest_common.sh@643 -- # es=1 00:20:17.184 01:41:30 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:17.184 01:41:30 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:17.184 01:41:30 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:17.184 01:41:30 -- target/tls.sh@158 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:17.184 01:41:30 -- common/autotest_common.sh@640 -- # local es=0 00:20:17.184 01:41:30 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:17.184 01:41:30 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:20:17.184 01:41:30 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:17.184 01:41:30 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:20:17.184 01:41:30 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" 
in 00:20:17.184 01:41:30 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:17.184 01:41:30 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:17.184 01:41:30 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:17.184 01:41:30 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:20:17.184 01:41:30 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:20:17.184 01:41:30 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:17.184 01:41:30 -- target/tls.sh@28 -- # bdevperf_pid=3805172 00:20:17.184 01:41:30 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:17.184 01:41:30 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:17.184 01:41:30 -- target/tls.sh@31 -- # waitforlisten 3805172 /var/tmp/bdevperf.sock 00:20:17.184 01:41:30 -- common/autotest_common.sh@819 -- # '[' -z 3805172 ']' 00:20:17.184 01:41:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:17.184 01:41:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:17.185 01:41:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:17.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:17.185 01:41:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:17.185 01:41:30 -- common/autotest_common.sh@10 -- # set +x 00:20:17.185 [2024-07-23 01:41:30.124960] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:20:17.185 [2024-07-23 01:41:30.125040] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3805172 ] 00:20:17.185 EAL: No free 2048 kB hugepages reported on node 1 00:20:17.185 [2024-07-23 01:41:30.184184] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:17.185 [2024-07-23 01:41:30.271856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:18.125 01:41:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:18.125 01:41:31 -- common/autotest_common.sh@852 -- # return 0 00:20:18.125 01:41:31 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:18.385 [2024-07-23 01:41:31.281408] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:18.385 [2024-07-23 01:41:31.292441] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:18.385 [2024-07-23 01:41:31.292477] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:18.385 [2024-07-23 01:41:31.292531] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:18.385 [2024-07-23 01:41:31.293407] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc9b7f0 (107): Transport endpoint is not connected 00:20:18.385 [2024-07-23 
01:41:31.294398] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc9b7f0 (9): Bad file descriptor 00:20:18.385 [2024-07-23 01:41:31.295397] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:18.385 [2024-07-23 01:41:31.295415] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:18.385 [2024-07-23 01:41:31.295428] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:18.385 request: 00:20:18.385 { 00:20:18.385 "name": "TLSTEST", 00:20:18.385 "trtype": "tcp", 00:20:18.385 "traddr": "10.0.0.2", 00:20:18.385 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:18.385 "adrfam": "ipv4", 00:20:18.385 "trsvcid": "4420", 00:20:18.385 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:18.385 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt", 00:20:18.385 "method": "bdev_nvme_attach_controller", 00:20:18.385 "req_id": 1 00:20:18.385 } 00:20:18.385 Got JSON-RPC error response 00:20:18.385 response: 00:20:18.385 { 00:20:18.385 "code": -32602, 00:20:18.385 "message": "Invalid parameters" 00:20:18.385 } 00:20:18.385 01:41:31 -- target/tls.sh@36 -- # killprocess 3805172 00:20:18.385 01:41:31 -- common/autotest_common.sh@926 -- # '[' -z 3805172 ']' 00:20:18.385 01:41:31 -- common/autotest_common.sh@930 -- # kill -0 3805172 00:20:18.385 01:41:31 -- common/autotest_common.sh@931 -- # uname 00:20:18.385 01:41:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:18.385 01:41:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3805172 00:20:18.385 01:41:31 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:18.385 01:41:31 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:18.385 01:41:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3805172' 00:20:18.385 killing process with pid 3805172 00:20:18.385 
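The failure above hinges on the PSK identity string the target computes from the connecting host's NQNs ("NVMe0R01 &lt;hostnqn&gt; &lt;subnqn&gt;", per the `tcp_sock_get_key`/`posix_sock_psk_find_session_server_cb` errors): only host1 was registered against cnode1 with key1, so host2's identity finds no PSK. A hedged illustration of that lookup, not SPDK code — the associative array and the `key1.txt` value stand in for the target's registered-host table:

```shell
#!/usr/bin/env bash
# Illustration of the identity check behind the failure above (assumption:
# a plain table keyed by the identity string; SPDK's internal lookup differs).
declare -A registered_psks
# Only host1 was added to cnode1 (nvmf_subsystem_add_host --psk key1.txt):
registered_psks["NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode1"]=key1.txt

# The run above connected as host2, so the derived identity has no entry:
identity="NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1"
if [ -z "${registered_psks[$identity]}" ]; then
  echo "Could not find PSK for identity: $identity"
fi
```

The same logic explains the later wrong-subnqn case: connecting to cnode2 with host1's key yields an identity ending in `cnode2` that was never registered either.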
01:41:31 -- common/autotest_common.sh@945 -- # kill 3805172 00:20:18.385 Received shutdown signal, test time was about 10.000000 seconds 00:20:18.385 00:20:18.385 Latency(us) 00:20:18.385 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:18.385 =================================================================================================================== 00:20:18.385 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:18.385 01:41:31 -- common/autotest_common.sh@950 -- # wait 3805172 00:20:18.644 01:41:31 -- target/tls.sh@37 -- # return 1 00:20:18.644 01:41:31 -- common/autotest_common.sh@643 -- # es=1 00:20:18.644 01:41:31 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:18.644 01:41:31 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:18.644 01:41:31 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:18.644 01:41:31 -- target/tls.sh@161 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:18.644 01:41:31 -- common/autotest_common.sh@640 -- # local es=0 00:20:18.644 01:41:31 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:18.644 01:41:31 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:20:18.644 01:41:31 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:18.644 01:41:31 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:20:18.644 01:41:31 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:18.644 01:41:31 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:18.644 01:41:31 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:18.644 01:41:31 -- 
target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:20:18.644 01:41:31 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:18.644 01:41:31 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:20:18.644 01:41:31 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:18.644 01:41:31 -- target/tls.sh@28 -- # bdevperf_pid=3805350 00:20:18.644 01:41:31 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:18.644 01:41:31 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:18.644 01:41:31 -- target/tls.sh@31 -- # waitforlisten 3805350 /var/tmp/bdevperf.sock 00:20:18.644 01:41:31 -- common/autotest_common.sh@819 -- # '[' -z 3805350 ']' 00:20:18.644 01:41:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:18.644 01:41:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:18.644 01:41:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:18.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:18.644 01:41:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:18.644 01:41:31 -- common/autotest_common.sh@10 -- # set +x 00:20:18.644 [2024-07-23 01:41:31.604024] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:20:18.645 [2024-07-23 01:41:31.604113] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3805350 ] 00:20:18.645 EAL: No free 2048 kB hugepages reported on node 1 00:20:18.645 [2024-07-23 01:41:31.665513] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:18.904 [2024-07-23 01:41:31.748460] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:19.469 01:41:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:19.469 01:41:32 -- common/autotest_common.sh@852 -- # return 0 00:20:19.469 01:41:32 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:19.727 [2024-07-23 01:41:32.774059] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:19.727 [2024-07-23 01:41:32.786342] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:19.727 [2024-07-23 01:41:32.786376] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:19.727 [2024-07-23 01:41:32.786430] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:19.727 [2024-07-23 01:41:32.787024] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d47f0 (107): Transport endpoint is not connected 00:20:19.727 [2024-07-23 
01:41:32.788014] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14d47f0 (9): Bad file descriptor 00:20:19.727 [2024-07-23 01:41:32.789012] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:20:19.727 [2024-07-23 01:41:32.789031] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:19.727 [2024-07-23 01:41:32.789059] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:20:19.727 request: 00:20:19.727 { 00:20:19.727 "name": "TLSTEST", 00:20:19.727 "trtype": "tcp", 00:20:19.727 "traddr": "10.0.0.2", 00:20:19.727 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:19.727 "adrfam": "ipv4", 00:20:19.727 "trsvcid": "4420", 00:20:19.727 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:19.727 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt", 00:20:19.727 "method": "bdev_nvme_attach_controller", 00:20:19.727 "req_id": 1 00:20:19.727 } 00:20:19.727 Got JSON-RPC error response 00:20:19.727 response: 00:20:19.727 { 00:20:19.727 "code": -32602, 00:20:19.727 "message": "Invalid parameters" 00:20:19.727 } 00:20:19.727 01:41:32 -- target/tls.sh@36 -- # killprocess 3805350 00:20:19.727 01:41:32 -- common/autotest_common.sh@926 -- # '[' -z 3805350 ']' 00:20:19.727 01:41:32 -- common/autotest_common.sh@930 -- # kill -0 3805350 00:20:19.727 01:41:32 -- common/autotest_common.sh@931 -- # uname 00:20:19.727 01:41:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:19.728 01:41:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3805350 00:20:19.986 01:41:32 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:19.986 01:41:32 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:19.986 01:41:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3805350' 00:20:19.986 killing process with pid 3805350 00:20:19.986 
01:41:32 -- common/autotest_common.sh@945 -- # kill 3805350 00:20:19.986 Received shutdown signal, test time was about 10.000000 seconds 00:20:19.986 00:20:19.986 Latency(us) 00:20:19.986 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:19.986 =================================================================================================================== 00:20:19.986 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:19.986 01:41:32 -- common/autotest_common.sh@950 -- # wait 3805350 00:20:19.986 01:41:33 -- target/tls.sh@37 -- # return 1 00:20:19.986 01:41:33 -- common/autotest_common.sh@643 -- # es=1 00:20:19.986 01:41:33 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:19.986 01:41:33 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:19.986 01:41:33 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:19.986 01:41:33 -- target/tls.sh@164 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:19.986 01:41:33 -- common/autotest_common.sh@640 -- # local es=0 00:20:19.986 01:41:33 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:19.986 01:41:33 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:20:19.986 01:41:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:19.986 01:41:33 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:20:19.986 01:41:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:19.986 01:41:33 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:19.986 01:41:33 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:19.986 01:41:33 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:19.986 01:41:33 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:19.986 01:41:33 -- target/tls.sh@23 -- # psk= 00:20:19.986 01:41:33 -- 
target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:19.986 01:41:33 -- target/tls.sh@28 -- # bdevperf_pid=3805592 00:20:19.986 01:41:33 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:19.986 01:41:33 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:19.986 01:41:33 -- target/tls.sh@31 -- # waitforlisten 3805592 /var/tmp/bdevperf.sock 00:20:19.986 01:41:33 -- common/autotest_common.sh@819 -- # '[' -z 3805592 ']' 00:20:19.986 01:41:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:19.986 01:41:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:19.986 01:41:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:19.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:19.986 01:41:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:19.986 01:41:33 -- common/autotest_common.sh@10 -- # set +x 00:20:20.246 [2024-07-23 01:41:33.097924] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:20:20.246 [2024-07-23 01:41:33.097999] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3805592 ] 00:20:20.246 EAL: No free 2048 kB hugepages reported on node 1 00:20:20.246 [2024-07-23 01:41:33.155206] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:20.246 [2024-07-23 01:41:33.234506] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:21.182 01:41:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:21.182 01:41:34 -- common/autotest_common.sh@852 -- # return 0 00:20:21.182 01:41:34 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:21.182 [2024-07-23 01:41:34.240125] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:21.182 [2024-07-23 01:41:34.242029] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x176fec0 (9): Bad file descriptor 00:20:21.182 [2024-07-23 01:41:34.243025] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:21.182 [2024-07-23 01:41:34.243044] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:21.182 [2024-07-23 01:41:34.243078] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:21.182 request: 00:20:21.182 { 00:20:21.182 "name": "TLSTEST", 00:20:21.182 "trtype": "tcp", 00:20:21.182 "traddr": "10.0.0.2", 00:20:21.182 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:21.182 "adrfam": "ipv4", 00:20:21.182 "trsvcid": "4420", 00:20:21.182 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:21.182 "method": "bdev_nvme_attach_controller", 00:20:21.182 "req_id": 1 00:20:21.182 } 00:20:21.182 Got JSON-RPC error response 00:20:21.182 response: 00:20:21.182 { 00:20:21.182 "code": -32602, 00:20:21.182 "message": "Invalid parameters" 00:20:21.182 } 00:20:21.182 01:41:34 -- target/tls.sh@36 -- # killprocess 3805592 00:20:21.182 01:41:34 -- common/autotest_common.sh@926 -- # '[' -z 3805592 ']' 00:20:21.182 01:41:34 -- common/autotest_common.sh@930 -- # kill -0 3805592 00:20:21.182 01:41:34 -- common/autotest_common.sh@931 -- # uname 00:20:21.182 01:41:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:21.182 01:41:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3805592 00:20:21.441 01:41:34 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:21.441 01:41:34 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:21.441 01:41:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3805592' 00:20:21.441 killing process with pid 3805592 00:20:21.441 01:41:34 -- common/autotest_common.sh@945 -- # kill 3805592 00:20:21.441 Received shutdown signal, test time was about 10.000000 seconds 00:20:21.441 00:20:21.441 Latency(us) 00:20:21.441 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:21.441 =================================================================================================================== 00:20:21.441 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:21.441 01:41:34 -- common/autotest_common.sh@950 -- # wait 3805592 00:20:21.441 01:41:34 -- target/tls.sh@37 -- # return 1 00:20:21.441 01:41:34 -- 
common/autotest_common.sh@643 -- # es=1 00:20:21.441 01:41:34 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:21.441 01:41:34 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:21.441 01:41:34 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:21.441 01:41:34 -- target/tls.sh@167 -- # killprocess 3801832 00:20:21.441 01:41:34 -- common/autotest_common.sh@926 -- # '[' -z 3801832 ']' 00:20:21.441 01:41:34 -- common/autotest_common.sh@930 -- # kill -0 3801832 00:20:21.441 01:41:34 -- common/autotest_common.sh@931 -- # uname 00:20:21.441 01:41:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:21.441 01:41:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3801832 00:20:21.441 01:41:34 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:20:21.441 01:41:34 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:20:21.441 01:41:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3801832' 00:20:21.441 killing process with pid 3801832 00:20:21.441 01:41:34 -- common/autotest_common.sh@945 -- # kill 3801832 00:20:21.441 01:41:34 -- common/autotest_common.sh@950 -- # wait 3801832 00:20:21.701 01:41:34 -- target/tls.sh@168 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 02 00:20:21.701 01:41:34 -- target/tls.sh@49 -- # local key hash crc 00:20:21.701 01:41:34 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:20:21.701 01:41:34 -- target/tls.sh@51 -- # hash=02 00:20:21.701 01:41:34 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff0011223344556677 00:20:21.701 01:41:34 -- target/tls.sh@52 -- # gzip -1 -c 00:20:21.701 01:41:34 -- target/tls.sh@52 -- # tail -c8 00:20:21.701 01:41:34 -- target/tls.sh@52 -- # head -c 4 00:20:21.701 01:41:34 -- target/tls.sh@52 -- # crc='�e�'\''' 00:20:21.701 01:41:34 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:20:21.701 01:41:34 -- target/tls.sh@54 -- # echo -n 
'00112233445566778899aabbccddeeff0011223344556677�e�'\''' 00:20:21.701 01:41:34 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:21.701 01:41:34 -- target/tls.sh@168 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:21.701 01:41:34 -- target/tls.sh@169 -- # key_long_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:21.701 01:41:34 -- target/tls.sh@170 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:21.701 01:41:34 -- target/tls.sh@171 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:21.701 01:41:34 -- target/tls.sh@172 -- # nvmfappstart -m 0x2 00:20:21.701 01:41:34 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:21.701 01:41:34 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:21.701 01:41:34 -- common/autotest_common.sh@10 -- # set +x 00:20:21.701 01:41:34 -- nvmf/common.sh@469 -- # nvmfpid=3805767 00:20:21.701 01:41:34 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:21.701 01:41:34 -- nvmf/common.sh@470 -- # waitforlisten 3805767 00:20:21.701 01:41:34 -- common/autotest_common.sh@819 -- # '[' -z 3805767 ']' 00:20:21.701 01:41:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:21.701 01:41:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:21.701 01:41:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:21.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
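The format_interchange_psk steps traced above (echo -n the key, gzip -1 for its CRC-32 trailer, tail -c8 | head -c 4 to slice out the checksum, then base64 over key+CRC) can be reproduced outside tls.sh with a short sketch. The helper name below is ours, not part of the test scripts; gzip's trailer CRC and Python's zlib.crc32 are the same CRC-32:

```python
import base64
import struct
import zlib

def format_interchange_psk(key_text: str, hash_id: str) -> str:
    """Rebuild the NVMe TLS interchange PSK the way tls.sh derives it:
    append the little-endian CRC-32 of the configured key text
    (gzip's trailer CRC equals zlib.crc32), then base64 key+CRC."""
    key = key_text.encode("ascii")
    crc = struct.pack("<I", zlib.crc32(key))  # gzip trailer stores CRC-32 little-endian
    return "NVMeTLSkey-1:{}:{}:".format(
        hash_id, base64.b64encode(key + crc).decode("ascii"))

key_long = format_interchange_psk(
    "00112233445566778899aabbccddeeff0011223344556677", "02")
print(key_long)
# -> NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:
```

The non-printable crc bytes echoed in the log above are exactly the four CRC-32 bytes (0xC1 0x65 0xCD 0x27) that base64 renders as the trailing "wWXNJw==".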
00:20:21.701 01:41:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:21.701 01:41:34 -- common/autotest_common.sh@10 -- # set +x 00:20:21.961 [2024-07-23 01:41:34.829818] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:20:21.961 [2024-07-23 01:41:34.829912] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:21.961 EAL: No free 2048 kB hugepages reported on node 1 00:20:21.961 [2024-07-23 01:41:34.894569] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:21.961 [2024-07-23 01:41:34.979381] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:21.961 [2024-07-23 01:41:34.979535] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:21.961 [2024-07-23 01:41:34.979550] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:21.961 [2024-07-23 01:41:34.979561] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:21.961 [2024-07-23 01:41:34.979587] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:22.897 01:41:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:22.897 01:41:35 -- common/autotest_common.sh@852 -- # return 0 00:20:22.897 01:41:35 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:22.897 01:41:35 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:22.897 01:41:35 -- common/autotest_common.sh@10 -- # set +x 00:20:22.897 01:41:35 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:22.897 01:41:35 -- target/tls.sh@174 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:22.897 01:41:35 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:22.897 01:41:35 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:22.897 [2024-07-23 01:41:35.991049] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:23.157 01:41:36 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:23.157 01:41:36 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:23.416 [2024-07-23 01:41:36.472334] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:23.416 [2024-07-23 01:41:36.472552] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:23.416 01:41:36 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:23.674 malloc0 00:20:23.674 01:41:36 -- target/tls.sh@65 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:23.932 01:41:37 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:24.191 01:41:37 -- target/tls.sh@176 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:24.191 01:41:37 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:24.191 01:41:37 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:24.191 01:41:37 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:24.191 01:41:37 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt' 00:20:24.191 01:41:37 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:24.191 01:41:37 -- target/tls.sh@28 -- # bdevperf_pid=3806159 00:20:24.191 01:41:37 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:24.191 01:41:37 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:24.191 01:41:37 -- target/tls.sh@31 -- # waitforlisten 3806159 /var/tmp/bdevperf.sock 00:20:24.191 01:41:37 -- common/autotest_common.sh@819 -- # '[' -z 3806159 ']' 00:20:24.191 01:41:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:24.191 01:41:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:24.191 01:41:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:20:24.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:24.191 01:41:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:24.191 01:41:37 -- common/autotest_common.sh@10 -- # set +x 00:20:24.191 [2024-07-23 01:41:37.288077] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:20:24.191 [2024-07-23 01:41:37.288174] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3806159 ] 00:20:24.450 EAL: No free 2048 kB hugepages reported on node 1 00:20:24.450 [2024-07-23 01:41:37.348951] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:24.450 [2024-07-23 01:41:37.440975] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:25.387 01:41:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:25.387 01:41:38 -- common/autotest_common.sh@852 -- # return 0 00:20:25.388 01:41:38 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:25.647 [2024-07-23 01:41:38.509045] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:25.647 TLSTESTn1 00:20:25.647 01:41:38 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:25.647 Running I/O for 10 seconds... 
00:20:37.855 00:20:37.855 Latency(us) 00:20:37.855 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:37.855 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:37.855 Verification LBA range: start 0x0 length 0x2000 00:20:37.855 TLSTESTn1 : 10.03 2292.18 8.95 0.00 0.00 55760.35 4878.79 58642.58 00:20:37.855 =================================================================================================================== 00:20:37.855 Total : 2292.18 8.95 0.00 0.00 55760.35 4878.79 58642.58 00:20:37.855 0 00:20:37.855 01:41:48 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:37.855 01:41:48 -- target/tls.sh@45 -- # killprocess 3806159 00:20:37.855 01:41:48 -- common/autotest_common.sh@926 -- # '[' -z 3806159 ']' 00:20:37.855 01:41:48 -- common/autotest_common.sh@930 -- # kill -0 3806159 00:20:37.855 01:41:48 -- common/autotest_common.sh@931 -- # uname 00:20:37.855 01:41:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:37.855 01:41:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3806159 00:20:37.855 01:41:48 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:37.855 01:41:48 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:37.855 01:41:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3806159' 00:20:37.855 killing process with pid 3806159 00:20:37.855 01:41:48 -- common/autotest_common.sh@945 -- # kill 3806159 00:20:37.855 Received shutdown signal, test time was about 10.000000 seconds 00:20:37.855 00:20:37.855 Latency(us) 00:20:37.855 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:37.855 =================================================================================================================== 00:20:37.855 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:37.855 01:41:48 -- common/autotest_common.sh@950 -- # wait 3806159 00:20:37.855 01:41:49 -- 
target/tls.sh@179 -- # chmod 0666 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:37.855 01:41:49 -- target/tls.sh@180 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:37.855 01:41:49 -- common/autotest_common.sh@640 -- # local es=0 00:20:37.855 01:41:49 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:37.855 01:41:49 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:20:37.855 01:41:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:37.855 01:41:49 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:20:37.855 01:41:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:37.855 01:41:49 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:37.855 01:41:49 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:37.855 01:41:49 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:37.855 01:41:49 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:37.855 01:41:49 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt' 00:20:37.855 01:41:49 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:37.855 01:41:49 -- target/tls.sh@28 -- # bdevperf_pid=3807552 00:20:37.855 01:41:49 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:37.855 01:41:49 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:37.855 01:41:49 -- target/tls.sh@31 
-- # waitforlisten 3807552 /var/tmp/bdevperf.sock 00:20:37.855 01:41:49 -- common/autotest_common.sh@819 -- # '[' -z 3807552 ']' 00:20:37.855 01:41:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:37.855 01:41:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:37.855 01:41:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:37.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:37.855 01:41:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:37.855 01:41:49 -- common/autotest_common.sh@10 -- # set +x 00:20:37.855 [2024-07-23 01:41:49.056164] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:20:37.855 [2024-07-23 01:41:49.056244] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3807552 ] 00:20:37.855 EAL: No free 2048 kB hugepages reported on node 1 00:20:37.855 [2024-07-23 01:41:49.112970] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:37.855 [2024-07-23 01:41:49.191371] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:37.855 01:41:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:37.855 01:41:49 -- common/autotest_common.sh@852 -- # return 0 00:20:37.855 01:41:49 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:37.855 [2024-07-23 01:41:50.203558] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support 
is considered experimental 00:20:37.855 [2024-07-23 01:41:50.203630] bdev_nvme_rpc.c: 336:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:20:37.855 request: 00:20:37.855 { 00:20:37.855 "name": "TLSTEST", 00:20:37.855 "trtype": "tcp", 00:20:37.855 "traddr": "10.0.0.2", 00:20:37.855 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:37.855 "adrfam": "ipv4", 00:20:37.855 "trsvcid": "4420", 00:20:37.855 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:37.855 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:20:37.855 "method": "bdev_nvme_attach_controller", 00:20:37.855 "req_id": 1 00:20:37.855 } 00:20:37.855 Got JSON-RPC error response 00:20:37.855 response: 00:20:37.855 { 00:20:37.855 "code": -22, 00:20:37.855 "message": "Could not retrieve PSK from file: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:20:37.855 } 00:20:37.855 01:41:50 -- target/tls.sh@36 -- # killprocess 3807552 00:20:37.855 01:41:50 -- common/autotest_common.sh@926 -- # '[' -z 3807552 ']' 00:20:37.855 01:41:50 -- common/autotest_common.sh@930 -- # kill -0 3807552 00:20:37.855 01:41:50 -- common/autotest_common.sh@931 -- # uname 00:20:37.855 01:41:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:37.855 01:41:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3807552 00:20:37.855 01:41:50 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:37.855 01:41:50 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:37.855 01:41:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3807552' 00:20:37.855 killing process with pid 3807552 00:20:37.855 01:41:50 -- common/autotest_common.sh@945 -- # kill 3807552 00:20:37.855 Received shutdown signal, test time was about 10.000000 seconds 00:20:37.855 00:20:37.855 Latency(us) 00:20:37.855 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:37.855 
=================================================================================================================== 00:20:37.855 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:37.855 01:41:50 -- common/autotest_common.sh@950 -- # wait 3807552 00:20:37.855 01:41:50 -- target/tls.sh@37 -- # return 1 00:20:37.855 01:41:50 -- common/autotest_common.sh@643 -- # es=1 00:20:37.855 01:41:50 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:37.855 01:41:50 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:37.855 01:41:50 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:37.855 01:41:50 -- target/tls.sh@183 -- # killprocess 3805767 00:20:37.855 01:41:50 -- common/autotest_common.sh@926 -- # '[' -z 3805767 ']' 00:20:37.856 01:41:50 -- common/autotest_common.sh@930 -- # kill -0 3805767 00:20:37.856 01:41:50 -- common/autotest_common.sh@931 -- # uname 00:20:37.856 01:41:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:37.856 01:41:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3805767 00:20:37.856 01:41:50 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:20:37.856 01:41:50 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:20:37.856 01:41:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3805767' 00:20:37.856 killing process with pid 3805767 00:20:37.856 01:41:50 -- common/autotest_common.sh@945 -- # kill 3805767 00:20:37.856 01:41:50 -- common/autotest_common.sh@950 -- # wait 3805767 00:20:37.856 01:41:50 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:20:37.856 01:41:50 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:37.856 01:41:50 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:37.856 01:41:50 -- common/autotest_common.sh@10 -- # set +x 00:20:37.856 01:41:50 -- nvmf/common.sh@469 -- # nvmfpid=3807718 00:20:37.856 01:41:50 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:37.856 01:41:50 -- nvmf/common.sh@470 -- # waitforlisten 3807718 00:20:37.856 01:41:50 -- common/autotest_common.sh@819 -- # '[' -z 3807718 ']' 00:20:37.856 01:41:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:37.856 01:41:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:37.856 01:41:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:37.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:37.856 01:41:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:37.856 01:41:50 -- common/autotest_common.sh@10 -- # set +x 00:20:37.856 [2024-07-23 01:41:50.774867] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:20:37.856 [2024-07-23 01:41:50.774962] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:37.856 EAL: No free 2048 kB hugepages reported on node 1 00:20:37.856 [2024-07-23 01:41:50.842063] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:37.856 [2024-07-23 01:41:50.927444] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:37.856 [2024-07-23 01:41:50.927643] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:37.856 [2024-07-23 01:41:50.927662] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:37.856 [2024-07-23 01:41:50.927674] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:37.856 [2024-07-23 01:41:50.927704] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:38.791 01:41:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:38.791 01:41:51 -- common/autotest_common.sh@852 -- # return 0 00:20:38.791 01:41:51 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:38.791 01:41:51 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:38.791 01:41:51 -- common/autotest_common.sh@10 -- # set +x 00:20:38.791 01:41:51 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:38.791 01:41:51 -- target/tls.sh@186 -- # NOT setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:38.791 01:41:51 -- common/autotest_common.sh@640 -- # local es=0 00:20:38.791 01:41:51 -- common/autotest_common.sh@642 -- # valid_exec_arg setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:38.791 01:41:51 -- common/autotest_common.sh@628 -- # local arg=setup_nvmf_tgt 00:20:38.791 01:41:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:38.791 01:41:51 -- common/autotest_common.sh@632 -- # type -t setup_nvmf_tgt 00:20:38.791 01:41:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:38.791 01:41:51 -- common/autotest_common.sh@643 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:38.791 01:41:51 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:38.791 01:41:51 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:39.049 [2024-07-23 01:41:51.999396] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:39.049 01:41:52 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:39.306 01:41:52 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:39.564 [2024-07-23 01:41:52.456596] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:39.564 [2024-07-23 01:41:52.456816] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:39.564 01:41:52 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:39.827 malloc0 00:20:39.827 01:41:52 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:40.123 01:41:52 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:40.123 [2024-07-23 01:41:53.190110] tcp.c:3549:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:20:40.123 [2024-07-23 01:41:53.190161] tcp.c:3618:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:20:40.123 [2024-07-23 01:41:53.190186] subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:20:40.123 request: 00:20:40.123 { 00:20:40.123 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:40.123 "host": "nqn.2016-06.io.spdk:host1", 00:20:40.123 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:20:40.123 "method": "nvmf_subsystem_add_host", 00:20:40.123 "req_id": 1 00:20:40.123 } 00:20:40.123 Got JSON-RPC error response 00:20:40.123 response: 00:20:40.123 { 00:20:40.123 "code": -32603, 00:20:40.123 "message": "Internal error" 
00:20:40.123 } 00:20:40.386 01:41:53 -- common/autotest_common.sh@643 -- # es=1 00:20:40.386 01:41:53 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:40.386 01:41:53 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:40.386 01:41:53 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:40.386 01:41:53 -- target/tls.sh@189 -- # killprocess 3807718 00:20:40.386 01:41:53 -- common/autotest_common.sh@926 -- # '[' -z 3807718 ']' 00:20:40.386 01:41:53 -- common/autotest_common.sh@930 -- # kill -0 3807718 00:20:40.386 01:41:53 -- common/autotest_common.sh@931 -- # uname 00:20:40.386 01:41:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:40.386 01:41:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3807718 00:20:40.386 01:41:53 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:20:40.386 01:41:53 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:20:40.386 01:41:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3807718' 00:20:40.386 killing process with pid 3807718 00:20:40.386 01:41:53 -- common/autotest_common.sh@945 -- # kill 3807718 00:20:40.386 01:41:53 -- common/autotest_common.sh@950 -- # wait 3807718 00:20:40.646 01:41:53 -- target/tls.sh@190 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:40.646 01:41:53 -- target/tls.sh@193 -- # nvmfappstart -m 0x2 00:20:40.646 01:41:53 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:40.646 01:41:53 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:40.646 01:41:53 -- common/autotest_common.sh@10 -- # set +x 00:20:40.646 01:41:53 -- nvmf/common.sh@469 -- # nvmfpid=3808143 00:20:40.646 01:41:53 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:40.646 01:41:53 -- nvmf/common.sh@470 -- # waitforlisten 3808143 00:20:40.646 01:41:53 -- 
common/autotest_common.sh@819 -- # '[' -z 3808143 ']' 00:20:40.646 01:41:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:40.646 01:41:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:40.646 01:41:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:40.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:40.646 01:41:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:40.646 01:41:53 -- common/autotest_common.sh@10 -- # set +x 00:20:40.646 [2024-07-23 01:41:53.548570] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:20:40.646 [2024-07-23 01:41:53.548654] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:40.646 EAL: No free 2048 kB hugepages reported on node 1 00:20:40.646 [2024-07-23 01:41:53.615688] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:40.646 [2024-07-23 01:41:53.703721] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:40.646 [2024-07-23 01:41:53.703880] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:40.646 [2024-07-23 01:41:53.703907] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:40.646 [2024-07-23 01:41:53.703922] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:40.646 [2024-07-23 01:41:53.703965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:41.581 01:41:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:41.581 01:41:54 -- common/autotest_common.sh@852 -- # return 0 00:20:41.581 01:41:54 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:41.581 01:41:54 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:41.581 01:41:54 -- common/autotest_common.sh@10 -- # set +x 00:20:41.581 01:41:54 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:41.581 01:41:54 -- target/tls.sh@194 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:41.581 01:41:54 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:41.581 01:41:54 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:41.839 [2024-07-23 01:41:54.754471] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:41.839 01:41:54 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:42.097 01:41:55 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:42.356 [2024-07-23 01:41:55.219736] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:42.356 [2024-07-23 01:41:55.219961] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:42.356 01:41:55 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:42.614 malloc0 00:20:42.614 01:41:55 -- target/tls.sh@65 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:42.871 01:41:55 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:43.129 01:41:56 -- target/tls.sh@197 -- # bdevperf_pid=3808444 00:20:43.129 01:41:56 -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:43.129 01:41:56 -- target/tls.sh@199 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:43.129 01:41:56 -- target/tls.sh@200 -- # waitforlisten 3808444 /var/tmp/bdevperf.sock 00:20:43.129 01:41:56 -- common/autotest_common.sh@819 -- # '[' -z 3808444 ']' 00:20:43.129 01:41:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:43.129 01:41:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:43.129 01:41:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:43.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:43.129 01:41:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:43.129 01:41:56 -- common/autotest_common.sh@10 -- # set +x 00:20:43.129 [2024-07-23 01:41:56.043034] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:20:43.129 [2024-07-23 01:41:56.043103] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3808444 ] 00:20:43.129 EAL: No free 2048 kB hugepages reported on node 1 00:20:43.129 [2024-07-23 01:41:56.099656] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:43.129 [2024-07-23 01:41:56.179971] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:44.063 01:41:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:44.063 01:41:56 -- common/autotest_common.sh@852 -- # return 0 00:20:44.063 01:41:56 -- target/tls.sh@201 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:44.320 [2024-07-23 01:41:57.176796] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:44.320 TLSTESTn1 00:20:44.320 01:41:57 -- target/tls.sh@205 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:20:44.579 01:41:57 -- target/tls.sh@205 -- # tgtconf='{ 00:20:44.579 "subsystems": [ 00:20:44.579 { 00:20:44.579 "subsystem": "iobuf", 00:20:44.579 "config": [ 00:20:44.579 { 00:20:44.579 "method": "iobuf_set_options", 00:20:44.579 "params": { 00:20:44.579 "small_pool_count": 8192, 00:20:44.579 "large_pool_count": 1024, 00:20:44.579 "small_bufsize": 8192, 00:20:44.579 "large_bufsize": 135168 00:20:44.579 } 00:20:44.579 } 00:20:44.579 ] 00:20:44.579 }, 00:20:44.579 { 00:20:44.579 "subsystem": "sock", 00:20:44.579 "config": [ 00:20:44.579 { 00:20:44.579 "method": "sock_impl_set_options", 00:20:44.579 "params": { 00:20:44.579 "impl_name": "posix", 
00:20:44.579 "recv_buf_size": 2097152, 00:20:44.579 "send_buf_size": 2097152, 00:20:44.579 "enable_recv_pipe": true, 00:20:44.579 "enable_quickack": false, 00:20:44.579 "enable_placement_id": 0, 00:20:44.579 "enable_zerocopy_send_server": true, 00:20:44.579 "enable_zerocopy_send_client": false, 00:20:44.579 "zerocopy_threshold": 0, 00:20:44.579 "tls_version": 0, 00:20:44.579 "enable_ktls": false 00:20:44.579 } 00:20:44.579 }, 00:20:44.579 { 00:20:44.579 "method": "sock_impl_set_options", 00:20:44.579 "params": { 00:20:44.579 "impl_name": "ssl", 00:20:44.579 "recv_buf_size": 4096, 00:20:44.579 "send_buf_size": 4096, 00:20:44.579 "enable_recv_pipe": true, 00:20:44.579 "enable_quickack": false, 00:20:44.579 "enable_placement_id": 0, 00:20:44.579 "enable_zerocopy_send_server": true, 00:20:44.579 "enable_zerocopy_send_client": false, 00:20:44.579 "zerocopy_threshold": 0, 00:20:44.579 "tls_version": 0, 00:20:44.579 "enable_ktls": false 00:20:44.579 } 00:20:44.579 } 00:20:44.579 ] 00:20:44.579 }, 00:20:44.579 { 00:20:44.579 "subsystem": "vmd", 00:20:44.579 "config": [] 00:20:44.579 }, 00:20:44.579 { 00:20:44.579 "subsystem": "accel", 00:20:44.579 "config": [ 00:20:44.579 { 00:20:44.579 "method": "accel_set_options", 00:20:44.579 "params": { 00:20:44.579 "small_cache_size": 128, 00:20:44.579 "large_cache_size": 16, 00:20:44.579 "task_count": 2048, 00:20:44.579 "sequence_count": 2048, 00:20:44.579 "buf_count": 2048 00:20:44.579 } 00:20:44.579 } 00:20:44.579 ] 00:20:44.579 }, 00:20:44.579 { 00:20:44.579 "subsystem": "bdev", 00:20:44.579 "config": [ 00:20:44.579 { 00:20:44.579 "method": "bdev_set_options", 00:20:44.579 "params": { 00:20:44.579 "bdev_io_pool_size": 65535, 00:20:44.579 "bdev_io_cache_size": 256, 00:20:44.579 "bdev_auto_examine": true, 00:20:44.579 "iobuf_small_cache_size": 128, 00:20:44.579 "iobuf_large_cache_size": 16 00:20:44.579 } 00:20:44.579 }, 00:20:44.579 { 00:20:44.579 "method": "bdev_raid_set_options", 00:20:44.579 "params": { 00:20:44.579 
"process_window_size_kb": 1024 00:20:44.579 } 00:20:44.579 }, 00:20:44.579 { 00:20:44.579 "method": "bdev_iscsi_set_options", 00:20:44.579 "params": { 00:20:44.579 "timeout_sec": 30 00:20:44.579 } 00:20:44.579 }, 00:20:44.579 { 00:20:44.579 "method": "bdev_nvme_set_options", 00:20:44.579 "params": { 00:20:44.579 "action_on_timeout": "none", 00:20:44.579 "timeout_us": 0, 00:20:44.579 "timeout_admin_us": 0, 00:20:44.579 "keep_alive_timeout_ms": 10000, 00:20:44.579 "transport_retry_count": 4, 00:20:44.579 "arbitration_burst": 0, 00:20:44.579 "low_priority_weight": 0, 00:20:44.579 "medium_priority_weight": 0, 00:20:44.579 "high_priority_weight": 0, 00:20:44.579 "nvme_adminq_poll_period_us": 10000, 00:20:44.579 "nvme_ioq_poll_period_us": 0, 00:20:44.579 "io_queue_requests": 0, 00:20:44.579 "delay_cmd_submit": true, 00:20:44.579 "bdev_retry_count": 3, 00:20:44.579 "transport_ack_timeout": 0, 00:20:44.579 "ctrlr_loss_timeout_sec": 0, 00:20:44.579 "reconnect_delay_sec": 0, 00:20:44.579 "fast_io_fail_timeout_sec": 0, 00:20:44.579 "generate_uuids": false, 00:20:44.579 "transport_tos": 0, 00:20:44.579 "io_path_stat": false, 00:20:44.579 "allow_accel_sequence": false 00:20:44.579 } 00:20:44.579 }, 00:20:44.579 { 00:20:44.579 "method": "bdev_nvme_set_hotplug", 00:20:44.579 "params": { 00:20:44.579 "period_us": 100000, 00:20:44.579 "enable": false 00:20:44.579 } 00:20:44.579 }, 00:20:44.579 { 00:20:44.579 "method": "bdev_malloc_create", 00:20:44.579 "params": { 00:20:44.579 "name": "malloc0", 00:20:44.579 "num_blocks": 8192, 00:20:44.579 "block_size": 4096, 00:20:44.579 "physical_block_size": 4096, 00:20:44.579 "uuid": "f5ac5f51-814d-4e55-b4cd-419580a989d3", 00:20:44.579 "optimal_io_boundary": 0 00:20:44.579 } 00:20:44.579 }, 00:20:44.579 { 00:20:44.580 "method": "bdev_wait_for_examine" 00:20:44.580 } 00:20:44.580 ] 00:20:44.580 }, 00:20:44.580 { 00:20:44.580 "subsystem": "nbd", 00:20:44.580 "config": [] 00:20:44.580 }, 00:20:44.580 { 00:20:44.580 "subsystem": "scheduler", 
00:20:44.580 "config": [ 00:20:44.580 { 00:20:44.580 "method": "framework_set_scheduler", 00:20:44.580 "params": { 00:20:44.580 "name": "static" 00:20:44.580 } 00:20:44.580 } 00:20:44.580 ] 00:20:44.580 }, 00:20:44.580 { 00:20:44.580 "subsystem": "nvmf", 00:20:44.580 "config": [ 00:20:44.580 { 00:20:44.580 "method": "nvmf_set_config", 00:20:44.580 "params": { 00:20:44.580 "discovery_filter": "match_any", 00:20:44.580 "admin_cmd_passthru": { 00:20:44.580 "identify_ctrlr": false 00:20:44.580 } 00:20:44.580 } 00:20:44.580 }, 00:20:44.580 { 00:20:44.580 "method": "nvmf_set_max_subsystems", 00:20:44.580 "params": { 00:20:44.580 "max_subsystems": 1024 00:20:44.580 } 00:20:44.580 }, 00:20:44.580 { 00:20:44.580 "method": "nvmf_set_crdt", 00:20:44.580 "params": { 00:20:44.580 "crdt1": 0, 00:20:44.580 "crdt2": 0, 00:20:44.580 "crdt3": 0 00:20:44.580 } 00:20:44.580 }, 00:20:44.580 { 00:20:44.580 "method": "nvmf_create_transport", 00:20:44.580 "params": { 00:20:44.580 "trtype": "TCP", 00:20:44.580 "max_queue_depth": 128, 00:20:44.580 "max_io_qpairs_per_ctrlr": 127, 00:20:44.580 "in_capsule_data_size": 4096, 00:20:44.580 "max_io_size": 131072, 00:20:44.580 "io_unit_size": 131072, 00:20:44.580 "max_aq_depth": 128, 00:20:44.580 "num_shared_buffers": 511, 00:20:44.580 "buf_cache_size": 4294967295, 00:20:44.580 "dif_insert_or_strip": false, 00:20:44.580 "zcopy": false, 00:20:44.580 "c2h_success": false, 00:20:44.580 "sock_priority": 0, 00:20:44.580 "abort_timeout_sec": 1 00:20:44.580 } 00:20:44.580 }, 00:20:44.580 { 00:20:44.580 "method": "nvmf_create_subsystem", 00:20:44.580 "params": { 00:20:44.580 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:44.580 "allow_any_host": false, 00:20:44.580 "serial_number": "SPDK00000000000001", 00:20:44.580 "model_number": "SPDK bdev Controller", 00:20:44.580 "max_namespaces": 10, 00:20:44.580 "min_cntlid": 1, 00:20:44.580 "max_cntlid": 65519, 00:20:44.580 "ana_reporting": false 00:20:44.580 } 00:20:44.580 }, 00:20:44.580 { 00:20:44.580 "method": 
"nvmf_subsystem_add_host", 00:20:44.580 "params": { 00:20:44.580 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:44.580 "host": "nqn.2016-06.io.spdk:host1", 00:20:44.580 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:20:44.580 } 00:20:44.580 }, 00:20:44.580 { 00:20:44.580 "method": "nvmf_subsystem_add_ns", 00:20:44.580 "params": { 00:20:44.580 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:44.580 "namespace": { 00:20:44.580 "nsid": 1, 00:20:44.580 "bdev_name": "malloc0", 00:20:44.580 "nguid": "F5AC5F51814D4E55B4CD419580A989D3", 00:20:44.580 "uuid": "f5ac5f51-814d-4e55-b4cd-419580a989d3" 00:20:44.580 } 00:20:44.580 } 00:20:44.580 }, 00:20:44.580 { 00:20:44.580 "method": "nvmf_subsystem_add_listener", 00:20:44.580 "params": { 00:20:44.580 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:44.580 "listen_address": { 00:20:44.580 "trtype": "TCP", 00:20:44.580 "adrfam": "IPv4", 00:20:44.580 "traddr": "10.0.0.2", 00:20:44.580 "trsvcid": "4420" 00:20:44.580 }, 00:20:44.580 "secure_channel": true 00:20:44.580 } 00:20:44.580 } 00:20:44.580 ] 00:20:44.580 } 00:20:44.580 ] 00:20:44.580 }' 00:20:44.580 01:41:57 -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:44.840 01:41:57 -- target/tls.sh@206 -- # bdevperfconf='{ 00:20:44.840 "subsystems": [ 00:20:44.840 { 00:20:44.840 "subsystem": "iobuf", 00:20:44.840 "config": [ 00:20:44.840 { 00:20:44.840 "method": "iobuf_set_options", 00:20:44.840 "params": { 00:20:44.840 "small_pool_count": 8192, 00:20:44.840 "large_pool_count": 1024, 00:20:44.840 "small_bufsize": 8192, 00:20:44.840 "large_bufsize": 135168 00:20:44.840 } 00:20:44.840 } 00:20:44.840 ] 00:20:44.840 }, 00:20:44.840 { 00:20:44.840 "subsystem": "sock", 00:20:44.840 "config": [ 00:20:44.840 { 00:20:44.840 "method": "sock_impl_set_options", 00:20:44.840 "params": { 00:20:44.840 "impl_name": "posix", 00:20:44.840 "recv_buf_size": 2097152, 00:20:44.840 
"send_buf_size": 2097152, 00:20:44.840 "enable_recv_pipe": true, 00:20:44.840 "enable_quickack": false, 00:20:44.840 "enable_placement_id": 0, 00:20:44.840 "enable_zerocopy_send_server": true, 00:20:44.840 "enable_zerocopy_send_client": false, 00:20:44.840 "zerocopy_threshold": 0, 00:20:44.840 "tls_version": 0, 00:20:44.840 "enable_ktls": false 00:20:44.840 } 00:20:44.840 }, 00:20:44.840 { 00:20:44.840 "method": "sock_impl_set_options", 00:20:44.840 "params": { 00:20:44.840 "impl_name": "ssl", 00:20:44.840 "recv_buf_size": 4096, 00:20:44.840 "send_buf_size": 4096, 00:20:44.840 "enable_recv_pipe": true, 00:20:44.840 "enable_quickack": false, 00:20:44.840 "enable_placement_id": 0, 00:20:44.840 "enable_zerocopy_send_server": true, 00:20:44.840 "enable_zerocopy_send_client": false, 00:20:44.840 "zerocopy_threshold": 0, 00:20:44.840 "tls_version": 0, 00:20:44.840 "enable_ktls": false 00:20:44.840 } 00:20:44.840 } 00:20:44.840 ] 00:20:44.840 }, 00:20:44.840 { 00:20:44.840 "subsystem": "vmd", 00:20:44.840 "config": [] 00:20:44.840 }, 00:20:44.840 { 00:20:44.840 "subsystem": "accel", 00:20:44.840 "config": [ 00:20:44.840 { 00:20:44.840 "method": "accel_set_options", 00:20:44.840 "params": { 00:20:44.840 "small_cache_size": 128, 00:20:44.840 "large_cache_size": 16, 00:20:44.840 "task_count": 2048, 00:20:44.840 "sequence_count": 2048, 00:20:44.840 "buf_count": 2048 00:20:44.840 } 00:20:44.840 } 00:20:44.840 ] 00:20:44.840 }, 00:20:44.840 { 00:20:44.840 "subsystem": "bdev", 00:20:44.840 "config": [ 00:20:44.840 { 00:20:44.840 "method": "bdev_set_options", 00:20:44.840 "params": { 00:20:44.840 "bdev_io_pool_size": 65535, 00:20:44.840 "bdev_io_cache_size": 256, 00:20:44.840 "bdev_auto_examine": true, 00:20:44.840 "iobuf_small_cache_size": 128, 00:20:44.840 "iobuf_large_cache_size": 16 00:20:44.840 } 00:20:44.840 }, 00:20:44.840 { 00:20:44.840 "method": "bdev_raid_set_options", 00:20:44.840 "params": { 00:20:44.840 "process_window_size_kb": 1024 00:20:44.840 } 00:20:44.840 }, 
00:20:44.840 { 00:20:44.840 "method": "bdev_iscsi_set_options", 00:20:44.840 "params": { 00:20:44.840 "timeout_sec": 30 00:20:44.840 } 00:20:44.840 }, 00:20:44.840 { 00:20:44.840 "method": "bdev_nvme_set_options", 00:20:44.840 "params": { 00:20:44.840 "action_on_timeout": "none", 00:20:44.840 "timeout_us": 0, 00:20:44.840 "timeout_admin_us": 0, 00:20:44.840 "keep_alive_timeout_ms": 10000, 00:20:44.840 "transport_retry_count": 4, 00:20:44.840 "arbitration_burst": 0, 00:20:44.840 "low_priority_weight": 0, 00:20:44.840 "medium_priority_weight": 0, 00:20:44.840 "high_priority_weight": 0, 00:20:44.840 "nvme_adminq_poll_period_us": 10000, 00:20:44.840 "nvme_ioq_poll_period_us": 0, 00:20:44.840 "io_queue_requests": 512, 00:20:44.840 "delay_cmd_submit": true, 00:20:44.840 "bdev_retry_count": 3, 00:20:44.840 "transport_ack_timeout": 0, 00:20:44.840 "ctrlr_loss_timeout_sec": 0, 00:20:44.840 "reconnect_delay_sec": 0, 00:20:44.840 "fast_io_fail_timeout_sec": 0, 00:20:44.840 "generate_uuids": false, 00:20:44.840 "transport_tos": 0, 00:20:44.840 "io_path_stat": false, 00:20:44.840 "allow_accel_sequence": false 00:20:44.840 } 00:20:44.840 }, 00:20:44.840 { 00:20:44.840 "method": "bdev_nvme_attach_controller", 00:20:44.840 "params": { 00:20:44.840 "name": "TLSTEST", 00:20:44.840 "trtype": "TCP", 00:20:44.840 "adrfam": "IPv4", 00:20:44.840 "traddr": "10.0.0.2", 00:20:44.840 "trsvcid": "4420", 00:20:44.840 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:44.840 "prchk_reftag": false, 00:20:44.840 "prchk_guard": false, 00:20:44.840 "ctrlr_loss_timeout_sec": 0, 00:20:44.840 "reconnect_delay_sec": 0, 00:20:44.840 "fast_io_fail_timeout_sec": 0, 00:20:44.840 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:20:44.840 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:44.840 "hdgst": false, 00:20:44.840 "ddgst": false 00:20:44.840 } 00:20:44.840 }, 00:20:44.840 { 00:20:44.840 "method": "bdev_nvme_set_hotplug", 00:20:44.840 "params": { 00:20:44.840 
"period_us": 100000, 00:20:44.840 "enable": false 00:20:44.840 } 00:20:44.841 }, 00:20:44.841 { 00:20:44.841 "method": "bdev_wait_for_examine" 00:20:44.841 } 00:20:44.841 ] 00:20:44.841 }, 00:20:44.841 { 00:20:44.841 "subsystem": "nbd", 00:20:44.841 "config": [] 00:20:44.841 } 00:20:44.841 ] 00:20:44.841 }' 00:20:44.841 01:41:57 -- target/tls.sh@208 -- # killprocess 3808444 00:20:44.841 01:41:57 -- common/autotest_common.sh@926 -- # '[' -z 3808444 ']' 00:20:44.841 01:41:57 -- common/autotest_common.sh@930 -- # kill -0 3808444 00:20:44.841 01:41:57 -- common/autotest_common.sh@931 -- # uname 00:20:44.841 01:41:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:44.841 01:41:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3808444 00:20:44.841 01:41:57 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:44.841 01:41:57 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:44.841 01:41:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3808444' 00:20:44.841 killing process with pid 3808444 00:20:44.841 01:41:57 -- common/autotest_common.sh@945 -- # kill 3808444 00:20:44.841 Received shutdown signal, test time was about 10.000000 seconds 00:20:44.841 00:20:44.841 Latency(us) 00:20:44.841 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:44.841 =================================================================================================================== 00:20:44.841 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:44.841 01:41:57 -- common/autotest_common.sh@950 -- # wait 3808444 00:20:45.101 01:41:58 -- target/tls.sh@209 -- # killprocess 3808143 00:20:45.101 01:41:58 -- common/autotest_common.sh@926 -- # '[' -z 3808143 ']' 00:20:45.101 01:41:58 -- common/autotest_common.sh@930 -- # kill -0 3808143 00:20:45.101 01:41:58 -- common/autotest_common.sh@931 -- # uname 00:20:45.101 01:41:58 -- common/autotest_common.sh@931 -- # '[' Linux = 
Linux ']' 00:20:45.101 01:41:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3808143 00:20:45.101 01:41:58 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:20:45.101 01:41:58 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:20:45.101 01:41:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3808143' 00:20:45.101 killing process with pid 3808143 00:20:45.101 01:41:58 -- common/autotest_common.sh@945 -- # kill 3808143 00:20:45.101 01:41:58 -- common/autotest_common.sh@950 -- # wait 3808143 00:20:45.360 01:41:58 -- target/tls.sh@212 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:20:45.360 01:41:58 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:45.360 01:41:58 -- target/tls.sh@212 -- # echo '{ 00:20:45.360 "subsystems": [ 00:20:45.360 { 00:20:45.360 "subsystem": "iobuf", 00:20:45.360 "config": [ 00:20:45.360 { 00:20:45.360 "method": "iobuf_set_options", 00:20:45.360 "params": { 00:20:45.360 "small_pool_count": 8192, 00:20:45.360 "large_pool_count": 1024, 00:20:45.360 "small_bufsize": 8192, 00:20:45.360 "large_bufsize": 135168 00:20:45.360 } 00:20:45.360 } 00:20:45.360 ] 00:20:45.360 }, 00:20:45.360 { 00:20:45.360 "subsystem": "sock", 00:20:45.360 "config": [ 00:20:45.360 { 00:20:45.360 "method": "sock_impl_set_options", 00:20:45.360 "params": { 00:20:45.360 "impl_name": "posix", 00:20:45.360 "recv_buf_size": 2097152, 00:20:45.360 "send_buf_size": 2097152, 00:20:45.360 "enable_recv_pipe": true, 00:20:45.360 "enable_quickack": false, 00:20:45.360 "enable_placement_id": 0, 00:20:45.360 "enable_zerocopy_send_server": true, 00:20:45.360 "enable_zerocopy_send_client": false, 00:20:45.360 "zerocopy_threshold": 0, 00:20:45.360 "tls_version": 0, 00:20:45.360 "enable_ktls": false 00:20:45.360 } 00:20:45.360 }, 00:20:45.360 { 00:20:45.360 "method": "sock_impl_set_options", 00:20:45.360 "params": { 00:20:45.360 "impl_name": "ssl", 00:20:45.360 "recv_buf_size": 4096, 00:20:45.360 "send_buf_size": 4096, 
00:20:45.360 "enable_recv_pipe": true, 00:20:45.360 "enable_quickack": false, 00:20:45.360 "enable_placement_id": 0, 00:20:45.360 "enable_zerocopy_send_server": true, 00:20:45.360 "enable_zerocopy_send_client": false, 00:20:45.360 "zerocopy_threshold": 0, 00:20:45.360 "tls_version": 0, 00:20:45.360 "enable_ktls": false 00:20:45.360 } 00:20:45.360 } 00:20:45.360 ] 00:20:45.360 }, 00:20:45.360 { 00:20:45.360 "subsystem": "vmd", 00:20:45.360 "config": [] 00:20:45.360 }, 00:20:45.360 { 00:20:45.360 "subsystem": "accel", 00:20:45.360 "config": [ 00:20:45.360 { 00:20:45.360 "method": "accel_set_options", 00:20:45.360 "params": { 00:20:45.360 "small_cache_size": 128, 00:20:45.360 "large_cache_size": 16, 00:20:45.360 "task_count": 2048, 00:20:45.360 "sequence_count": 2048, 00:20:45.360 "buf_count": 2048 00:20:45.360 } 00:20:45.361 } 00:20:45.361 ] 00:20:45.361 }, 00:20:45.361 { 00:20:45.361 "subsystem": "bdev", 00:20:45.361 "config": [ 00:20:45.361 { 00:20:45.361 "method": "bdev_set_options", 00:20:45.361 "params": { 00:20:45.361 "bdev_io_pool_size": 65535, 00:20:45.361 "bdev_io_cache_size": 256, 00:20:45.361 "bdev_auto_examine": true, 00:20:45.361 "iobuf_small_cache_size": 128, 00:20:45.361 "iobuf_large_cache_size": 16 00:20:45.361 } 00:20:45.361 }, 00:20:45.361 { 00:20:45.361 "method": "bdev_raid_set_options", 00:20:45.361 "params": { 00:20:45.361 "process_window_size_kb": 1024 00:20:45.361 } 00:20:45.361 }, 00:20:45.361 { 00:20:45.361 "method": "bdev_iscsi_set_options", 00:20:45.361 "params": { 00:20:45.361 "timeout_sec": 30 00:20:45.361 } 00:20:45.361 }, 00:20:45.361 { 00:20:45.361 "method": "bdev_nvme_set_options", 00:20:45.361 "params": { 00:20:45.361 "action_on_timeout": "none", 00:20:45.361 "timeout_us": 0, 00:20:45.361 "timeout_admin_us": 0, 00:20:45.361 "keep_alive_timeout_ms": 10000, 00:20:45.361 "transport_retry_count": 4, 00:20:45.361 "arbitration_burst": 0, 00:20:45.361 "low_priority_weight": 0, 00:20:45.361 "medium_priority_weight": 0, 00:20:45.361 
"high_priority_weight": 0, 00:20:45.361 "nvme_adminq_poll_period_us": 10000, 00:20:45.361 "nvme_ioq_poll_period_us": 0, 00:20:45.361 "io_queue_requests": 0, 00:20:45.361 "delay_cmd_submit": true, 00:20:45.361 "bdev_retry_count": 3, 00:20:45.361 "transport_ack_timeout": 0, 00:20:45.361 "ctrlr_loss_timeout_sec": 0, 00:20:45.361 "reconnect_delay_sec": 0, 00:20:45.361 "fast_io_fail_timeout_sec": 0, 00:20:45.361 "generate_uuids": false, 00:20:45.361 "transport_tos": 0, 00:20:45.361 "io_path_stat": false, 00:20:45.361 "allow_accel_sequence": false 00:20:45.361 } 00:20:45.361 }, 00:20:45.361 { 00:20:45.361 "method": "bdev_nvme_set_hotplug", 00:20:45.361 "params": { 00:20:45.361 "period_us": 100000, 00:20:45.361 "enable": false 00:20:45.361 } 00:20:45.361 }, 00:20:45.361 { 00:20:45.361 "method": "bdev_malloc_create", 00:20:45.361 "params": { 00:20:45.361 "name": "malloc0", 00:20:45.361 "num_blocks": 8192, 00:20:45.361 "block_size": 4096, 00:20:45.361 "physical_block_size": 4096, 00:20:45.361 "uuid": "f5ac5f51-814d-4e55-b4cd-419580a989d3", 00:20:45.361 "optimal_io_boundary": 0 00:20:45.361 } 00:20:45.361 }, 00:20:45.361 { 00:20:45.361 "method": "bdev_wait_for_examine" 00:20:45.361 } 00:20:45.361 ] 00:20:45.361 }, 00:20:45.361 { 00:20:45.361 "subsystem": "nbd", 00:20:45.361 "config": [] 00:20:45.361 }, 00:20:45.361 { 00:20:45.361 "subsystem": "scheduler", 00:20:45.361 "config": [ 00:20:45.361 { 00:20:45.361 "method": "framework_set_scheduler", 00:20:45.361 "params": { 00:20:45.361 "name": "static" 00:20:45.361 } 00:20:45.361 } 00:20:45.361 ] 00:20:45.361 }, 00:20:45.361 { 00:20:45.361 "subsystem": "nvmf", 00:20:45.361 "config": [ 00:20:45.361 { 00:20:45.361 "method": "nvmf_set_config", 00:20:45.361 "params": { 00:20:45.361 "discovery_filter": "match_any", 00:20:45.361 "admin_cmd_passthru": { 00:20:45.361 "identify_ctrlr": false 00:20:45.361 } 00:20:45.361 } 00:20:45.361 }, 00:20:45.361 { 00:20:45.361 "method": "nvmf_set_max_subsystems", 00:20:45.361 "params": { 00:20:45.361 
"max_subsystems": 1024 00:20:45.361 } 00:20:45.361 }, 00:20:45.361 { 00:20:45.361 "method": "nvmf_set_crdt", 00:20:45.361 "params": { 00:20:45.361 "crdt1": 0, 00:20:45.361 "crdt2": 0, 00:20:45.361 "crdt3": 0 00:20:45.361 } 00:20:45.361 }, 00:20:45.361 { 00:20:45.361 "method": "nvmf_create_transport", 00:20:45.361 "params": { 00:20:45.361 "trtype": "TCP", 00:20:45.361 "max_queue_depth": 128, 00:20:45.361 "max_io_qpairs_per_ctrlr": 127, 00:20:45.361 "in_capsule_data_size": 4096, 00:20:45.361 "max_io_size": 131072, 00:20:45.361 "io_unit_size": 131072, 00:20:45.361 "max_aq_depth": 128, 00:20:45.361 "num_shared_buffers": 511, 00:20:45.361 "buf_cache_size": 4294967295, 00:20:45.361 "dif_insert_or_strip": false, 00:20:45.361 "zcopy": false, 00:20:45.361 "c2h_success": false, 00:20:45.361 "sock_priority": 0, 00:20:45.361 "abort_timeout_sec": 1 00:20:45.361 } 00:20:45.361 }, 00:20:45.361 { 00:20:45.361 "method": "nvmf_create_subsystem", 00:20:45.361 "params": { 00:20:45.361 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:45.361 "allow_any_host": false, 00:20:45.361 "serial_number": "SPDK00000000000001", 00:20:45.361 "model_number": "SPDK bdev Controller", 00:20:45.361 "max_namespaces": 10, 00:20:45.361 "min_cntlid": 1, 00:20:45.361 "max_cntlid": 65519, 00:20:45.361 "ana_reporting": false 00:20:45.361 } 00:20:45.361 }, 00:20:45.361 { 00:20:45.361 "method": "nvmf_subsystem_add_host", 00:20:45.361 "params": { 00:20:45.361 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:45.361 "host": "nqn.2016-06.io.spdk:host1", 00:20:45.361 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:20:45.361 } 00:20:45.361 }, 00:20:45.361 { 00:20:45.361 "method": "nvmf_subsystem_add_ns", 00:20:45.361 "params": { 00:20:45.361 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:45.361 "namespace": { 00:20:45.361 "nsid": 1, 00:20:45.361 "bdev_name": "malloc0", 00:20:45.361 "nguid": "F5AC5F51814D4E55B4CD419580A989D3", 00:20:45.361 "uuid": "f5ac5f51-814d-4e55-b4cd-419580a989d3" 
00:20:45.361 } 00:20:45.361 } 00:20:45.361 }, 00:20:45.361 { 00:20:45.361 "method": "nvmf_subsystem_add_listener", 00:20:45.361 "params": { 00:20:45.361 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:45.361 "listen_address": { 00:20:45.361 "trtype": "TCP", 00:20:45.361 "adrfam": "IPv4", 00:20:45.361 "traddr": "10.0.0.2", 00:20:45.361 "trsvcid": "4420" 00:20:45.361 }, 00:20:45.361 "secure_channel": true 00:20:45.361 } 00:20:45.361 } 00:20:45.361 ] 00:20:45.361 } 00:20:45.361 ] 00:20:45.361 }' 00:20:45.361 01:41:58 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:45.361 01:41:58 -- common/autotest_common.sh@10 -- # set +x 00:20:45.361 01:41:58 -- nvmf/common.sh@469 -- # nvmfpid=3808734 00:20:45.361 01:41:58 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:20:45.361 01:41:58 -- nvmf/common.sh@470 -- # waitforlisten 3808734 00:20:45.361 01:41:58 -- common/autotest_common.sh@819 -- # '[' -z 3808734 ']' 00:20:45.361 01:41:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:45.361 01:41:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:45.361 01:41:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:45.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:45.361 01:41:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:45.361 01:41:58 -- common/autotest_common.sh@10 -- # set +x 00:20:45.361 [2024-07-23 01:41:58.435526] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:20:45.361 [2024-07-23 01:41:58.435629] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:45.621 EAL: No free 2048 kB hugepages reported on node 1 00:20:45.621 [2024-07-23 01:41:58.498416] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:45.621 [2024-07-23 01:41:58.580705] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:45.621 [2024-07-23 01:41:58.580858] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:45.621 [2024-07-23 01:41:58.580878] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:45.621 [2024-07-23 01:41:58.580893] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:45.621 [2024-07-23 01:41:58.580935] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:45.882 [2024-07-23 01:41:58.807545] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:45.882 [2024-07-23 01:41:58.839559] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:45.882 [2024-07-23 01:41:58.839803] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:46.448 01:41:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:46.449 01:41:59 -- common/autotest_common.sh@852 -- # return 0 00:20:46.449 01:41:59 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:46.449 01:41:59 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:46.449 01:41:59 -- common/autotest_common.sh@10 -- # set +x 00:20:46.449 01:41:59 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:46.449 01:41:59 -- target/tls.sh@216 -- # bdevperf_pid=3808888 
00:20:46.449 01:41:59 -- target/tls.sh@217 -- # waitforlisten 3808888 /var/tmp/bdevperf.sock
00:20:46.449 01:41:59 -- common/autotest_common.sh@819 -- # '[' -z 3808888 ']'
00:20:46.449 01:41:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:20:46.449 01:41:59 -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63
00:20:46.449 01:41:59 -- common/autotest_common.sh@824 -- # local max_retries=100
00:20:46.449 01:41:59 -- target/tls.sh@213 -- # echo '{
00:20:46.449 "subsystems": [
00:20:46.449 {
00:20:46.449 "subsystem": "iobuf",
00:20:46.449 "config": [
00:20:46.449 {
00:20:46.449 "method": "iobuf_set_options",
00:20:46.449 "params": {
00:20:46.449 "small_pool_count": 8192,
00:20:46.449 "large_pool_count": 1024,
00:20:46.449 "small_bufsize": 8192,
00:20:46.449 "large_bufsize": 135168
00:20:46.449 }
00:20:46.449 }
00:20:46.449 ]
00:20:46.449 },
00:20:46.449 {
00:20:46.449 "subsystem": "sock",
00:20:46.449 "config": [
00:20:46.449 {
00:20:46.449 "method": "sock_impl_set_options",
00:20:46.449 "params": {
00:20:46.449 "impl_name": "posix",
00:20:46.449 "recv_buf_size": 2097152,
00:20:46.449 "send_buf_size": 2097152,
00:20:46.449 "enable_recv_pipe": true,
00:20:46.449 "enable_quickack": false,
00:20:46.449 "enable_placement_id": 0,
00:20:46.449 "enable_zerocopy_send_server": true,
00:20:46.449 "enable_zerocopy_send_client": false,
00:20:46.449 "zerocopy_threshold": 0,
00:20:46.449 "tls_version": 0,
00:20:46.449 "enable_ktls": false
00:20:46.449 }
00:20:46.449 },
00:20:46.449 {
00:20:46.449 "method": "sock_impl_set_options",
00:20:46.449 "params": {
00:20:46.449 "impl_name": "ssl",
00:20:46.449 "recv_buf_size": 4096,
00:20:46.449 "send_buf_size": 4096,
00:20:46.449 "enable_recv_pipe": true,
00:20:46.449 "enable_quickack": false,
00:20:46.449 "enable_placement_id": 0,
00:20:46.449 "enable_zerocopy_send_server": true,
00:20:46.449 "enable_zerocopy_send_client": false,
00:20:46.449 "zerocopy_threshold": 0,
00:20:46.449 "tls_version": 0,
00:20:46.449 "enable_ktls": false
00:20:46.449 }
00:20:46.449 }
00:20:46.449 ]
00:20:46.449 },
00:20:46.449 {
00:20:46.449 "subsystem": "vmd",
00:20:46.449 "config": []
00:20:46.449 },
00:20:46.449 {
00:20:46.449 "subsystem": "accel",
00:20:46.449 "config": [
00:20:46.449 {
00:20:46.449 "method": "accel_set_options",
00:20:46.449 "params": {
00:20:46.449 "small_cache_size": 128,
00:20:46.449 "large_cache_size": 16,
00:20:46.449 "task_count": 2048,
00:20:46.449 "sequence_count": 2048,
00:20:46.449 "buf_count": 2048
00:20:46.449 }
00:20:46.449 }
00:20:46.449 ]
00:20:46.449 },
00:20:46.449 {
00:20:46.449 "subsystem": "bdev",
00:20:46.449 "config": [
00:20:46.449 {
00:20:46.449 "method": "bdev_set_options",
00:20:46.449 "params": {
00:20:46.449 "bdev_io_pool_size": 65535,
00:20:46.449 "bdev_io_cache_size": 256,
00:20:46.449 "bdev_auto_examine": true,
00:20:46.449 "iobuf_small_cache_size": 128,
00:20:46.449 "iobuf_large_cache_size": 16
00:20:46.449 }
00:20:46.449 },
00:20:46.449 {
00:20:46.449 "method": "bdev_raid_set_options",
00:20:46.449 "params": {
00:20:46.449 "process_window_size_kb": 1024
00:20:46.449 }
00:20:46.449 },
00:20:46.449 {
00:20:46.449 "method": "bdev_iscsi_set_options",
00:20:46.449 "params": {
00:20:46.449 "timeout_sec": 30
00:20:46.449 }
00:20:46.449 },
00:20:46.449 {
00:20:46.449 "method": "bdev_nvme_set_options",
00:20:46.449 "params": {
00:20:46.449 "action_on_timeout": "none",
00:20:46.449 "timeout_us": 0,
00:20:46.449 "timeout_admin_us": 0,
00:20:46.449 "keep_alive_timeout_ms": 10000,
00:20:46.449 "transport_retry_count": 4,
00:20:46.449 "arbitration_burst": 0,
00:20:46.449 "low_priority_weight": 0,
00:20:46.449 "medium_priority_weight": 0,
00:20:46.449 "high_priority_weight": 0,
00:20:46.449 "nvme_adminq_poll_period_us": 10000,
00:20:46.449 "nvme_ioq_poll_period_us": 0,
00:20:46.449 "io_queue_requests": 512,
00:20:46.449 "delay_cmd_submit": true,
00:20:46.449 "bdev_retry_count": 3,
00:20:46.449 "transport_ack_timeout": 0,
00:20:46.449 "ctrlr_loss_timeout_sec": 0,
00:20:46.449 "reconnect_delay_sec": 0,
00:20:46.449 "fast_io_fail_timeout_sec": 0,
00:20:46.449 "generate_uuids": false,
00:20:46.449 "transport_tos": 0,
00:20:46.449 "io_path_stat": false,
00:20:46.449 "allow_accel_sequence": false
00:20:46.449 }
00:20:46.449 },
00:20:46.449 {
00:20:46.449 "method": "bdev_nvme_attach_controller",
00:20:46.449 "params": {
00:20:46.449 "name": "TLSTEST",
00:20:46.449 "trtype": "TCP",
00:20:46.449 "adrfam": "IPv4",
00:20:46.449 "traddr": "10.0.0.2",
00:20:46.449 "trsvcid": "4420",
00:20:46.449 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:20:46.449 "prchk_reftag": false,
00:20:46.449 "prchk_guard": false,
00:20:46.449 "ctrlr_loss_timeout_sec": 0,
00:20:46.449 "reconnect_delay_sec": 0,
00:20:46.449 "fast_io_fail_timeout_sec": 0,
00:20:46.449 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt",
00:20:46.449 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:20:46.449 "hdgst": false,
00:20:46.449 "ddgst": false
00:20:46.449 }
00:20:46.449 },
00:20:46.449 {
00:20:46.449 "method": "bdev_nvme_set_hotplug",
00:20:46.449 "params": {
00:20:46.449 "period_us": 100000,
00:20:46.449 "enable": false
00:20:46.449 }
00:20:46.449 },
00:20:46.449 {
00:20:46.449 "method": "bdev_wait_for_examine"
00:20:46.449 }
00:20:46.449 ]
00:20:46.449 },
00:20:46.449 {
00:20:46.449 "subsystem": "nbd",
00:20:46.449 "config": []
00:20:46.449 }
00:20:46.449 ]
00:20:46.449 }'
00:20:46.449 01:41:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:20:46.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:20:46.449 01:41:59 -- common/autotest_common.sh@828 -- # xtrace_disable
00:20:46.449 01:41:59 -- common/autotest_common.sh@10 -- # set +x
00:20:46.449 [2024-07-23 01:41:59.474472] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
00:20:46.449 [2024-07-23 01:41:59.474544] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3808888 ]
00:20:46.449 EAL: No free 2048 kB hugepages reported on node 1
00:20:46.449 [2024-07-23 01:41:59.531455] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:46.707 [2024-07-23 01:41:59.615716] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:20:46.707 [2024-07-23 01:41:59.773121] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:20:47.642 01:42:00 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:20:47.642 01:42:00 -- common/autotest_common.sh@852 -- # return 0
00:20:47.642 01:42:00 -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests
00:20:47.642 Running I/O for 10 seconds...
00:20:57.619
00:20:57.619 Latency(us)
00:20:57.619 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:57.619 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:20:57.619 Verification LBA range: start 0x0 length 0x2000
00:20:57.619 TLSTESTn1 : 10.03 2306.59 9.01 0.00 0.00 55414.13 5024.43 56700.78
00:20:57.619 ===================================================================================================================
00:20:57.619 Total : 2306.59 9.01 0.00 0.00 55414.13 5024.43 56700.78
00:20:57.619 0
00:20:57.619 01:42:10 -- target/tls.sh@222 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:20:57.619 01:42:10 -- target/tls.sh@223 -- # killprocess 3808888
00:20:57.619 01:42:10 -- common/autotest_common.sh@926 -- # '[' -z 3808888 ']'
00:20:57.619 01:42:10 -- common/autotest_common.sh@930 -- # kill -0 3808888
00:20:57.619 01:42:10 -- common/autotest_common.sh@931 -- # uname
00:20:57.619 01:42:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:20:57.619 01:42:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3808888
00:20:57.619 01:42:10 -- common/autotest_common.sh@932 -- # process_name=reactor_2
00:20:57.619 01:42:10 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']'
00:20:57.619 01:42:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3808888'
00:20:57.619 killing process with pid 3808888
00:20:57.619 01:42:10 -- common/autotest_common.sh@945 -- # kill 3808888
00:20:57.619 Received shutdown signal, test time was about 10.000000 seconds
00:20:57.619
00:20:57.619 Latency(us)
00:20:57.619 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:57.619 ===================================================================================================================
00:20:57.619 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:20:57.619 01:42:10 -- common/autotest_common.sh@950 -- # wait 3808888
00:20:57.879 01:42:10 -- target/tls.sh@224 -- # killprocess 3808734
00:20:57.879 01:42:10 -- common/autotest_common.sh@926 -- # '[' -z 3808734 ']'
00:20:57.879 01:42:10 -- common/autotest_common.sh@930 -- # kill -0 3808734
00:20:57.879 01:42:10 -- common/autotest_common.sh@931 -- # uname
00:20:57.879 01:42:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:20:57.879 01:42:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3808734
00:20:57.879 01:42:10 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:20:57.879 01:42:10 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:20:57.879 01:42:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3808734'
00:20:57.879 killing process with pid 3808734
00:20:57.879 01:42:10 -- common/autotest_common.sh@945 -- # kill 3808734
00:20:57.879 01:42:10 -- common/autotest_common.sh@950 -- # wait 3808734
00:20:58.137 01:42:11 -- target/tls.sh@226 -- # trap - SIGINT SIGTERM EXIT
00:20:58.137 01:42:11 -- target/tls.sh@227 -- # cleanup
00:20:58.137 01:42:11 -- target/tls.sh@15 -- # process_shm --id 0
00:20:58.137 01:42:11 -- common/autotest_common.sh@796 -- # type=--id
00:20:58.137 01:42:11 -- common/autotest_common.sh@797 -- # id=0
00:20:58.137 01:42:11 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']'
00:20:58.137 01:42:11 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n'
00:20:58.137 01:42:11 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0
00:20:58.137 01:42:11 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]]
00:20:58.137 01:42:11 -- common/autotest_common.sh@808 -- # for n in $shm_files
00:20:58.137 01:42:11 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
00:20:58.137 nvmf_trace.0
00:20:58.137 01:42:11 -- common/autotest_common.sh@811 -- # return 0
00:20:58.137 01:42:11 -- target/tls.sh@16 -- # killprocess 3808888
00:20:58.137 01:42:11 -- common/autotest_common.sh@926 -- # '[' -z 3808888 ']'
00:20:58.137 01:42:11 -- common/autotest_common.sh@930 -- # kill -0 3808888
00:20:58.137 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (3808888) - No such process
00:20:58.137 01:42:11 -- common/autotest_common.sh@953 -- # echo 'Process with pid 3808888 is not found'
00:20:58.137 Process with pid 3808888 is not found
00:20:58.137 01:42:11 -- target/tls.sh@17 -- # nvmftestfini
00:20:58.137 01:42:11 -- nvmf/common.sh@476 -- # nvmfcleanup
00:20:58.137 01:42:11 -- nvmf/common.sh@116 -- # sync
00:20:58.137 01:42:11 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:20:58.137 01:42:11 -- nvmf/common.sh@119 -- # set +e
00:20:58.137 01:42:11 -- nvmf/common.sh@120 -- # for i in {1..20}
00:20:58.137 01:42:11 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:20:58.137 rmmod nvme_tcp
00:20:58.137 rmmod nvme_fabrics
00:20:58.137 rmmod nvme_keyring
00:20:58.137 01:42:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:20:58.137 01:42:11 -- nvmf/common.sh@123 -- # set -e
00:20:58.137 01:42:11 -- nvmf/common.sh@124 -- # return 0
00:20:58.137 01:42:11 -- nvmf/common.sh@477 -- # '[' -n 3808734 ']'
00:20:58.137 01:42:11 -- nvmf/common.sh@478 -- # killprocess 3808734
00:20:58.137 01:42:11 -- common/autotest_common.sh@926 -- # '[' -z 3808734 ']'
00:20:58.137 01:42:11 -- common/autotest_common.sh@930 -- # kill -0 3808734
00:20:58.137 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (3808734) - No such process
00:20:58.137 01:42:11 -- common/autotest_common.sh@953 -- # echo 'Process with pid 3808734 is not found'
00:20:58.137 Process with pid 3808734 is not found
00:20:58.137 01:42:11 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:20:58.137 01:42:11 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:20:58.137 01:42:11 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:20:58.137 01:42:11 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:20:58.137 01:42:11 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:20:58.137 01:42:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:20:58.137 01:42:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:20:58.137 01:42:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:21:00.672 01:42:13 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1
00:21:00.672 01:42:13 -- target/tls.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt
00:21:00.672
00:21:00.672 real 1m13.062s
00:21:00.672 user 1m53.967s
00:21:00.672 sys 0m26.783s
00:21:00.672 01:42:13 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:21:00.672 01:42:13 -- common/autotest_common.sh@10 -- # set +x
00:21:00.672 ************************************
00:21:00.672 END TEST nvmf_tls
00:21:00.672 ************************************
00:21:00.672 01:42:13 -- nvmf/nvmf.sh@60 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp
00:21:00.672 01:42:13 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']'
00:21:00.672 01:42:13 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:21:00.672 01:42:13 -- common/autotest_common.sh@10 -- # set +x
00:21:00.672 ************************************
00:21:00.672 START TEST nvmf_fips
00:21:00.672 ************************************
00:21:00.672 01:42:13 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp
00:21:00.672 * Looking for test storage...
00:21:00.672 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips
00:21:00.672 01:42:13 -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:21:00.672 01:42:13 -- nvmf/common.sh@7 -- # uname -s
00:21:00.672 01:42:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:21:00.672 01:42:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:21:00.672 01:42:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:21:00.672 01:42:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:21:00.672 01:42:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:21:00.672 01:42:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:21:00.672 01:42:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:21:00.672 01:42:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:21:00.672 01:42:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:21:00.672 01:42:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:21:00.672 01:42:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:21:00.672 01:42:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:21:00.672 01:42:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:21:00.672 01:42:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:21:00.672 01:42:13 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:21:00.672 01:42:13 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:21:00.672 01:42:13 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:21:00.672 01:42:13 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:21:00.672 01:42:13 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:21:00.672 01:42:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:00.672 01:42:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:00.672 01:42:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:00.672 01:42:13 -- paths/export.sh@5 -- # export PATH
00:21:00.672 01:42:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:00.672 01:42:13 -- nvmf/common.sh@46 -- # : 0
00:21:00.672 01:42:13 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:21:00.672 01:42:13 -- nvmf/common.sh@48 -- # build_nvmf_app_args
00:21:00.672 01:42:13 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:21:00.672 01:42:13 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:21:00.672 01:42:13 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:21:00.672 01:42:13 -- nvmf/common.sh@32 -- # '[' -n '' ']'
00:21:00.672 01:42:13 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:21:00.672 01:42:13 -- nvmf/common.sh@50 -- # have_pci_nics=0
00:21:00.672 01:42:13 -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:21:00.672 01:42:13 -- fips/fips.sh@89 -- # check_openssl_version
00:21:00.672 01:42:13 -- fips/fips.sh@83 -- # local target=3.0.0
00:21:00.672 01:42:13 -- fips/fips.sh@85 -- # openssl version
00:21:00.672 01:42:13 -- fips/fips.sh@85 -- # awk '{print $2}'
00:21:00.672 01:42:13 -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0
00:21:00.672 01:42:13 -- scripts/common.sh@375 -- # cmp_versions 3.0.9 '>=' 3.0.0
00:21:00.672 01:42:13 -- scripts/common.sh@332 -- # local ver1 ver1_l
00:21:00.672 01:42:13 -- scripts/common.sh@333 -- # local ver2 ver2_l
00:21:00.672 01:42:13 -- scripts/common.sh@335 -- # IFS=.-:
00:21:00.672 01:42:13 -- scripts/common.sh@335 -- # read -ra ver1
00:21:00.672 01:42:13 -- scripts/common.sh@336 -- # IFS=.-:
00:21:00.672 01:42:13 -- scripts/common.sh@336 -- # read -ra ver2
00:21:00.672 01:42:13 -- scripts/common.sh@337 -- # local 'op=>='
00:21:00.672 01:42:13 -- scripts/common.sh@339 -- # ver1_l=3
00:21:00.672 01:42:13 -- scripts/common.sh@340 -- # ver2_l=3
00:21:00.672 01:42:13 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:21:00.672 01:42:13 -- scripts/common.sh@343 -- # case "$op" in
00:21:00.672 01:42:13 -- scripts/common.sh@347 -- # : 1
00:21:00.672 01:42:13 -- scripts/common.sh@363 -- # (( v = 0 ))
00:21:00.672 01:42:13 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:21:00.672 01:42:13 -- scripts/common.sh@364 -- # decimal 3
00:21:00.672 01:42:13 -- scripts/common.sh@352 -- # local d=3
00:21:00.672 01:42:13 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]]
00:21:00.672 01:42:13 -- scripts/common.sh@354 -- # echo 3
00:21:00.672 01:42:13 -- scripts/common.sh@364 -- # ver1[v]=3
00:21:00.672 01:42:13 -- scripts/common.sh@365 -- # decimal 3
00:21:00.672 01:42:13 -- scripts/common.sh@352 -- # local d=3
00:21:00.672 01:42:13 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]]
00:21:00.672 01:42:13 -- scripts/common.sh@354 -- # echo 3
00:21:00.672 01:42:13 -- scripts/common.sh@365 -- # ver2[v]=3
00:21:00.672 01:42:13 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:21:00.672 01:42:13 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:21:00.672 01:42:13 -- scripts/common.sh@363 -- # (( v++ ))
00:21:00.672 01:42:13 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:21:00.672 01:42:13 -- scripts/common.sh@364 -- # decimal 0
00:21:00.672 01:42:13 -- scripts/common.sh@352 -- # local d=0
00:21:00.672 01:42:13 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]]
00:21:00.672 01:42:13 -- scripts/common.sh@354 -- # echo 0
00:21:00.672 01:42:13 -- scripts/common.sh@364 -- # ver1[v]=0
00:21:00.672 01:42:13 -- scripts/common.sh@365 -- # decimal 0
00:21:00.672 01:42:13 -- scripts/common.sh@352 -- # local d=0
00:21:00.672 01:42:13 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]]
00:21:00.672 01:42:13 -- scripts/common.sh@354 -- # echo 0
00:21:00.672 01:42:13 -- scripts/common.sh@365 -- # ver2[v]=0
00:21:00.672 01:42:13 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:21:00.672 01:42:13 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:21:00.672 01:42:13 -- scripts/common.sh@363 -- # (( v++ ))
00:21:00.672 01:42:13 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:21:00.672 01:42:13 -- scripts/common.sh@364 -- # decimal 9
00:21:00.672 01:42:13 -- scripts/common.sh@352 -- # local d=9
00:21:00.672 01:42:13 -- scripts/common.sh@353 -- # [[ 9 =~ ^[0-9]+$ ]]
00:21:00.672 01:42:13 -- scripts/common.sh@354 -- # echo 9
00:21:00.672 01:42:13 -- scripts/common.sh@364 -- # ver1[v]=9
00:21:00.672 01:42:13 -- scripts/common.sh@365 -- # decimal 0
00:21:00.672 01:42:13 -- scripts/common.sh@352 -- # local d=0
00:21:00.672 01:42:13 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]]
00:21:00.672 01:42:13 -- scripts/common.sh@354 -- # echo 0
00:21:00.672 01:42:13 -- scripts/common.sh@365 -- # ver2[v]=0
00:21:00.672 01:42:13 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:21:00.672 01:42:13 -- scripts/common.sh@366 -- # return 0
00:21:00.672 01:42:13 -- fips/fips.sh@95 -- # openssl info -modulesdir
00:21:00.672 01:42:13 -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]]
00:21:00.672 01:42:13 -- fips/fips.sh@100 -- # openssl fipsinstall -help
00:21:00.672 01:42:13 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode'
00:21:00.672 01:42:13 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]]
00:21:00.672 01:42:13 -- fips/fips.sh@104 -- # export callback=build_openssl_config
00:21:00.672 01:42:13 -- fips/fips.sh@104 -- # callback=build_openssl_config
00:21:00.672 01:42:13 -- fips/fips.sh@105 -- # export OPENSSL_FORCE_FIPS_MODE=build_openssl_config
00:21:00.672 01:42:13 -- fips/fips.sh@105 -- # OPENSSL_FORCE_FIPS_MODE=build_openssl_config
00:21:00.672 01:42:13 -- fips/fips.sh@114 -- # build_openssl_config
00:21:00.672 01:42:13 -- fips/fips.sh@37 -- # cat
00:21:00.672 01:42:13 -- fips/fips.sh@57 -- # [[ ! -t 0 ]]
00:21:00.672 01:42:13 -- fips/fips.sh@58 -- # cat -
00:21:00.672 01:42:13 -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf
00:21:00.672 01:42:13 -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf
00:21:00.672 01:42:13 -- fips/fips.sh@117 -- # mapfile -t providers
00:21:00.672 01:42:13 -- fips/fips.sh@117 -- # OPENSSL_CONF=spdk_fips.conf
00:21:00.672 01:42:13 -- fips/fips.sh@117 -- # openssl list -providers
00:21:00.672 01:42:13 -- fips/fips.sh@117 -- # grep name
00:21:00.672 01:42:13 -- fips/fips.sh@121 -- # (( 2 != 2 ))
00:21:00.672 01:42:13 -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]]
00:21:00.672 01:42:13 -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]]
00:21:00.672 01:42:13 -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62
00:21:00.672 01:42:13 -- fips/fips.sh@128 -- # :
00:21:00.672 01:42:13 -- common/autotest_common.sh@640 -- # local es=0
00:21:00.672 01:42:13 -- common/autotest_common.sh@642 -- # valid_exec_arg openssl md5 /dev/fd/62
00:21:00.672 01:42:13 -- common/autotest_common.sh@628 -- # local arg=openssl
00:21:00.672 01:42:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in
00:21:00.672 01:42:13 -- common/autotest_common.sh@632 -- # type -t openssl
00:21:00.672 01:42:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in
00:21:00.672 01:42:13 -- common/autotest_common.sh@634 -- # type -P openssl
00:21:00.672 01:42:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in
00:21:00.672 01:42:13 -- common/autotest_common.sh@634 -- # arg=/usr/bin/openssl
00:21:00.672 01:42:13 -- common/autotest_common.sh@634 -- # [[ -x /usr/bin/openssl ]]
00:21:00.672 01:42:13 -- common/autotest_common.sh@643 -- # openssl md5 /dev/fd/62
00:21:00.672 Error setting digest
00:21:00.672 000250438F7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties ()
00:21:00.672 000250438F7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254:
00:21:00.672 01:42:13 -- common/autotest_common.sh@643 -- # es=1
00:21:00.672 01:42:13 -- common/autotest_common.sh@651 -- # (( es > 128 ))
00:21:00.672 01:42:13 -- common/autotest_common.sh@662 -- # [[ -n '' ]]
00:21:00.672 01:42:13 -- common/autotest_common.sh@667 -- # (( !es == 0 ))
00:21:00.672 01:42:13 -- fips/fips.sh@131 -- # nvmftestinit
00:21:00.672 01:42:13 -- nvmf/common.sh@429 -- # '[' -z tcp ']'
00:21:00.672 01:42:13 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:21:00.672 01:42:13 -- nvmf/common.sh@436 -- # prepare_net_devs
00:21:00.672 01:42:13 -- nvmf/common.sh@398 -- # local -g is_hw=no
00:21:00.672 01:42:13 -- nvmf/common.sh@400 -- # remove_spdk_ns
00:21:00.672 01:42:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:21:00.672 01:42:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:21:00.672 01:42:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:21:00.672 01:42:13 -- nvmf/common.sh@402 -- # [[ phy != virt ]]
00:21:00.672 01:42:13 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs
00:21:00.672 01:42:13 -- nvmf/common.sh@284 -- # xtrace_disable
00:21:00.672 01:42:13 -- common/autotest_common.sh@10 -- # set +x
00:21:02.605 01:42:15 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci
00:21:02.605 01:42:15 -- nvmf/common.sh@290 -- # pci_devs=()
00:21:02.605 01:42:15 -- nvmf/common.sh@290 -- # local -a pci_devs
00:21:02.605 01:42:15 -- nvmf/common.sh@291 -- # pci_net_devs=()
00:21:02.605 01:42:15 -- nvmf/common.sh@291 -- # local -a pci_net_devs
00:21:02.605 01:42:15 -- nvmf/common.sh@292 -- # pci_drivers=()
00:21:02.605 01:42:15 -- nvmf/common.sh@292 -- # local -A pci_drivers
00:21:02.605 01:42:15 -- nvmf/common.sh@294 -- # net_devs=()
00:21:02.605 01:42:15 -- nvmf/common.sh@294 --
# local -ga net_devs 00:21:02.605 01:42:15 -- nvmf/common.sh@295 -- # e810=() 00:21:02.605 01:42:15 -- nvmf/common.sh@295 -- # local -ga e810 00:21:02.605 01:42:15 -- nvmf/common.sh@296 -- # x722=() 00:21:02.605 01:42:15 -- nvmf/common.sh@296 -- # local -ga x722 00:21:02.605 01:42:15 -- nvmf/common.sh@297 -- # mlx=() 00:21:02.605 01:42:15 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:02.605 01:42:15 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:02.605 01:42:15 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:02.605 01:42:15 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:02.605 01:42:15 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:02.605 01:42:15 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:02.605 01:42:15 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:02.605 01:42:15 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:02.605 01:42:15 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:02.605 01:42:15 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:02.605 01:42:15 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:02.605 01:42:15 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:02.605 01:42:15 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:02.605 01:42:15 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:21:02.605 01:42:15 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:21:02.605 01:42:15 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:21:02.605 01:42:15 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:21:02.605 01:42:15 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:02.605 01:42:15 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:02.605 01:42:15 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:02.605 Found 
0000:0a:00.0 (0x8086 - 0x159b) 00:21:02.605 01:42:15 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:02.605 01:42:15 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:02.605 01:42:15 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:02.606 01:42:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:02.606 01:42:15 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:02.606 01:42:15 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:02.606 01:42:15 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:02.606 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:02.606 01:42:15 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:02.606 01:42:15 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:02.606 01:42:15 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:02.606 01:42:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:02.606 01:42:15 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:02.606 01:42:15 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:02.606 01:42:15 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:21:02.606 01:42:15 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:21:02.606 01:42:15 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:02.606 01:42:15 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:02.606 01:42:15 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:02.606 01:42:15 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:02.606 01:42:15 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:02.606 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:02.606 01:42:15 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:02.606 01:42:15 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:02.606 01:42:15 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:02.606 01:42:15 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:02.606 
01:42:15 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:02.606 01:42:15 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:02.606 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:02.606 01:42:15 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:02.606 01:42:15 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:02.606 01:42:15 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:02.606 01:42:15 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:02.606 01:42:15 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:21:02.606 01:42:15 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:21:02.606 01:42:15 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:02.606 01:42:15 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:02.606 01:42:15 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:02.606 01:42:15 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:21:02.606 01:42:15 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:02.606 01:42:15 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:02.606 01:42:15 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:21:02.606 01:42:15 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:02.606 01:42:15 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:02.606 01:42:15 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:21:02.606 01:42:15 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:21:02.606 01:42:15 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:21:02.606 01:42:15 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:02.606 01:42:15 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:02.606 01:42:15 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:02.606 01:42:15 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:21:02.606 01:42:15 -- nvmf/common.sh@259 
-- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:02.606 01:42:15 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:02.606 01:42:15 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:02.606 01:42:15 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:21:02.606 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:02.606 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.117 ms 00:21:02.606 00:21:02.606 --- 10.0.0.2 ping statistics --- 00:21:02.606 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:02.606 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:21:02.606 01:42:15 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:02.606 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:02.606 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:21:02.606 00:21:02.606 --- 10.0.0.1 ping statistics --- 00:21:02.606 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:02.606 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:21:02.606 01:42:15 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:02.606 01:42:15 -- nvmf/common.sh@410 -- # return 0 00:21:02.606 01:42:15 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:02.606 01:42:15 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:02.606 01:42:15 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:02.606 01:42:15 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:02.606 01:42:15 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:02.606 01:42:15 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:02.606 01:42:15 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:02.606 01:42:15 -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:21:02.606 01:42:15 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:02.606 01:42:15 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:02.606 01:42:15 -- 
common/autotest_common.sh@10 -- # set +x 00:21:02.606 01:42:15 -- nvmf/common.sh@469 -- # nvmfpid=3812859 00:21:02.606 01:42:15 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:02.606 01:42:15 -- nvmf/common.sh@470 -- # waitforlisten 3812859 00:21:02.606 01:42:15 -- common/autotest_common.sh@819 -- # '[' -z 3812859 ']' 00:21:02.606 01:42:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:02.606 01:42:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:02.606 01:42:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:02.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:02.606 01:42:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:02.606 01:42:15 -- common/autotest_common.sh@10 -- # set +x 00:21:02.864 [2024-07-23 01:42:15.731355] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:21:02.864 [2024-07-23 01:42:15.731425] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:02.864 EAL: No free 2048 kB hugepages reported on node 1 00:21:02.864 [2024-07-23 01:42:15.798220] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:02.864 [2024-07-23 01:42:15.884083] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:02.864 [2024-07-23 01:42:15.884267] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:02.864 [2024-07-23 01:42:15.884285] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:02.864 [2024-07-23 01:42:15.884298] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:02.864 [2024-07-23 01:42:15.884329] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:03.797 01:42:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:03.798 01:42:16 -- common/autotest_common.sh@852 -- # return 0 00:21:03.798 01:42:16 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:03.798 01:42:16 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:03.798 01:42:16 -- common/autotest_common.sh@10 -- # set +x 00:21:03.798 01:42:16 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:03.798 01:42:16 -- fips/fips.sh@134 -- # trap cleanup EXIT 00:21:03.798 01:42:16 -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:03.798 01:42:16 -- fips/fips.sh@138 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:03.798 01:42:16 -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:03.798 01:42:16 -- fips/fips.sh@140 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:03.798 01:42:16 -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:03.798 01:42:16 -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:03.798 01:42:16 -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:03.798 [2024-07-23 01:42:16.891768] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:04.056 [2024-07-23 01:42:16.907738] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:04.056 [2024-07-23 01:42:16.907977] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:21:04.056 malloc0 00:21:04.056 01:42:16 -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:04.056 01:42:16 -- fips/fips.sh@148 -- # bdevperf_pid=3813120 00:21:04.056 01:42:16 -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:04.056 01:42:16 -- fips/fips.sh@149 -- # waitforlisten 3813120 /var/tmp/bdevperf.sock 00:21:04.056 01:42:16 -- common/autotest_common.sh@819 -- # '[' -z 3813120 ']' 00:21:04.056 01:42:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:04.056 01:42:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:04.056 01:42:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:04.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:04.056 01:42:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:04.056 01:42:16 -- common/autotest_common.sh@10 -- # set +x 00:21:04.056 [2024-07-23 01:42:17.024516] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:21:04.056 [2024-07-23 01:42:17.024598] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3813120 ] 00:21:04.056 EAL: No free 2048 kB hugepages reported on node 1 00:21:04.056 [2024-07-23 01:42:17.082386] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:04.316 [2024-07-23 01:42:17.165220] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:04.882 01:42:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:04.882 01:42:17 -- common/autotest_common.sh@852 -- # return 0 00:21:04.882 01:42:17 -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:05.141 [2024-07-23 01:42:18.192714] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:05.402 TLSTESTn1 00:21:05.402 01:42:18 -- fips/fips.sh@155 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:05.402 Running I/O for 10 seconds... 
00:21:15.384 00:21:15.384 Latency(us) 00:21:15.384 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:15.384 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:15.384 Verification LBA range: start 0x0 length 0x2000 00:21:15.384 TLSTESTn1 : 10.03 2309.60 9.02 0.00 0.00 55339.73 11019.76 58254.22 00:21:15.384 =================================================================================================================== 00:21:15.384 Total : 2309.60 9.02 0.00 0.00 55339.73 11019.76 58254.22 00:21:15.384 0 00:21:15.384 01:42:28 -- fips/fips.sh@1 -- # cleanup 00:21:15.384 01:42:28 -- fips/fips.sh@15 -- # process_shm --id 0 00:21:15.384 01:42:28 -- common/autotest_common.sh@796 -- # type=--id 00:21:15.384 01:42:28 -- common/autotest_common.sh@797 -- # id=0 00:21:15.384 01:42:28 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:21:15.384 01:42:28 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:15.384 01:42:28 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:21:15.384 01:42:28 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:21:15.384 01:42:28 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:21:15.384 01:42:28 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:15.384 nvmf_trace.0 00:21:15.644 01:42:28 -- common/autotest_common.sh@811 -- # return 0 00:21:15.644 01:42:28 -- fips/fips.sh@16 -- # killprocess 3813120 00:21:15.644 01:42:28 -- common/autotest_common.sh@926 -- # '[' -z 3813120 ']' 00:21:15.644 01:42:28 -- common/autotest_common.sh@930 -- # kill -0 3813120 00:21:15.644 01:42:28 -- common/autotest_common.sh@931 -- # uname 00:21:15.644 01:42:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:15.644 01:42:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3813120 00:21:15.644 
01:42:28 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:21:15.644 01:42:28 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:21:15.644 01:42:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3813120' 00:21:15.644 killing process with pid 3813120 00:21:15.644 01:42:28 -- common/autotest_common.sh@945 -- # kill 3813120 00:21:15.644 Received shutdown signal, test time was about 10.000000 seconds 00:21:15.644 00:21:15.644 Latency(us) 00:21:15.644 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:15.644 =================================================================================================================== 00:21:15.644 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:15.644 01:42:28 -- common/autotest_common.sh@950 -- # wait 3813120 00:21:15.903 01:42:28 -- fips/fips.sh@17 -- # nvmftestfini 00:21:15.903 01:42:28 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:15.903 01:42:28 -- nvmf/common.sh@116 -- # sync 00:21:15.903 01:42:28 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:15.903 01:42:28 -- nvmf/common.sh@119 -- # set +e 00:21:15.903 01:42:28 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:15.903 01:42:28 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:15.903 rmmod nvme_tcp 00:21:15.903 rmmod nvme_fabrics 00:21:15.903 rmmod nvme_keyring 00:21:15.903 01:42:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:15.903 01:42:28 -- nvmf/common.sh@123 -- # set -e 00:21:15.903 01:42:28 -- nvmf/common.sh@124 -- # return 0 00:21:15.903 01:42:28 -- nvmf/common.sh@477 -- # '[' -n 3812859 ']' 00:21:15.903 01:42:28 -- nvmf/common.sh@478 -- # killprocess 3812859 00:21:15.903 01:42:28 -- common/autotest_common.sh@926 -- # '[' -z 3812859 ']' 00:21:15.903 01:42:28 -- common/autotest_common.sh@930 -- # kill -0 3812859 00:21:15.903 01:42:28 -- common/autotest_common.sh@931 -- # uname 00:21:15.903 01:42:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 
00:21:15.903 01:42:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3812859 00:21:15.903 01:42:28 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:21:15.903 01:42:28 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:21:15.903 01:42:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3812859' 00:21:15.903 killing process with pid 3812859 00:21:15.903 01:42:28 -- common/autotest_common.sh@945 -- # kill 3812859 00:21:15.903 01:42:28 -- common/autotest_common.sh@950 -- # wait 3812859 00:21:16.161 01:42:29 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:16.161 01:42:29 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:16.161 01:42:29 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:16.161 01:42:29 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:16.162 01:42:29 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:16.162 01:42:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:16.162 01:42:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:16.162 01:42:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:18.070 01:42:31 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:21:18.070 01:42:31 -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:18.070 00:21:18.070 real 0m17.867s 00:21:18.070 user 0m22.106s 00:21:18.070 sys 0m7.096s 00:21:18.070 01:42:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:18.070 01:42:31 -- common/autotest_common.sh@10 -- # set +x 00:21:18.070 ************************************ 00:21:18.070 END TEST nvmf_fips 00:21:18.070 ************************************ 00:21:18.070 01:42:31 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:21:18.070 01:42:31 -- nvmf/nvmf.sh@64 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:21:18.070 01:42:31 -- 
common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:21:18.070 01:42:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:18.070 01:42:31 -- common/autotest_common.sh@10 -- # set +x 00:21:18.070 ************************************ 00:21:18.070 START TEST nvmf_fuzz 00:21:18.070 ************************************ 00:21:18.070 01:42:31 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:21:18.329 * Looking for test storage... 00:21:18.329 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:18.329 01:42:31 -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:18.329 01:42:31 -- nvmf/common.sh@7 -- # uname -s 00:21:18.329 01:42:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:18.329 01:42:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:18.329 01:42:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:18.329 01:42:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:18.329 01:42:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:18.329 01:42:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:18.329 01:42:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:18.329 01:42:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:18.329 01:42:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:18.329 01:42:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:18.329 01:42:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:18.329 01:42:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:18.329 01:42:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:18.329 01:42:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:18.329 01:42:31 -- nvmf/common.sh@21 -- # 
NET_TYPE=phy 00:21:18.329 01:42:31 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:18.329 01:42:31 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:18.329 01:42:31 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:18.329 01:42:31 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:18.329 01:42:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:18.329 01:42:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:18.329 01:42:31 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:18.329 01:42:31 -- paths/export.sh@5 -- # export PATH 00:21:18.329 01:42:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:18.329 01:42:31 -- nvmf/common.sh@46 -- # : 0 00:21:18.329 01:42:31 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:18.329 01:42:31 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:18.329 01:42:31 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:18.329 01:42:31 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:18.329 01:42:31 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:18.329 01:42:31 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:18.329 01:42:31 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:18.329 01:42:31 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:18.329 01:42:31 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:21:18.329 01:42:31 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:18.329 01:42:31 -- nvmf/common.sh@434 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT
00:21:18.329 01:42:31 -- nvmf/common.sh@436 -- # prepare_net_devs
00:21:18.329 01:42:31 -- nvmf/common.sh@398 -- # local -g is_hw=no
00:21:18.329 01:42:31 -- nvmf/common.sh@400 -- # remove_spdk_ns
00:21:18.329 01:42:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:21:18.329 01:42:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:21:18.329 01:42:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:21:18.329 01:42:31 -- nvmf/common.sh@402 -- # [[ phy != virt ]]
00:21:18.329 01:42:31 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs
00:21:18.329 01:42:31 -- nvmf/common.sh@284 -- # xtrace_disable
00:21:18.329 01:42:31 -- common/autotest_common.sh@10 -- # set +x
00:21:20.233 01:42:33 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci
00:21:20.233 01:42:33 -- nvmf/common.sh@290 -- # pci_devs=()
00:21:20.233 01:42:33 -- nvmf/common.sh@290 -- # local -a pci_devs
00:21:20.233 01:42:33 -- nvmf/common.sh@291 -- # pci_net_devs=()
00:21:20.233 01:42:33 -- nvmf/common.sh@291 -- # local -a pci_net_devs
00:21:20.233 01:42:33 -- nvmf/common.sh@292 -- # pci_drivers=()
00:21:20.233 01:42:33 -- nvmf/common.sh@292 -- # local -A pci_drivers
00:21:20.233 01:42:33 -- nvmf/common.sh@294 -- # net_devs=()
00:21:20.233 01:42:33 -- nvmf/common.sh@294 -- # local -ga net_devs
00:21:20.233 01:42:33 -- nvmf/common.sh@295 -- # e810=()
00:21:20.233 01:42:33 -- nvmf/common.sh@295 -- # local -ga e810
00:21:20.233 01:42:33 -- nvmf/common.sh@296 -- # x722=()
00:21:20.233 01:42:33 -- nvmf/common.sh@296 -- # local -ga x722
00:21:20.233 01:42:33 -- nvmf/common.sh@297 -- # mlx=()
00:21:20.233 01:42:33 -- nvmf/common.sh@297 -- # local -ga mlx
00:21:20.233 01:42:33 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:21:20.233 01:42:33 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:21:20.233 01:42:33 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:21:20.233 01:42:33 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:21:20.233 01:42:33 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:21:20.233 01:42:33 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:21:20.233 01:42:33 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:21:20.233 01:42:33 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:21:20.233 01:42:33 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:21:20.233 01:42:33 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:21:20.233 01:42:33 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:21:20.233 01:42:33 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}")
00:21:20.233 01:42:33 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]]
00:21:20.233 01:42:33 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]]
00:21:20.234 01:42:33 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]]
00:21:20.234 01:42:33 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}")
00:21:20.234 01:42:33 -- nvmf/common.sh@334 -- # (( 2 == 0 ))
00:21:20.234 01:42:33 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}"
00:21:20.234 01:42:33 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:21:20.234 Found 0000:0a:00.0 (0x8086 - 0x159b)
00:21:20.234 01:42:33 -- nvmf/common.sh@341 -- # [[ ice == unknown ]]
00:21:20.234 01:42:33 -- nvmf/common.sh@345 -- # [[ ice == unbound ]]
00:21:20.234 01:42:33 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:21:20.234 01:42:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:21:20.234 01:42:33 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]]
00:21:20.234 01:42:33 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}"
00:21:20.234 01:42:33 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:21:20.234 Found 0000:0a:00.1 (0x8086 - 0x159b)
00:21:20.234 01:42:33 -- nvmf/common.sh@341 -- # [[ ice == unknown ]]
00:21:20.234 01:42:33 -- nvmf/common.sh@345 -- # [[ ice == unbound ]]
00:21:20.234 01:42:33 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:21:20.234 01:42:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:21:20.234 01:42:33 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]]
00:21:20.234 01:42:33 -- nvmf/common.sh@365 -- # (( 0 > 0 ))
00:21:20.234 01:42:33 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]]
00:21:20.234 01:42:33 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]]
00:21:20.234 01:42:33 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}"
00:21:20.234 01:42:33 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:21:20.234 01:42:33 -- nvmf/common.sh@383 -- # (( 1 == 0 ))
00:21:20.234 01:42:33 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:21:20.234 01:42:33 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:21:20.234 Found net devices under 0000:0a:00.0: cvl_0_0
00:21:20.234 01:42:33 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}")
00:21:20.234 01:42:33 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}"
00:21:20.234 01:42:33 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:21:20.234 01:42:33 -- nvmf/common.sh@383 -- # (( 1 == 0 ))
00:21:20.234 01:42:33 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:21:20.234 01:42:33 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:21:20.234 Found net devices under 0000:0a:00.1: cvl_0_1
00:21:20.234 01:42:33 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}")
00:21:20.234 01:42:33 -- nvmf/common.sh@392 -- # (( 2 == 0 ))
00:21:20.234 01:42:33 -- nvmf/common.sh@402 -- # is_hw=yes
00:21:20.234 01:42:33 -- nvmf/common.sh@404 -- # [[ yes == yes ]]
00:21:20.234 01:42:33 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]]
00:21:20.234 01:42:33 -- nvmf/common.sh@406 -- # nvmf_tcp_init
00:21:20.234 01:42:33 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1
00:21:20.234 01:42:33 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:21:20.234 01:42:33 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:21:20.234 01:42:33 -- nvmf/common.sh@233 -- # (( 2 > 1 ))
00:21:20.234 01:42:33 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:21:20.234 01:42:33 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:21:20.234 01:42:33 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP=
00:21:20.234 01:42:33 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:21:20.234 01:42:33 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:21:20.234 01:42:33 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0
00:21:20.234 01:42:33 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1
00:21:20.234 01:42:33 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk
00:21:20.234 01:42:33 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:21:20.234 01:42:33 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:21:20.234 01:42:33 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:21:20.234 01:42:33 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up
00:21:20.234 01:42:33 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:21:20.234 01:42:33 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:21:20.494 01:42:33 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:21:20.494 01:42:33 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2
00:21:20.494 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:21:20.494 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.129 ms
00:21:20.494
00:21:20.494 --- 10.0.0.2 ping statistics ---
00:21:20.494 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:20.494 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms
00:21:20.494 01:42:33 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:21:20.494 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:21:20.494 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms
00:21:20.494
00:21:20.494 --- 10.0.0.1 ping statistics ---
00:21:20.494 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:20.494 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms
00:21:20.494 01:42:33 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:21:20.494 01:42:33 -- nvmf/common.sh@410 -- # return 0
00:21:20.494 01:42:33 -- nvmf/common.sh@438 -- # '[' '' == iso ']'
00:21:20.494 01:42:33 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:21:20.494 01:42:33 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:21:20.494 01:42:33 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:21:20.494 01:42:33 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:21:20.494 01:42:33 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:21:20.494 01:42:33 -- nvmf/common.sh@462 -- # modprobe nvme-tcp
00:21:20.494 01:42:33 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=3816432
00:21:20.494 01:42:33 -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1
00:21:20.494 01:42:33 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT
00:21:20.494 01:42:33 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 3816432
00:21:20.494 01:42:33 -- common/autotest_common.sh@819 -- # '[' -z 3816432 ']'
00:21:20.494 01:42:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:21:20.494 01:42:33 -- common/autotest_common.sh@824 -- # local max_retries=100
00:21:20.494 01:42:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:21:20.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:21:20.494 01:42:33 -- common/autotest_common.sh@828 -- # xtrace_disable
00:21:20.494 01:42:33 -- common/autotest_common.sh@10 -- # set +x
00:21:20.752 01:42:33 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:21:20.752 01:42:33 -- common/autotest_common.sh@852 -- # return 0
00:21:20.752 01:42:33 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:21:20.752 01:42:33 -- common/autotest_common.sh@551 -- # xtrace_disable
00:21:20.752 01:42:33 -- common/autotest_common.sh@10 -- # set +x
00:21:20.752 01:42:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:21:20.752 01:42:33 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512
00:21:20.752 01:42:33 -- common/autotest_common.sh@551 -- # xtrace_disable
00:21:20.752 01:42:33 -- common/autotest_common.sh@10 -- # set +x
00:21:20.752 Malloc0
00:21:20.752 01:42:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:21:20.752 01:42:33 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:21:20.752 01:42:33 -- common/autotest_common.sh@551 -- # xtrace_disable
00:21:20.752 01:42:33 -- common/autotest_common.sh@10 -- # set +x
00:21:20.752 01:42:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:21:20.752 01:42:33 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:21:20.752 01:42:33 -- common/autotest_common.sh@551 -- # xtrace_disable
00:21:20.752 01:42:33 -- common/autotest_common.sh@10 -- # set +x
00:21:20.752 01:42:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:21:20.752 01:42:33 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:21:20.752 01:42:33 -- common/autotest_common.sh@551 -- # xtrace_disable
00:21:20.752 01:42:33 -- common/autotest_common.sh@10 -- # set +x
00:21:20.752 01:42:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:21:20.752 01:42:33 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420'
00:21:20.752 01:42:33 -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a
00:21:52.887 Fuzzing completed. Shutting down the fuzz application
00:21:52.887
00:21:52.887 Dumping successful admin opcodes:
00:21:52.887 8, 9, 10, 24,
00:21:52.887 Dumping successful io opcodes:
00:21:52.887 0, 9,
00:21:52.887 NS: 0x200003aeff00 I/O qp, Total commands completed: 442980, total successful commands: 2579, random_seed: 3107541824
00:21:52.887 NS: 0x200003aeff00 admin qp, Total commands completed: 54944, total successful commands: 439, random_seed: 338951552
00:21:52.887 01:43:04 -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a
00:21:52.887 Fuzzing completed. Shutting down the fuzz application
00:21:52.887
00:21:52.887 Dumping successful admin opcodes:
00:21:52.887 24,
00:21:52.887 Dumping successful io opcodes:
00:21:52.887
00:21:52.887 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 2651536769
00:21:52.887 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 2651662821
00:21:52.887 01:43:05 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:21:52.887 01:43:05 -- common/autotest_common.sh@551 -- # xtrace_disable
00:21:52.887 01:43:05 -- common/autotest_common.sh@10 -- # set +x
00:21:52.887 01:43:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:21:52.887 01:43:05 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT
00:21:52.887 01:43:05 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini
00:21:52.887 01:43:05 -- nvmf/common.sh@476 -- # nvmfcleanup
00:21:52.887 01:43:05 -- nvmf/common.sh@116 -- # sync
00:21:52.887 01:43:05 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:21:52.887 01:43:05 -- nvmf/common.sh@119 -- # set +e
00:21:52.887 01:43:05 -- nvmf/common.sh@120 -- # for i in {1..20}
00:21:52.887 01:43:05 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:21:52.887 rmmod nvme_tcp
00:21:52.887 rmmod nvme_fabrics
00:21:52.887 rmmod nvme_keyring
00:21:52.887 01:43:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:21:52.887 01:43:05 -- nvmf/common.sh@123 -- # set -e
00:21:52.887 01:43:05 -- nvmf/common.sh@124 -- # return 0
00:21:52.887 01:43:05 -- nvmf/common.sh@477 -- # '[' -n 3816432 ']'
00:21:52.887 01:43:05 -- nvmf/common.sh@478 -- # killprocess 3816432
00:21:52.887 01:43:05 -- common/autotest_common.sh@926 -- # '[' -z 3816432 ']'
00:21:52.887 01:43:05 -- common/autotest_common.sh@930 -- # kill -0 3816432
00:21:52.887 01:43:05 -- common/autotest_common.sh@931 -- # uname
00:21:52.887 01:43:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:21:52.887 01:43:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3816432
00:21:52.887 01:43:05 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:21:52.887 01:43:05 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:21:52.887 01:43:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3816432'
00:21:52.887 killing process with pid 3816432
00:21:52.887 01:43:05 -- common/autotest_common.sh@945 -- # kill 3816432
00:21:52.887 01:43:05 -- common/autotest_common.sh@950 -- # wait 3816432
00:21:52.887 01:43:05 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:21:52.887 01:43:05 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:21:52.887 01:43:05 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:21:52.887 01:43:05 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:21:52.887 01:43:05 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:21:52.887 01:43:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:21:52.887 01:43:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:21:52.887 01:43:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:21:54.795 01:43:07 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1
00:21:55.054 01:43:07 -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt
00:21:55.054
00:21:55.054 real 0m36.741s
00:21:55.054 user 0m50.039s
00:21:55.054 sys 0m15.560s
00:21:55.054 01:43:07 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:21:55.054 01:43:07 -- common/autotest_common.sh@10 -- # set +x
00:21:55.054 ************************************
00:21:55.054 END TEST nvmf_fuzz
00:21:55.054 ************************************
00:21:55.054 01:43:07 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp
00:21:55.054 01:43:07 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']'
00:21:55.054 01:43:07 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:21:55.054 01:43:07 -- common/autotest_common.sh@10 -- # set +x
00:21:55.054 ************************************
00:21:55.054 START TEST nvmf_multiconnection
00:21:55.054 ************************************
00:21:55.054 01:43:07 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp
00:21:55.054 * Looking for test storage...
00:21:55.054 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:21:55.054 01:43:07 -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:21:55.054 01:43:07 -- nvmf/common.sh@7 -- # uname -s
00:21:55.054 01:43:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:21:55.054 01:43:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:21:55.054 01:43:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:21:55.054 01:43:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:21:55.054 01:43:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:21:55.054 01:43:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:21:55.054 01:43:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:21:55.054 01:43:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:21:55.054 01:43:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:21:55.054 01:43:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:21:55.054 01:43:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:21:55.054 01:43:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:21:55.054 01:43:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:21:55.054 01:43:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:21:55.054 01:43:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:21:55.054 01:43:07 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:21:55.054 01:43:07 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:21:55.054 01:43:07 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:21:55.054 01:43:07 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:21:55.054 01:43:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:55.054 01:43:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:55.054 01:43:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:55.054 01:43:07 -- paths/export.sh@5 -- # export PATH
00:21:55.054 01:43:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:55.054 01:43:07 -- nvmf/common.sh@46 -- # : 0
00:21:55.054 01:43:07 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:21:55.054 01:43:07 -- nvmf/common.sh@48 -- # build_nvmf_app_args
00:21:55.054 01:43:07 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:21:55.054 01:43:07 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:21:55.054 01:43:07 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:21:55.054 01:43:07 -- nvmf/common.sh@32 -- # '[' -n '' ']'
00:21:55.054 01:43:07 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:21:55.054 01:43:07 -- nvmf/common.sh@50 -- # have_pci_nics=0
00:21:55.054 01:43:07 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64
00:21:55.054 01:43:07 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:21:55.054 01:43:07 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11
00:21:55.054 01:43:07 -- target/multiconnection.sh@16 -- # nvmftestinit
00:21:55.054 01:43:07 -- nvmf/common.sh@429 -- # '[' -z tcp ']'
00:21:55.054 01:43:07 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:21:55.054 01:43:07 -- nvmf/common.sh@436 -- # prepare_net_devs
00:21:55.054 01:43:07 -- nvmf/common.sh@398 -- # local -g is_hw=no
00:21:55.054 01:43:07 -- nvmf/common.sh@400 -- # remove_spdk_ns
00:21:55.054 01:43:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:21:55.054 01:43:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:21:55.054 01:43:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:21:55.054 01:43:07 -- nvmf/common.sh@402 -- # [[ phy != virt ]]
00:21:55.054 01:43:07 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs
00:21:55.054 01:43:07 -- nvmf/common.sh@284 -- # xtrace_disable
00:21:55.054 01:43:07 -- common/autotest_common.sh@10 -- # set +x
00:21:56.960 01:43:09 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci
00:21:56.960 01:43:09 -- nvmf/common.sh@290 -- # pci_devs=()
00:21:56.960 01:43:09 -- nvmf/common.sh@290 -- # local -a pci_devs
00:21:56.960 01:43:09 -- nvmf/common.sh@291 -- # pci_net_devs=()
00:21:56.960 01:43:09 -- nvmf/common.sh@291 -- # local -a pci_net_devs
00:21:56.960 01:43:09 -- nvmf/common.sh@292 -- # pci_drivers=()
00:21:56.960 01:43:09 -- nvmf/common.sh@292 -- # local -A pci_drivers
00:21:56.960 01:43:09 -- nvmf/common.sh@294 -- # net_devs=()
00:21:56.960 01:43:09 -- nvmf/common.sh@294 -- # local -ga net_devs
00:21:56.960 01:43:09 -- nvmf/common.sh@295 -- # e810=()
00:21:56.960 01:43:09 -- nvmf/common.sh@295 -- # local -ga e810
00:21:56.960 01:43:09 -- nvmf/common.sh@296 -- # x722=()
00:21:56.960 01:43:09 -- nvmf/common.sh@296 -- # local -ga x722
00:21:56.960 01:43:09 -- nvmf/common.sh@297 -- # mlx=()
00:21:56.960 01:43:09 -- nvmf/common.sh@297 -- # local -ga mlx
00:21:56.960 01:43:09 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:21:56.960 01:43:09 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:21:56.960 01:43:09 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:21:56.960 01:43:09 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:21:56.960 01:43:09 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:21:56.960 01:43:09 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:21:56.960 01:43:09 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:21:56.960 01:43:09 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:21:56.960 01:43:09 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:21:56.960 01:43:09 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:21:56.960 01:43:09 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:21:56.960 01:43:09 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}")
00:21:56.960 01:43:09 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]]
00:21:56.960 01:43:09 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]]
00:21:56.960 01:43:09 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]]
00:21:56.960 01:43:09 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}")
00:21:56.960 01:43:09 -- nvmf/common.sh@334 -- # (( 2 == 0 ))
00:21:56.960 01:43:09 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}"
00:21:56.960 01:43:09 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:21:56.960 Found 0000:0a:00.0 (0x8086 - 0x159b)
00:21:56.960 01:43:09 -- nvmf/common.sh@341 -- # [[ ice == unknown ]]
00:21:56.960 01:43:09 -- nvmf/common.sh@345 -- # [[ ice == unbound ]]
00:21:56.960 01:43:09 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:21:56.960 01:43:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:21:56.960 01:43:09 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]]
00:21:56.960 01:43:09 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}"
00:21:56.960 01:43:09 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:21:56.960 Found 0000:0a:00.1 (0x8086 - 0x159b)
00:21:56.960 01:43:09 -- nvmf/common.sh@341 -- # [[ ice == unknown ]]
00:21:56.960 01:43:09 -- nvmf/common.sh@345 -- # [[ ice == unbound ]]
00:21:56.960 01:43:09 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:21:56.961 01:43:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:21:56.961 01:43:09 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]]
00:21:56.961 01:43:09 -- nvmf/common.sh@365 -- # (( 0 > 0 ))
00:21:56.961 01:43:09 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]]
00:21:56.961 01:43:09 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]]
00:21:56.961 01:43:09 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}"
00:21:56.961 01:43:09 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:21:56.961 01:43:09 -- nvmf/common.sh@383 -- # (( 1 == 0 ))
00:21:56.961 01:43:09 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:21:56.961 01:43:09 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:21:56.961 Found net devices under 0000:0a:00.0: cvl_0_0
00:21:56.961 01:43:09 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}")
00:21:56.961 01:43:09 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}"
00:21:56.961 01:43:09 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:21:56.961 01:43:09 -- nvmf/common.sh@383 -- # (( 1 == 0 ))
00:21:56.961 01:43:09 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:21:56.961 01:43:09 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:21:56.961 Found net devices under 0000:0a:00.1: cvl_0_1
00:21:56.961 01:43:09 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}")
00:21:56.961 01:43:09 -- nvmf/common.sh@392 -- # (( 2 == 0 ))
00:21:56.961 01:43:09 -- nvmf/common.sh@402 -- # is_hw=yes
00:21:56.961 01:43:09 -- nvmf/common.sh@404 -- # [[ yes == yes ]]
00:21:56.961 01:43:09 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]]
00:21:56.961 01:43:09 -- nvmf/common.sh@406 -- # nvmf_tcp_init
00:21:56.961 01:43:09 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1
00:21:56.961 01:43:09 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:21:56.961 01:43:09 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:21:56.961 01:43:09 -- nvmf/common.sh@233 -- # (( 2 > 1 ))
00:21:56.961 01:43:09 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:21:56.961 01:43:09 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:21:56.961 01:43:09 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP=
00:21:56.961 01:43:09 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:21:56.961 01:43:09 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:21:56.961 01:43:09 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0
00:21:56.961 01:43:09 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1
00:21:56.961 01:43:09 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk
00:21:56.961 01:43:09 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:21:56.961 01:43:09 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:21:56.961 01:43:09 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:21:56.961 01:43:09 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up
00:21:56.961 01:43:09 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:21:56.961 01:43:09 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:21:56.961 01:43:09 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:21:56.961 01:43:09 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2
00:21:56.961 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:21:56.961 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.252 ms
00:21:56.961
00:21:56.961 --- 10.0.0.2 ping statistics ---
00:21:56.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:56.961 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms
00:21:56.961 01:43:09 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:21:56.961 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:21:56.961 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms
00:21:56.961
00:21:56.961 --- 10.0.0.1 ping statistics ---
00:21:56.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:56.961 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms
00:21:56.961 01:43:09 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:21:56.961 01:43:09 -- nvmf/common.sh@410 -- # return 0
00:21:56.961 01:43:09 -- nvmf/common.sh@438 -- # '[' '' == iso ']'
00:21:56.961 01:43:09 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:21:56.961 01:43:09 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:21:56.961 01:43:09 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:21:56.961 01:43:09 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:21:56.961 01:43:09 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:21:56.961 01:43:09 -- nvmf/common.sh@462 -- # modprobe nvme-tcp
00:21:56.961 01:43:10 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF
00:21:56.961 01:43:10 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:21:56.961 01:43:10 -- common/autotest_common.sh@712 -- # xtrace_disable
00:21:56.961 01:43:10 -- common/autotest_common.sh@10 -- # set +x
00:21:56.961 01:43:10 -- nvmf/common.sh@469 -- # nvmfpid=3822166
00:21:56.961 01:43:10 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:21:56.961 01:43:10 -- nvmf/common.sh@470 -- # waitforlisten 3822166
00:21:56.961 01:43:10 -- common/autotest_common.sh@819 -- # '[' -z 3822166 ']'
00:21:56.961 01:43:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:21:56.961 01:43:10 -- common/autotest_common.sh@824 -- # local max_retries=100
00:21:56.961 01:43:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:21:56.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:21:56.961 01:43:10 -- common/autotest_common.sh@828 -- # xtrace_disable
00:21:56.961 01:43:10 -- common/autotest_common.sh@10 -- # set +x
00:21:57.221 [2024-07-23 01:43:10.067341] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
00:21:57.221 [2024-07-23 01:43:10.067409] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:21:57.221 EAL: No free 2048 kB hugepages reported on node 1
00:21:57.221 [2024-07-23 01:43:10.135675] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
00:21:57.221 [2024-07-23 01:43:10.229972] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:21:57.221 [2024-07-23 01:43:10.230131] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:21:57.221 [2024-07-23 01:43:10.230149] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:21:57.221 [2024-07-23 01:43:10.230162] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:21:57.221 [2024-07-23 01:43:10.230222] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:21:57.221 [2024-07-23 01:43:10.230253] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:21:57.221 [2024-07-23 01:43:10.230283] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:21:57.221 [2024-07-23 01:43:10.230286] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:21:58.155 01:43:10 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:21:58.155 01:43:10 -- common/autotest_common.sh@852 -- # return 0
00:21:58.155 01:43:10 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:21:58.155 01:43:10 -- common/autotest_common.sh@718 -- # xtrace_disable
00:21:58.155 01:43:10 -- common/autotest_common.sh@10 -- # set +x
00:21:58.155 01:43:11 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:21:58.155 01:43:11 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:21:58.155 01:43:11 -- common/autotest_common.sh@551 -- # xtrace_disable
00:21:58.155 01:43:11 -- common/autotest_common.sh@10 -- # set +x
00:21:58.155 [2024-07-23 01:43:11.025130] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:21:58.155 01:43:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:21:58.155 01:43:11 -- target/multiconnection.sh@21 -- # seq 1 11
00:21:58.155 01:43:11 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:21:58.155 01:43:11 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1
00:21:58.155 01:43:11 -- common/autotest_common.sh@551 -- # xtrace_disable
00:21:58.155 01:43:11 -- common/autotest_common.sh@10 -- # set +x
00:21:58.155 Malloc1
00:21:58.155 01:43:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:21:58.155 01:43:11 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1
00:21:58.155 01:43:11 -- common/autotest_common.sh@551 -- # xtrace_disable
00:21:58.155 01:43:11 -- common/autotest_common.sh@10 -- # set +x
00:21:58.155 01:43:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:21:58.155 01:43:11 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:21:58.155 01:43:11 -- common/autotest_common.sh@551 -- # xtrace_disable
00:21:58.155 01:43:11 -- common/autotest_common.sh@10 -- # set +x
00:21:58.155 01:43:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:21:58.155 01:43:11 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:21:58.155 01:43:11 -- common/autotest_common.sh@551 -- # xtrace_disable
00:21:58.155 01:43:11 -- common/autotest_common.sh@10 -- # set +x
00:21:58.155 [2024-07-23 01:43:11.082155] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:21:58.155 01:43:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:21:58.155 01:43:11 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:21:58.155 01:43:11 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2
00:21:58.155 01:43:11 -- common/autotest_common.sh@551 -- # xtrace_disable
00:21:58.155 01:43:11 -- common/autotest_common.sh@10 -- # set +x
00:21:58.155 Malloc2
00:21:58.155 01:43:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:21:58.155 01:43:11 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
00:21:58.155 01:43:11 -- common/autotest_common.sh@551 -- # xtrace_disable
00:21:58.155 01:43:11 -- common/autotest_common.sh@10 -- # set +x
00:21:58.155 01:43:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:21:58.155 01:43:11 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2
00:21:58.155 01:43:11 -- common/autotest_common.sh@551 -- # xtrace_disable
00:21:58.155 01:43:11 -- common/autotest_common.sh@10 -- # set +x 00:21:58.155 01:43:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.155 01:43:11 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:58.155 01:43:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.155 01:43:11 -- common/autotest_common.sh@10 -- # set +x 00:21:58.155 01:43:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.155 01:43:11 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:58.155 01:43:11 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:21:58.155 01:43:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.155 01:43:11 -- common/autotest_common.sh@10 -- # set +x 00:21:58.155 Malloc3 00:21:58.155 01:43:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.155 01:43:11 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:21:58.155 01:43:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.155 01:43:11 -- common/autotest_common.sh@10 -- # set +x 00:21:58.155 01:43:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.155 01:43:11 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:21:58.155 01:43:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.155 01:43:11 -- common/autotest_common.sh@10 -- # set +x 00:21:58.155 01:43:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.155 01:43:11 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:21:58.155 01:43:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.155 01:43:11 -- common/autotest_common.sh@10 -- # set +x 00:21:58.155 01:43:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.155 01:43:11 -- 
target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:58.155 01:43:11 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:21:58.155 01:43:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.155 01:43:11 -- common/autotest_common.sh@10 -- # set +x 00:21:58.155 Malloc4 00:21:58.155 01:43:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.155 01:43:11 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:21:58.155 01:43:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.155 01:43:11 -- common/autotest_common.sh@10 -- # set +x 00:21:58.155 01:43:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.155 01:43:11 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:21:58.155 01:43:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.155 01:43:11 -- common/autotest_common.sh@10 -- # set +x 00:21:58.155 01:43:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.155 01:43:11 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:21:58.155 01:43:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.155 01:43:11 -- common/autotest_common.sh@10 -- # set +x 00:21:58.155 01:43:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.155 01:43:11 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:58.155 01:43:11 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:21:58.155 01:43:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.155 01:43:11 -- common/autotest_common.sh@10 -- # set +x 00:21:58.415 Malloc5 00:21:58.415 01:43:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.415 01:43:11 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s 
SPDK5 00:21:58.415 01:43:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.415 01:43:11 -- common/autotest_common.sh@10 -- # set +x 00:21:58.415 01:43:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.415 01:43:11 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:21:58.415 01:43:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.415 01:43:11 -- common/autotest_common.sh@10 -- # set +x 00:21:58.415 01:43:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.415 01:43:11 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:21:58.415 01:43:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.415 01:43:11 -- common/autotest_common.sh@10 -- # set +x 00:21:58.415 01:43:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.415 01:43:11 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:58.415 01:43:11 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:21:58.415 01:43:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.415 01:43:11 -- common/autotest_common.sh@10 -- # set +x 00:21:58.415 Malloc6 00:21:58.415 01:43:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.415 01:43:11 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:21:58.415 01:43:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.415 01:43:11 -- common/autotest_common.sh@10 -- # set +x 00:21:58.415 01:43:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.415 01:43:11 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:21:58.415 01:43:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.415 01:43:11 -- common/autotest_common.sh@10 -- # set +x 00:21:58.415 01:43:11 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.415 01:43:11 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:21:58.415 01:43:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.415 01:43:11 -- common/autotest_common.sh@10 -- # set +x 00:21:58.415 01:43:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.415 01:43:11 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:58.415 01:43:11 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:21:58.415 01:43:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.415 01:43:11 -- common/autotest_common.sh@10 -- # set +x 00:21:58.415 Malloc7 00:21:58.415 01:43:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.415 01:43:11 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:21:58.415 01:43:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.415 01:43:11 -- common/autotest_common.sh@10 -- # set +x 00:21:58.415 01:43:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.415 01:43:11 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:21:58.416 01:43:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.416 01:43:11 -- common/autotest_common.sh@10 -- # set +x 00:21:58.416 01:43:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.416 01:43:11 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:21:58.416 01:43:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.416 01:43:11 -- common/autotest_common.sh@10 -- # set +x 00:21:58.416 01:43:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.416 01:43:11 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:58.416 01:43:11 -- 
target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:21:58.416 01:43:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.416 01:43:11 -- common/autotest_common.sh@10 -- # set +x 00:21:58.416 Malloc8 00:21:58.416 01:43:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.416 01:43:11 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:21:58.416 01:43:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.416 01:43:11 -- common/autotest_common.sh@10 -- # set +x 00:21:58.416 01:43:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.416 01:43:11 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:21:58.416 01:43:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.416 01:43:11 -- common/autotest_common.sh@10 -- # set +x 00:21:58.416 01:43:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.416 01:43:11 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:21:58.416 01:43:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.416 01:43:11 -- common/autotest_common.sh@10 -- # set +x 00:21:58.416 01:43:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.416 01:43:11 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:58.416 01:43:11 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:21:58.416 01:43:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.416 01:43:11 -- common/autotest_common.sh@10 -- # set +x 00:21:58.416 Malloc9 00:21:58.416 01:43:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.416 01:43:11 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:21:58.416 01:43:11 -- common/autotest_common.sh@551 -- # xtrace_disable 
00:21:58.416 01:43:11 -- common/autotest_common.sh@10 -- # set +x 00:21:58.416 01:43:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.416 01:43:11 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:21:58.416 01:43:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.416 01:43:11 -- common/autotest_common.sh@10 -- # set +x 00:21:58.416 01:43:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.416 01:43:11 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:21:58.416 01:43:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.416 01:43:11 -- common/autotest_common.sh@10 -- # set +x 00:21:58.416 01:43:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.416 01:43:11 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:58.416 01:43:11 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:21:58.416 01:43:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.416 01:43:11 -- common/autotest_common.sh@10 -- # set +x 00:21:58.416 Malloc10 00:21:58.416 01:43:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.416 01:43:11 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:21:58.416 01:43:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.416 01:43:11 -- common/autotest_common.sh@10 -- # set +x 00:21:58.416 01:43:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.416 01:43:11 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:21:58.416 01:43:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.416 01:43:11 -- common/autotest_common.sh@10 -- # set +x 00:21:58.675 01:43:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.675 01:43:11 -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:21:58.675 01:43:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.675 01:43:11 -- common/autotest_common.sh@10 -- # set +x 00:21:58.675 01:43:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.675 01:43:11 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:58.675 01:43:11 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:21:58.675 01:43:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.675 01:43:11 -- common/autotest_common.sh@10 -- # set +x 00:21:58.675 Malloc11 00:21:58.675 01:43:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.675 01:43:11 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:21:58.675 01:43:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.675 01:43:11 -- common/autotest_common.sh@10 -- # set +x 00:21:58.675 01:43:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.675 01:43:11 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:21:58.675 01:43:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.675 01:43:11 -- common/autotest_common.sh@10 -- # set +x 00:21:58.675 01:43:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.675 01:43:11 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:21:58.675 01:43:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.675 01:43:11 -- common/autotest_common.sh@10 -- # set +x 00:21:58.675 01:43:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.675 01:43:11 -- target/multiconnection.sh@28 -- # seq 1 11 00:21:58.675 01:43:11 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:58.675 01:43:11 
-- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:21:59.241 01:43:12 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:21:59.241 01:43:12 -- common/autotest_common.sh@1177 -- # local i=0 00:21:59.241 01:43:12 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:21:59.241 01:43:12 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:21:59.241 01:43:12 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:01.777 01:43:14 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:01.777 01:43:14 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:01.777 01:43:14 -- common/autotest_common.sh@1186 -- # grep -c SPDK1 00:22:01.777 01:43:14 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:01.777 01:43:14 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:01.777 01:43:14 -- common/autotest_common.sh@1187 -- # return 0 00:22:01.777 01:43:14 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:01.777 01:43:14 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:22:02.035 01:43:14 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:22:02.035 01:43:14 -- common/autotest_common.sh@1177 -- # local i=0 00:22:02.035 01:43:14 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:02.036 01:43:14 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:02.036 01:43:14 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:03.940 01:43:16 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:03.940 01:43:16 -- common/autotest_common.sh@1186 -- # lsblk 
-l -o NAME,SERIAL 00:22:03.940 01:43:16 -- common/autotest_common.sh@1186 -- # grep -c SPDK2 00:22:03.940 01:43:16 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:03.940 01:43:16 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:03.940 01:43:16 -- common/autotest_common.sh@1187 -- # return 0 00:22:03.940 01:43:16 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:03.940 01:43:16 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:22:04.879 01:43:17 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:22:04.879 01:43:17 -- common/autotest_common.sh@1177 -- # local i=0 00:22:04.879 01:43:17 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:04.879 01:43:17 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:04.879 01:43:17 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:06.783 01:43:19 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:06.783 01:43:19 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:06.783 01:43:19 -- common/autotest_common.sh@1186 -- # grep -c SPDK3 00:22:06.783 01:43:19 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:06.783 01:43:19 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:06.783 01:43:19 -- common/autotest_common.sh@1187 -- # return 0 00:22:06.783 01:43:19 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:06.783 01:43:19 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:22:07.351 01:43:20 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 
00:22:07.351 01:43:20 -- common/autotest_common.sh@1177 -- # local i=0 00:22:07.351 01:43:20 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:07.351 01:43:20 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:07.351 01:43:20 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:09.882 01:43:22 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:09.882 01:43:22 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:09.882 01:43:22 -- common/autotest_common.sh@1186 -- # grep -c SPDK4 00:22:09.882 01:43:22 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:09.882 01:43:22 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:09.882 01:43:22 -- common/autotest_common.sh@1187 -- # return 0 00:22:09.883 01:43:22 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:09.883 01:43:22 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:22:10.141 01:43:23 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:22:10.141 01:43:23 -- common/autotest_common.sh@1177 -- # local i=0 00:22:10.141 01:43:23 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:10.141 01:43:23 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:10.141 01:43:23 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:12.103 01:43:25 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:12.103 01:43:25 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:12.103 01:43:25 -- common/autotest_common.sh@1186 -- # grep -c SPDK5 00:22:12.103 01:43:25 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:12.103 01:43:25 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:12.103 01:43:25 -- 
common/autotest_common.sh@1187 -- # return 0 00:22:12.103 01:43:25 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:12.103 01:43:25 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:22:13.040 01:43:25 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:22:13.040 01:43:25 -- common/autotest_common.sh@1177 -- # local i=0 00:22:13.040 01:43:25 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:13.040 01:43:25 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:13.040 01:43:25 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:14.944 01:43:27 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:14.944 01:43:27 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:14.944 01:43:27 -- common/autotest_common.sh@1186 -- # grep -c SPDK6 00:22:14.944 01:43:27 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:14.944 01:43:27 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:14.944 01:43:27 -- common/autotest_common.sh@1187 -- # return 0 00:22:14.944 01:43:27 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:14.944 01:43:27 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:22:15.511 01:43:28 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:22:15.511 01:43:28 -- common/autotest_common.sh@1177 -- # local i=0 00:22:15.511 01:43:28 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:15.511 01:43:28 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:15.511 01:43:28 -- common/autotest_common.sh@1184 
-- # sleep 2 00:22:18.041 01:43:30 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:18.041 01:43:30 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:18.041 01:43:30 -- common/autotest_common.sh@1186 -- # grep -c SPDK7 00:22:18.041 01:43:30 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:18.041 01:43:30 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:18.041 01:43:30 -- common/autotest_common.sh@1187 -- # return 0 00:22:18.041 01:43:30 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:18.041 01:43:30 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:22:18.609 01:43:31 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:22:18.609 01:43:31 -- common/autotest_common.sh@1177 -- # local i=0 00:22:18.609 01:43:31 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:18.609 01:43:31 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:18.609 01:43:31 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:20.514 01:43:33 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:20.514 01:43:33 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:20.514 01:43:33 -- common/autotest_common.sh@1186 -- # grep -c SPDK8 00:22:20.514 01:43:33 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:20.514 01:43:33 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:20.514 01:43:33 -- common/autotest_common.sh@1187 -- # return 0 00:22:20.514 01:43:33 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:20.514 01:43:33 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:22:21.452 01:43:34 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:22:21.452 01:43:34 -- common/autotest_common.sh@1177 -- # local i=0 00:22:21.452 01:43:34 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:21.452 01:43:34 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:21.452 01:43:34 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:23.356 01:43:36 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:23.356 01:43:36 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:23.356 01:43:36 -- common/autotest_common.sh@1186 -- # grep -c SPDK9 00:22:23.356 01:43:36 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:23.356 01:43:36 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:23.356 01:43:36 -- common/autotest_common.sh@1187 -- # return 0 00:22:23.356 01:43:36 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:23.356 01:43:36 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:22:24.293 01:43:37 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:22:24.293 01:43:37 -- common/autotest_common.sh@1177 -- # local i=0 00:22:24.293 01:43:37 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:24.293 01:43:37 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:24.293 01:43:37 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:26.198 01:43:39 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:26.198 01:43:39 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:26.198 01:43:39 -- common/autotest_common.sh@1186 -- # grep -c SPDK10 00:22:26.198 01:43:39 -- 
common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:26.198 01:43:39 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:26.198 01:43:39 -- common/autotest_common.sh@1187 -- # return 0 00:22:26.198 01:43:39 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:26.198 01:43:39 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:22:27.133 01:43:40 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:22:27.133 01:43:40 -- common/autotest_common.sh@1177 -- # local i=0 00:22:27.133 01:43:40 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:27.133 01:43:40 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:27.133 01:43:40 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:29.054 01:43:42 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:29.054 01:43:42 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:29.054 01:43:42 -- common/autotest_common.sh@1186 -- # grep -c SPDK11 00:22:29.054 01:43:42 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:29.054 01:43:42 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:29.054 01:43:42 -- common/autotest_common.sh@1187 -- # return 0 00:22:29.054 01:43:42 -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:22:29.054 [global] 00:22:29.054 thread=1 00:22:29.054 invalidate=1 00:22:29.054 rw=read 00:22:29.054 time_based=1 00:22:29.054 runtime=10 00:22:29.054 ioengine=libaio 00:22:29.054 direct=1 00:22:29.054 bs=262144 00:22:29.054 iodepth=64 00:22:29.054 norandommap=1 00:22:29.054 numjobs=1 00:22:29.054 00:22:29.311 [job0] 00:22:29.311 filename=/dev/nvme0n1 00:22:29.311 [job1] 
00:22:29.312 filename=/dev/nvme10n1 00:22:29.312 [job2] 00:22:29.312 filename=/dev/nvme1n1 00:22:29.312 [job3] 00:22:29.312 filename=/dev/nvme2n1 00:22:29.312 [job4] 00:22:29.312 filename=/dev/nvme3n1 00:22:29.312 [job5] 00:22:29.312 filename=/dev/nvme4n1 00:22:29.312 [job6] 00:22:29.312 filename=/dev/nvme5n1 00:22:29.312 [job7] 00:22:29.312 filename=/dev/nvme6n1 00:22:29.312 [job8] 00:22:29.312 filename=/dev/nvme7n1 00:22:29.312 [job9] 00:22:29.312 filename=/dev/nvme8n1 00:22:29.312 [job10] 00:22:29.312 filename=/dev/nvme9n1 00:22:29.312 Could not set queue depth (nvme0n1) 00:22:29.312 Could not set queue depth (nvme10n1) 00:22:29.312 Could not set queue depth (nvme1n1) 00:22:29.312 Could not set queue depth (nvme2n1) 00:22:29.312 Could not set queue depth (nvme3n1) 00:22:29.312 Could not set queue depth (nvme4n1) 00:22:29.312 Could not set queue depth (nvme5n1) 00:22:29.312 Could not set queue depth (nvme6n1) 00:22:29.312 Could not set queue depth (nvme7n1) 00:22:29.312 Could not set queue depth (nvme8n1) 00:22:29.312 Could not set queue depth (nvme9n1) 00:22:29.570 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:29.570 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:29.570 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:29.570 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:29.570 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:29.570 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:29.570 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:29.570 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 
256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:29.570 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:29.570 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:29.570 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:29.570 fio-3.35 00:22:29.570 Starting 11 threads 00:22:41.811 00:22:41.811 job0: (groupid=0, jobs=1): err= 0: pid=3826668: Tue Jul 23 01:43:52 2024 00:22:41.811 read: IOPS=737, BW=184MiB/s (193MB/s)(1858MiB/10072msec) 00:22:41.811 slat (usec): min=12, max=102664, avg=1232.80, stdev=4153.42 00:22:41.811 clat (usec): min=1513, max=242155, avg=85422.21, stdev=35426.33 00:22:41.811 lat (usec): min=1534, max=242195, avg=86655.01, stdev=36004.76 00:22:41.811 clat percentiles (msec): 00:22:41.811 | 1.00th=[ 5], 5.00th=[ 30], 10.00th=[ 43], 20.00th=[ 57], 00:22:41.811 | 30.00th=[ 66], 40.00th=[ 75], 50.00th=[ 85], 60.00th=[ 93], 00:22:41.811 | 70.00th=[ 103], 80.00th=[ 111], 90.00th=[ 131], 95.00th=[ 146], 00:22:41.811 | 99.00th=[ 182], 99.50th=[ 197], 99.90th=[ 215], 99.95th=[ 222], 00:22:41.811 | 99.99th=[ 243] 00:22:41.811 bw ( KiB/s): min=100352, max=328704, per=10.45%, avg=188591.70, stdev=56480.29, samples=20 00:22:41.811 iops : min= 392, max= 1284, avg=736.55, stdev=220.59, samples=20 00:22:41.811 lat (msec) : 2=0.13%, 4=0.62%, 10=0.96%, 20=0.69%, 50=11.45% 00:22:41.811 lat (msec) : 100=52.71%, 250=33.45% 00:22:41.811 cpu : usr=0.48%, sys=2.52%, ctx=1618, majf=0, minf=3721 00:22:41.811 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:22:41.811 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:41.811 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:41.811 issued rwts: total=7433,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:41.811 latency : target=0, 
window=0, percentile=100.00%, depth=64 00:22:41.811 job1: (groupid=0, jobs=1): err= 0: pid=3826669: Tue Jul 23 01:43:52 2024 00:22:41.811 read: IOPS=510, BW=128MiB/s (134MB/s)(1281MiB/10039msec) 00:22:41.811 slat (usec): min=10, max=80537, avg=1835.20, stdev=5210.75 00:22:41.811 clat (msec): min=5, max=247, avg=123.46, stdev=51.15 00:22:41.811 lat (msec): min=5, max=247, avg=125.30, stdev=51.97 00:22:41.811 clat percentiles (msec): 00:22:41.811 | 1.00th=[ 23], 5.00th=[ 34], 10.00th=[ 36], 20.00th=[ 59], 00:22:41.811 | 30.00th=[ 109], 40.00th=[ 128], 50.00th=[ 138], 60.00th=[ 148], 00:22:41.811 | 70.00th=[ 157], 80.00th=[ 165], 90.00th=[ 178], 95.00th=[ 190], 00:22:41.811 | 99.00th=[ 215], 99.50th=[ 224], 99.90th=[ 243], 99.95th=[ 245], 00:22:41.811 | 99.99th=[ 249] 00:22:41.811 bw ( KiB/s): min=87552, max=376590, per=7.18%, avg=129487.10, stdev=66306.08, samples=20 00:22:41.811 iops : min= 342, max= 1471, avg=505.80, stdev=259.00, samples=20 00:22:41.811 lat (msec) : 10=0.68%, 20=0.18%, 50=16.26%, 100=10.07%, 250=72.81% 00:22:41.811 cpu : usr=0.27%, sys=1.77%, ctx=1213, majf=0, minf=4097 00:22:41.811 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:22:41.811 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:41.812 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:41.812 issued rwts: total=5124,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:41.812 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:41.812 job2: (groupid=0, jobs=1): err= 0: pid=3826670: Tue Jul 23 01:43:52 2024 00:22:41.812 read: IOPS=584, BW=146MiB/s (153MB/s)(1472MiB/10079msec) 00:22:41.812 slat (usec): min=13, max=122040, avg=1463.20, stdev=6049.52 00:22:41.812 clat (usec): min=1911, max=278641, avg=108030.49, stdev=60298.69 00:22:41.812 lat (usec): min=1931, max=278685, avg=109493.69, stdev=61326.48 00:22:41.812 clat percentiles (msec): 00:22:41.812 | 1.00th=[ 5], 5.00th=[ 12], 10.00th=[ 18], 20.00th=[ 
38], 00:22:41.812 | 30.00th=[ 68], 40.00th=[ 100], 50.00th=[ 124], 60.00th=[ 140], 00:22:41.812 | 70.00th=[ 150], 80.00th=[ 161], 90.00th=[ 176], 95.00th=[ 194], 00:22:41.812 | 99.00th=[ 226], 99.50th=[ 236], 99.90th=[ 259], 99.95th=[ 259], 00:22:41.812 | 99.99th=[ 279] 00:22:41.812 bw ( KiB/s): min=93184, max=306176, per=8.26%, avg=149086.55, stdev=54614.81, samples=20 00:22:41.812 iops : min= 364, max= 1196, avg=582.35, stdev=213.34, samples=20 00:22:41.812 lat (msec) : 2=0.03%, 4=0.51%, 10=3.70%, 20=7.07%, 50=13.57% 00:22:41.812 lat (msec) : 100=15.39%, 250=59.59%, 500=0.14% 00:22:41.812 cpu : usr=0.48%, sys=1.80%, ctx=1511, majf=0, minf=4097 00:22:41.812 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:22:41.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:41.812 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:41.812 issued rwts: total=5887,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:41.812 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:41.812 job3: (groupid=0, jobs=1): err= 0: pid=3826671: Tue Jul 23 01:43:52 2024 00:22:41.812 read: IOPS=485, BW=121MiB/s (127MB/s)(1223MiB/10079msec) 00:22:41.812 slat (usec): min=11, max=115552, avg=1959.45, stdev=5922.76 00:22:41.812 clat (msec): min=6, max=291, avg=129.85, stdev=39.79 00:22:41.812 lat (msec): min=6, max=291, avg=131.81, stdev=40.52 00:22:41.812 clat percentiles (msec): 00:22:41.812 | 1.00th=[ 46], 5.00th=[ 69], 10.00th=[ 80], 20.00th=[ 90], 00:22:41.812 | 30.00th=[ 105], 40.00th=[ 123], 50.00th=[ 134], 60.00th=[ 146], 00:22:41.812 | 70.00th=[ 155], 80.00th=[ 165], 90.00th=[ 178], 95.00th=[ 188], 00:22:41.812 | 99.00th=[ 222], 99.50th=[ 228], 99.90th=[ 243], 99.95th=[ 257], 00:22:41.812 | 99.99th=[ 292] 00:22:41.812 bw ( KiB/s): min=85504, max=211456, per=6.85%, avg=123520.45, stdev=31617.83, samples=20 00:22:41.812 iops : min= 334, max= 826, avg=482.45, stdev=123.47, samples=20 00:22:41.812 lat (msec) : 
10=0.08%, 20=0.59%, 50=0.70%, 100=25.46%, 250=73.09% 00:22:41.812 lat (msec) : 500=0.08% 00:22:41.812 cpu : usr=0.27%, sys=1.69%, ctx=1130, majf=0, minf=4097 00:22:41.812 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:22:41.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:41.812 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:41.812 issued rwts: total=4890,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:41.812 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:41.812 job4: (groupid=0, jobs=1): err= 0: pid=3826672: Tue Jul 23 01:43:52 2024 00:22:41.812 read: IOPS=946, BW=237MiB/s (248MB/s)(2377MiB/10047msec) 00:22:41.812 slat (usec): min=13, max=62352, avg=945.59, stdev=2919.14 00:22:41.812 clat (msec): min=4, max=260, avg=66.63, stdev=33.45 00:22:41.812 lat (msec): min=4, max=260, avg=67.58, stdev=33.94 00:22:41.812 clat percentiles (msec): 00:22:41.812 | 1.00th=[ 15], 5.00th=[ 32], 10.00th=[ 36], 20.00th=[ 40], 00:22:41.812 | 30.00th=[ 45], 40.00th=[ 50], 50.00th=[ 61], 60.00th=[ 70], 00:22:41.812 | 70.00th=[ 79], 80.00th=[ 88], 90.00th=[ 108], 95.00th=[ 133], 00:22:41.812 | 99.00th=[ 192], 99.50th=[ 209], 99.90th=[ 220], 99.95th=[ 226], 00:22:41.812 | 99.99th=[ 262] 00:22:41.812 bw ( KiB/s): min=109056, max=446464, per=13.40%, avg=241692.35, stdev=84511.93, samples=20 00:22:41.812 iops : min= 426, max= 1744, avg=944.00, stdev=330.17, samples=20 00:22:41.812 lat (msec) : 10=0.32%, 20=1.46%, 50=39.04%, 100=46.19%, 250=12.98% 00:22:41.812 lat (msec) : 500=0.01% 00:22:41.812 cpu : usr=0.50%, sys=3.09%, ctx=1981, majf=0, minf=4097 00:22:41.812 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:22:41.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:41.812 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:41.812 issued rwts: total=9508,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:41.812 
latency : target=0, window=0, percentile=100.00%, depth=64 00:22:41.812 job5: (groupid=0, jobs=1): err= 0: pid=3826673: Tue Jul 23 01:43:52 2024 00:22:41.812 read: IOPS=627, BW=157MiB/s (165MB/s)(1577MiB/10052msec) 00:22:41.812 slat (usec): min=11, max=113617, avg=1336.40, stdev=5159.76 00:22:41.812 clat (msec): min=4, max=267, avg=100.58, stdev=49.31 00:22:41.812 lat (msec): min=4, max=267, avg=101.92, stdev=50.14 00:22:41.812 clat percentiles (msec): 00:22:41.812 | 1.00th=[ 9], 5.00th=[ 22], 10.00th=[ 36], 20.00th=[ 62], 00:22:41.812 | 30.00th=[ 73], 40.00th=[ 81], 50.00th=[ 90], 60.00th=[ 106], 00:22:41.812 | 70.00th=[ 140], 80.00th=[ 155], 90.00th=[ 169], 95.00th=[ 178], 00:22:41.812 | 99.00th=[ 203], 99.50th=[ 209], 99.90th=[ 236], 99.95th=[ 247], 00:22:41.812 | 99.99th=[ 268] 00:22:41.812 bw ( KiB/s): min=92672, max=232960, per=8.86%, avg=159827.95, stdev=46962.64, samples=20 00:22:41.812 iops : min= 362, max= 910, avg=624.20, stdev=183.37, samples=20 00:22:41.812 lat (msec) : 10=1.44%, 20=3.14%, 50=11.29%, 100=41.65%, 250=42.45% 00:22:41.812 lat (msec) : 500=0.03% 00:22:41.812 cpu : usr=0.35%, sys=2.11%, ctx=1451, majf=0, minf=4097 00:22:41.812 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:22:41.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:41.812 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:41.812 issued rwts: total=6308,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:41.812 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:41.812 job6: (groupid=0, jobs=1): err= 0: pid=3826674: Tue Jul 23 01:43:52 2024 00:22:41.812 read: IOPS=490, BW=123MiB/s (129MB/s)(1231MiB/10038msec) 00:22:41.812 slat (usec): min=9, max=156290, avg=1643.50, stdev=5830.78 00:22:41.812 clat (msec): min=2, max=384, avg=128.75, stdev=58.14 00:22:41.812 lat (msec): min=2, max=384, avg=130.39, stdev=58.83 00:22:41.812 clat percentiles (msec): 00:22:41.812 | 1.00th=[ 6], 5.00th=[ 16], 
10.00th=[ 34], 20.00th=[ 92], 00:22:41.812 | 30.00th=[ 107], 40.00th=[ 124], 50.00th=[ 140], 60.00th=[ 150], 00:22:41.812 | 70.00th=[ 159], 80.00th=[ 169], 90.00th=[ 182], 95.00th=[ 201], 00:22:41.812 | 99.00th=[ 330], 99.50th=[ 380], 99.90th=[ 384], 99.95th=[ 384], 00:22:41.812 | 99.99th=[ 384] 00:22:41.812 bw ( KiB/s): min=74240, max=235991, per=6.90%, avg=124442.65, stdev=41120.30, samples=20 00:22:41.812 iops : min= 290, max= 921, avg=486.05, stdev=160.51, samples=20 00:22:41.812 lat (msec) : 4=0.43%, 10=2.15%, 20=4.02%, 50=4.87%, 100=14.54% 00:22:41.812 lat (msec) : 250=72.77%, 500=1.22% 00:22:41.812 cpu : usr=0.22%, sys=1.56%, ctx=1340, majf=0, minf=4097 00:22:41.812 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:22:41.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:41.812 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:41.812 issued rwts: total=4924,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:41.812 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:41.812 job7: (groupid=0, jobs=1): err= 0: pid=3826675: Tue Jul 23 01:43:52 2024 00:22:41.812 read: IOPS=627, BW=157MiB/s (165MB/s)(1580MiB/10066msec) 00:22:41.812 slat (usec): min=9, max=119647, avg=1208.35, stdev=5280.43 00:22:41.812 clat (msec): min=2, max=266, avg=100.68, stdev=55.50 00:22:41.812 lat (msec): min=2, max=266, avg=101.89, stdev=56.35 00:22:41.812 clat percentiles (msec): 00:22:41.812 | 1.00th=[ 7], 5.00th=[ 20], 10.00th=[ 33], 20.00th=[ 46], 00:22:41.812 | 30.00th=[ 60], 40.00th=[ 77], 50.00th=[ 93], 60.00th=[ 123], 00:22:41.812 | 70.00th=[ 142], 80.00th=[ 155], 90.00th=[ 171], 95.00th=[ 186], 00:22:41.812 | 99.00th=[ 222], 99.50th=[ 241], 99.90th=[ 257], 99.95th=[ 262], 00:22:41.812 | 99.99th=[ 268] 00:22:41.812 bw ( KiB/s): min=82944, max=391168, per=8.87%, avg=160081.45, stdev=75636.01, samples=20 00:22:41.812 iops : min= 324, max= 1528, avg=625.15, stdev=295.51, samples=20 00:22:41.812 
lat (msec) : 4=0.13%, 10=2.17%, 20=2.94%, 50=16.79%, 100=31.97% 00:22:41.812 lat (msec) : 250=45.63%, 500=0.36% 00:22:41.812 cpu : usr=0.36%, sys=1.93%, ctx=1577, majf=0, minf=4097 00:22:41.812 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:22:41.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:41.812 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:41.812 issued rwts: total=6318,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:41.812 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:41.812 job8: (groupid=0, jobs=1): err= 0: pid=3826676: Tue Jul 23 01:43:52 2024 00:22:41.812 read: IOPS=768, BW=192MiB/s (201MB/s)(1939MiB/10092msec) 00:22:41.812 slat (usec): min=10, max=164556, avg=1084.20, stdev=4042.90 00:22:41.812 clat (usec): min=1454, max=302906, avg=82131.06, stdev=36681.46 00:22:41.812 lat (usec): min=1475, max=302918, avg=83215.27, stdev=37162.11 00:22:41.812 clat percentiles (msec): 00:22:41.812 | 1.00th=[ 9], 5.00th=[ 23], 10.00th=[ 37], 20.00th=[ 50], 00:22:41.812 | 30.00th=[ 63], 40.00th=[ 74], 50.00th=[ 84], 60.00th=[ 92], 00:22:41.812 | 70.00th=[ 99], 80.00th=[ 110], 90.00th=[ 125], 95.00th=[ 142], 00:22:41.812 | 99.00th=[ 197], 99.50th=[ 222], 99.90th=[ 222], 99.95th=[ 224], 00:22:41.813 | 99.99th=[ 305] 00:22:41.813 bw ( KiB/s): min=105472, max=310272, per=10.91%, avg=196867.55, stdev=58071.20, samples=20 00:22:41.813 iops : min= 412, max= 1212, avg=768.95, stdev=226.85, samples=20 00:22:41.813 lat (msec) : 2=0.04%, 4=0.15%, 10=1.15%, 20=2.78%, 50=16.50% 00:22:41.813 lat (msec) : 100=51.08%, 250=28.27%, 500=0.01% 00:22:41.813 cpu : usr=0.43%, sys=2.53%, ctx=1716, majf=0, minf=4097 00:22:41.813 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:22:41.813 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:41.813 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:41.813 
issued rwts: total=7756,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:41.813 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:41.813 job9: (groupid=0, jobs=1): err= 0: pid=3826677: Tue Jul 23 01:43:52 2024 00:22:41.813 read: IOPS=592, BW=148MiB/s (155MB/s)(1495MiB/10090msec) 00:22:41.813 slat (usec): min=11, max=115151, avg=1313.51, stdev=5474.44 00:22:41.813 clat (usec): min=1376, max=288712, avg=106633.30, stdev=47194.61 00:22:41.813 lat (usec): min=1395, max=288735, avg=107946.81, stdev=47857.14 00:22:41.813 clat percentiles (msec): 00:22:41.813 | 1.00th=[ 7], 5.00th=[ 18], 10.00th=[ 31], 20.00th=[ 77], 00:22:41.813 | 30.00th=[ 87], 40.00th=[ 96], 50.00th=[ 107], 60.00th=[ 116], 00:22:41.813 | 70.00th=[ 131], 80.00th=[ 148], 90.00th=[ 167], 95.00th=[ 182], 00:22:41.813 | 99.00th=[ 218], 99.50th=[ 245], 99.90th=[ 266], 99.95th=[ 284], 00:22:41.813 | 99.99th=[ 288] 00:22:41.813 bw ( KiB/s): min=89088, max=281088, per=8.39%, avg=151369.50, stdev=48943.97, samples=20 00:22:41.813 iops : min= 348, max= 1098, avg=591.15, stdev=191.22, samples=20 00:22:41.813 lat (msec) : 2=0.05%, 4=0.30%, 10=2.09%, 20=3.86%, 50=6.77% 00:22:41.813 lat (msec) : 100=29.89%, 250=56.62%, 500=0.40% 00:22:41.813 cpu : usr=0.34%, sys=1.99%, ctx=1484, majf=0, minf=4097 00:22:41.813 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:22:41.813 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:41.813 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:41.813 issued rwts: total=5978,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:41.813 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:41.813 job10: (groupid=0, jobs=1): err= 0: pid=3826678: Tue Jul 23 01:43:52 2024 00:22:41.813 read: IOPS=692, BW=173MiB/s (182MB/s)(1748MiB/10091msec) 00:22:41.813 slat (usec): min=10, max=143755, avg=963.73, stdev=4076.81 00:22:41.813 clat (usec): min=1084, max=234298, avg=91368.48, stdev=47225.01 00:22:41.813 lat 
(usec): min=1152, max=305597, avg=92332.21, stdev=47700.02 00:22:41.813 clat percentiles (msec): 00:22:41.813 | 1.00th=[ 6], 5.00th=[ 18], 10.00th=[ 29], 20.00th=[ 47], 00:22:41.813 | 30.00th=[ 63], 40.00th=[ 78], 50.00th=[ 92], 60.00th=[ 103], 00:22:41.813 | 70.00th=[ 115], 80.00th=[ 138], 90.00th=[ 155], 95.00th=[ 169], 00:22:41.813 | 99.00th=[ 209], 99.50th=[ 224], 99.90th=[ 226], 99.95th=[ 226], 00:22:41.813 | 99.99th=[ 234] 00:22:41.813 bw ( KiB/s): min=81920, max=307608, per=9.83%, avg=177297.25, stdev=62215.58, samples=20 00:22:41.813 iops : min= 320, max= 1201, avg=692.45, stdev=243.04, samples=20 00:22:41.813 lat (msec) : 2=0.20%, 4=0.21%, 10=1.62%, 20=3.79%, 50=15.55% 00:22:41.813 lat (msec) : 100=37.31%, 250=41.32% 00:22:41.813 cpu : usr=0.37%, sys=2.04%, ctx=1815, majf=0, minf=4097 00:22:41.813 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:22:41.813 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:41.813 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:41.813 issued rwts: total=6990,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:41.813 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:41.813 00:22:41.813 Run status group 0 (all jobs): 00:22:41.813 READ: bw=1762MiB/s (1847MB/s), 121MiB/s-237MiB/s (127MB/s-248MB/s), io=17.4GiB (18.6GB), run=10038-10092msec 00:22:41.813 00:22:41.813 Disk stats (read/write): 00:22:41.813 nvme0n1: ios=14650/0, merge=0/0, ticks=1233843/0, in_queue=1233843, util=97.25% 00:22:41.813 nvme10n1: ios=9941/0, merge=0/0, ticks=1232725/0, in_queue=1232725, util=97.45% 00:22:41.813 nvme1n1: ios=11559/0, merge=0/0, ticks=1233477/0, in_queue=1233477, util=97.70% 00:22:41.813 nvme2n1: ios=9571/0, merge=0/0, ticks=1230799/0, in_queue=1230799, util=97.85% 00:22:41.813 nvme3n1: ios=18815/0, merge=0/0, ticks=1237972/0, in_queue=1237972, util=97.94% 00:22:41.813 nvme4n1: ios=12420/0, merge=0/0, ticks=1234895/0, in_queue=1234895, util=98.25% 
00:22:41.813 nvme5n1: ios=9593/0, merge=0/0, ticks=1235155/0, in_queue=1235155, util=98.40% 00:22:41.813 nvme6n1: ios=12422/0, merge=0/0, ticks=1234696/0, in_queue=1234696, util=98.52% 00:22:41.813 nvme7n1: ios=15339/0, merge=0/0, ticks=1232607/0, in_queue=1232607, util=98.92% 00:22:41.813 nvme8n1: ios=11766/0, merge=0/0, ticks=1233377/0, in_queue=1233377, util=99.09% 00:22:41.813 nvme9n1: ios=13798/0, merge=0/0, ticks=1237614/0, in_queue=1237614, util=99.20% 00:22:41.813 01:43:52 -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:22:41.813 [global] 00:22:41.813 thread=1 00:22:41.813 invalidate=1 00:22:41.813 rw=randwrite 00:22:41.813 time_based=1 00:22:41.813 runtime=10 00:22:41.813 ioengine=libaio 00:22:41.813 direct=1 00:22:41.813 bs=262144 00:22:41.813 iodepth=64 00:22:41.813 norandommap=1 00:22:41.813 numjobs=1 00:22:41.813 00:22:41.813 [job0] 00:22:41.813 filename=/dev/nvme0n1 00:22:41.813 [job1] 00:22:41.813 filename=/dev/nvme10n1 00:22:41.813 [job2] 00:22:41.813 filename=/dev/nvme1n1 00:22:41.813 [job3] 00:22:41.813 filename=/dev/nvme2n1 00:22:41.813 [job4] 00:22:41.813 filename=/dev/nvme3n1 00:22:41.813 [job5] 00:22:41.813 filename=/dev/nvme4n1 00:22:41.813 [job6] 00:22:41.813 filename=/dev/nvme5n1 00:22:41.813 [job7] 00:22:41.813 filename=/dev/nvme6n1 00:22:41.813 [job8] 00:22:41.813 filename=/dev/nvme7n1 00:22:41.813 [job9] 00:22:41.813 filename=/dev/nvme8n1 00:22:41.813 [job10] 00:22:41.813 filename=/dev/nvme9n1 00:22:41.813 Could not set queue depth (nvme0n1) 00:22:41.813 Could not set queue depth (nvme10n1) 00:22:41.813 Could not set queue depth (nvme1n1) 00:22:41.813 Could not set queue depth (nvme2n1) 00:22:41.813 Could not set queue depth (nvme3n1) 00:22:41.813 Could not set queue depth (nvme4n1) 00:22:41.813 Could not set queue depth (nvme5n1) 00:22:41.813 Could not set queue depth (nvme6n1) 00:22:41.813 Could not set queue depth (nvme7n1) 
00:22:41.813 Could not set queue depth (nvme8n1) 00:22:41.813 Could not set queue depth (nvme9n1) 00:22:41.813 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:41.813 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:41.813 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:41.813 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:41.813 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:41.813 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:41.813 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:41.813 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:41.813 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:41.813 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:41.813 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:41.813 fio-3.35 00:22:41.813 Starting 11 threads 00:22:51.797 00:22:51.797 job0: (groupid=0, jobs=1): err= 0: pid=3827724: Tue Jul 23 01:44:03 2024 00:22:51.797 write: IOPS=631, BW=158MiB/s (166MB/s)(1588MiB/10056msec); 0 zone resets 00:22:51.797 slat (usec): min=19, max=40076, avg=1153.08, stdev=2932.03 00:22:51.797 clat (msec): min=3, max=240, avg=100.15, stdev=48.39 00:22:51.797 lat (msec): min=3, max=240, avg=101.30, stdev=49.08 00:22:51.797 clat percentiles (msec): 
00:22:51.797 | 1.00th=[ 10], 5.00th=[ 26], 10.00th=[ 40], 20.00th=[ 60], 00:22:51.797 | 30.00th=[ 71], 40.00th=[ 80], 50.00th=[ 93], 60.00th=[ 110], 00:22:51.797 | 70.00th=[ 130], 80.00th=[ 146], 90.00th=[ 167], 95.00th=[ 186], 00:22:51.797 | 99.00th=[ 211], 99.50th=[ 220], 99.90th=[ 230], 99.95th=[ 236], 00:22:51.797 | 99.99th=[ 241] 00:22:51.797 bw ( KiB/s): min=79360, max=292864, per=12.05%, avg=160947.20, stdev=59366.97, samples=20 00:22:51.797 iops : min= 310, max= 1144, avg=628.70, stdev=231.90, samples=20 00:22:51.797 lat (msec) : 4=0.05%, 10=0.96%, 20=2.06%, 50=11.69%, 100=39.45% 00:22:51.797 lat (msec) : 250=45.80% 00:22:51.797 cpu : usr=1.98%, sys=2.26%, ctx=3427, majf=0, minf=1 00:22:51.797 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:22:51.797 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:51.797 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:51.797 issued rwts: total=0,6350,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:51.797 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:51.797 job1: (groupid=0, jobs=1): err= 0: pid=3827741: Tue Jul 23 01:44:03 2024 00:22:51.797 write: IOPS=348, BW=87.1MiB/s (91.3MB/s)(886MiB/10175msec); 0 zone resets 00:22:51.797 slat (usec): min=24, max=109388, avg=2588.41, stdev=6266.17 00:22:51.797 clat (msec): min=5, max=559, avg=181.00, stdev=104.55 00:22:51.797 lat (msec): min=5, max=559, avg=183.59, stdev=106.01 00:22:51.797 clat percentiles (msec): 00:22:51.797 | 1.00th=[ 21], 5.00th=[ 60], 10.00th=[ 67], 20.00th=[ 75], 00:22:51.797 | 30.00th=[ 94], 40.00th=[ 146], 50.00th=[ 188], 60.00th=[ 207], 00:22:51.797 | 70.00th=[ 224], 80.00th=[ 249], 90.00th=[ 309], 95.00th=[ 388], 00:22:51.797 | 99.00th=[ 527], 99.50th=[ 550], 99.90th=[ 558], 99.95th=[ 558], 00:22:51.797 | 99.99th=[ 558] 00:22:51.797 bw ( KiB/s): min=30720, max=221696, per=6.67%, avg=89113.60, stdev=52168.09, samples=20 00:22:51.797 iops : min= 120, max= 866, 
avg=348.10, stdev=203.78, samples=20 00:22:51.797 lat (msec) : 10=0.17%, 20=0.85%, 50=2.85%, 100=28.12%, 250=49.51% 00:22:51.797 lat (msec) : 500=17.15%, 750=1.35% 00:22:51.797 cpu : usr=1.26%, sys=0.97%, ctx=1287, majf=0, minf=1 00:22:51.797 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:22:51.797 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:51.797 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:51.797 issued rwts: total=0,3545,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:51.797 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:51.797 job2: (groupid=0, jobs=1): err= 0: pid=3827743: Tue Jul 23 01:44:03 2024 00:22:51.797 write: IOPS=571, BW=143MiB/s (150MB/s)(1449MiB/10139msec); 0 zone resets 00:22:51.797 slat (usec): min=23, max=82704, avg=975.44, stdev=3459.18 00:22:51.797 clat (usec): min=1880, max=483380, avg=110931.13, stdev=67580.49 00:22:51.797 lat (usec): min=1987, max=483438, avg=111906.58, stdev=68244.02 00:22:51.797 clat percentiles (msec): 00:22:51.797 | 1.00th=[ 10], 5.00th=[ 25], 10.00th=[ 37], 20.00th=[ 57], 00:22:51.797 | 30.00th=[ 77], 40.00th=[ 90], 50.00th=[ 100], 60.00th=[ 112], 00:22:51.797 | 70.00th=[ 133], 80.00th=[ 153], 90.00th=[ 197], 95.00th=[ 228], 00:22:51.797 | 99.00th=[ 368], 99.50th=[ 393], 99.90th=[ 468], 99.95th=[ 477], 00:22:51.797 | 99.99th=[ 485] 00:22:51.797 bw ( KiB/s): min=44544, max=236544, per=10.98%, avg=146739.20, stdev=48240.07, samples=20 00:22:51.797 iops : min= 174, max= 924, avg=573.20, stdev=188.44, samples=20 00:22:51.797 lat (msec) : 2=0.02%, 4=0.14%, 10=0.97%, 20=2.59%, 50=13.06% 00:22:51.797 lat (msec) : 100=33.58%, 250=46.26%, 500=3.38% 00:22:51.797 cpu : usr=1.88%, sys=2.05%, ctx=3788, majf=0, minf=1 00:22:51.797 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:22:51.797 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:51.797 complete : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:51.797 issued rwts: total=0,5795,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:51.797 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:51.797 job3: (groupid=0, jobs=1): err= 0: pid=3827745: Tue Jul 23 01:44:03 2024 00:22:51.797 write: IOPS=595, BW=149MiB/s (156MB/s)(1503MiB/10101msec); 0 zone resets 00:22:51.797 slat (usec): min=21, max=51125, avg=1283.77, stdev=3291.67 00:22:51.797 clat (msec): min=2, max=311, avg=106.17, stdev=59.54 00:22:51.797 lat (msec): min=2, max=311, avg=107.45, stdev=60.23 00:22:51.797 clat percentiles (msec): 00:22:51.797 | 1.00th=[ 12], 5.00th=[ 42], 10.00th=[ 54], 20.00th=[ 65], 00:22:51.797 | 30.00th=[ 72], 40.00th=[ 79], 50.00th=[ 87], 60.00th=[ 95], 00:22:51.797 | 70.00th=[ 113], 80.00th=[ 153], 90.00th=[ 207], 95.00th=[ 239], 00:22:51.797 | 99.00th=[ 279], 99.50th=[ 296], 99.90th=[ 305], 99.95th=[ 309], 00:22:51.797 | 99.99th=[ 313] 00:22:51.797 bw ( KiB/s): min=57344, max=284672, per=11.40%, avg=152320.00, stdev=59304.01, samples=20 00:22:51.797 iops : min= 224, max= 1112, avg=595.00, stdev=231.66, samples=20 00:22:51.797 lat (msec) : 4=0.03%, 10=0.72%, 20=1.30%, 50=6.82%, 100=55.43% 00:22:51.797 lat (msec) : 250=32.73%, 500=2.98% 00:22:51.797 cpu : usr=2.05%, sys=1.70%, ctx=2748, majf=0, minf=1 00:22:51.797 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:22:51.797 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:51.797 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:51.797 issued rwts: total=0,6013,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:51.797 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:51.797 job4: (groupid=0, jobs=1): err= 0: pid=3827746: Tue Jul 23 01:44:03 2024 00:22:51.797 write: IOPS=266, BW=66.7MiB/s (69.9MB/s)(678MiB/10173msec); 0 zone resets 00:22:51.797 slat (usec): min=24, max=114202, avg=3518.54, stdev=7654.03 00:22:51.797 clat (msec): min=7, 
max=554, avg=236.33, stdev=84.93 00:22:51.797 lat (msec): min=9, max=554, avg=239.85, stdev=85.96 00:22:51.797 clat percentiles (msec): 00:22:51.797 | 1.00th=[ 33], 5.00th=[ 140], 10.00th=[ 161], 20.00th=[ 178], 00:22:51.797 | 30.00th=[ 199], 40.00th=[ 211], 50.00th=[ 224], 60.00th=[ 239], 00:22:51.797 | 70.00th=[ 259], 80.00th=[ 279], 90.00th=[ 347], 95.00th=[ 409], 00:22:51.797 | 99.00th=[ 542], 99.50th=[ 550], 99.90th=[ 558], 99.95th=[ 558], 00:22:51.797 | 99.99th=[ 558] 00:22:51.797 bw ( KiB/s): min=30720, max=114688, per=5.08%, avg=67814.40, stdev=20213.15, samples=20 00:22:51.797 iops : min= 120, max= 448, avg=264.90, stdev=78.96, samples=20 00:22:51.797 lat (msec) : 10=0.07%, 20=0.29%, 50=1.22%, 100=2.29%, 250=62.66% 00:22:51.797 lat (msec) : 500=31.55%, 750=1.92% 00:22:51.797 cpu : usr=0.85%, sys=0.86%, ctx=906, majf=0, minf=1 00:22:51.798 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:22:51.798 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:51.798 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:51.798 issued rwts: total=0,2713,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:51.798 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:51.798 job5: (groupid=0, jobs=1): err= 0: pid=3827747: Tue Jul 23 01:44:03 2024 00:22:51.798 write: IOPS=494, BW=124MiB/s (130MB/s)(1259MiB/10179msec); 0 zone resets 00:22:51.798 slat (usec): min=22, max=91148, avg=1660.27, stdev=4512.40 00:22:51.798 clat (msec): min=3, max=401, avg=127.58, stdev=82.46 00:22:51.798 lat (msec): min=3, max=401, avg=129.24, stdev=83.61 00:22:51.798 clat percentiles (msec): 00:22:51.798 | 1.00th=[ 12], 5.00th=[ 25], 10.00th=[ 40], 20.00th=[ 57], 00:22:51.798 | 30.00th=[ 78], 40.00th=[ 87], 50.00th=[ 100], 60.00th=[ 122], 00:22:51.798 | 70.00th=[ 167], 80.00th=[ 209], 90.00th=[ 245], 95.00th=[ 284], 00:22:51.798 | 99.00th=[ 368], 99.50th=[ 388], 99.90th=[ 397], 99.95th=[ 401], 00:22:51.798 | 99.99th=[ 
401] 00:22:51.798 bw ( KiB/s): min=49152, max=252928, per=9.53%, avg=127334.40, stdev=63220.81, samples=20 00:22:51.798 iops : min= 192, max= 988, avg=497.40, stdev=246.96, samples=20 00:22:51.798 lat (msec) : 4=0.04%, 10=0.75%, 20=1.87%, 50=11.20%, 100=36.63% 00:22:51.798 lat (msec) : 250=40.58%, 500=8.93% 00:22:51.798 cpu : usr=1.41%, sys=2.06%, ctx=2356, majf=0, minf=1 00:22:51.798 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:22:51.798 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:51.798 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:51.798 issued rwts: total=0,5037,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:51.798 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:51.798 job6: (groupid=0, jobs=1): err= 0: pid=3827748: Tue Jul 23 01:44:03 2024 00:22:51.798 write: IOPS=398, BW=99.6MiB/s (104MB/s)(1009MiB/10123msec); 0 zone resets 00:22:51.798 slat (usec): min=19, max=64644, avg=2016.38, stdev=5185.33 00:22:51.798 clat (usec): min=1538, max=309762, avg=158408.22, stdev=75634.12 00:22:51.798 lat (usec): min=1575, max=309821, avg=160424.60, stdev=76830.93 00:22:51.798 clat percentiles (msec): 00:22:51.798 | 1.00th=[ 7], 5.00th=[ 17], 10.00th=[ 38], 20.00th=[ 87], 00:22:51.798 | 30.00th=[ 131], 40.00th=[ 153], 50.00th=[ 163], 60.00th=[ 182], 00:22:51.798 | 70.00th=[ 209], 80.00th=[ 232], 90.00th=[ 251], 95.00th=[ 262], 00:22:51.798 | 99.00th=[ 292], 99.50th=[ 305], 99.90th=[ 309], 99.95th=[ 309], 00:22:51.798 | 99.99th=[ 309] 00:22:51.798 bw ( KiB/s): min=55296, max=203264, per=7.61%, avg=101689.65, stdev=37583.49, samples=20 00:22:51.798 iops : min= 216, max= 794, avg=397.20, stdev=146.84, samples=20 00:22:51.798 lat (msec) : 2=0.07%, 4=0.40%, 10=2.06%, 20=3.54%, 50=6.77% 00:22:51.798 lat (msec) : 100=9.57%, 250=67.58%, 500=10.01% 00:22:51.798 cpu : usr=1.21%, sys=1.37%, ctx=2011, majf=0, minf=1 00:22:51.798 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 
32=0.8%, >=64=98.4% 00:22:51.798 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:51.798 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:51.798 issued rwts: total=0,4035,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:51.798 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:51.798 job7: (groupid=0, jobs=1): err= 0: pid=3827749: Tue Jul 23 01:44:03 2024 00:22:51.798 write: IOPS=472, BW=118MiB/s (124MB/s)(1203MiB/10175msec); 0 zone resets 00:22:51.798 slat (usec): min=24, max=90543, avg=1833.05, stdev=4177.99 00:22:51.798 clat (msec): min=4, max=343, avg=133.38, stdev=57.26 00:22:51.798 lat (msec): min=6, max=343, avg=135.21, stdev=57.91 00:22:51.798 clat percentiles (msec): 00:22:51.798 | 1.00th=[ 25], 5.00th=[ 50], 10.00th=[ 67], 20.00th=[ 87], 00:22:51.798 | 30.00th=[ 99], 40.00th=[ 112], 50.00th=[ 129], 60.00th=[ 142], 00:22:51.798 | 70.00th=[ 159], 80.00th=[ 178], 90.00th=[ 209], 95.00th=[ 245], 00:22:51.798 | 99.00th=[ 292], 99.50th=[ 309], 99.90th=[ 334], 99.95th=[ 338], 00:22:51.798 | 99.99th=[ 342] 00:22:51.798 bw ( KiB/s): min=76288, max=184320, per=9.10%, avg=121548.80, stdev=34496.88, samples=20 00:22:51.798 iops : min= 298, max= 720, avg=474.80, stdev=134.75, samples=20 00:22:51.798 lat (msec) : 10=0.19%, 20=0.42%, 50=4.82%, 100=26.02%, 250=64.27% 00:22:51.798 lat (msec) : 500=4.28% 00:22:51.798 cpu : usr=1.35%, sys=1.60%, ctx=1830, majf=0, minf=1 00:22:51.798 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:22:51.798 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:51.798 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:51.798 issued rwts: total=0,4811,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:51.798 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:51.798 job8: (groupid=0, jobs=1): err= 0: pid=3827770: Tue Jul 23 01:44:03 2024 00:22:51.798 write: IOPS=611, BW=153MiB/s 
(160MB/s)(1545MiB/10102msec); 0 zone resets 00:22:51.798 slat (usec): min=18, max=37811, avg=1290.44, stdev=2740.95 00:22:51.798 clat (msec): min=2, max=304, avg=103.30, stdev=39.10 00:22:51.798 lat (msec): min=3, max=304, avg=104.59, stdev=39.40 00:22:51.798 clat percentiles (msec): 00:22:51.798 | 1.00th=[ 18], 5.00th=[ 43], 10.00th=[ 63], 20.00th=[ 75], 00:22:51.798 | 30.00th=[ 82], 40.00th=[ 90], 50.00th=[ 97], 60.00th=[ 106], 00:22:51.798 | 70.00th=[ 120], 80.00th=[ 140], 90.00th=[ 155], 95.00th=[ 169], 00:22:51.798 | 99.00th=[ 205], 99.50th=[ 236], 99.90th=[ 288], 99.95th=[ 296], 00:22:51.798 | 99.99th=[ 305] 00:22:51.798 bw ( KiB/s): min=104960, max=217088, per=11.71%, avg=156524.85, stdev=36933.95, samples=20 00:22:51.798 iops : min= 410, max= 848, avg=611.40, stdev=144.25, samples=20 00:22:51.798 lat (msec) : 4=0.05%, 10=0.42%, 20=1.07%, 50=5.29%, 100=45.99% 00:22:51.798 lat (msec) : 250=46.81%, 500=0.37% 00:22:51.798 cpu : usr=1.98%, sys=1.83%, ctx=2543, majf=0, minf=1 00:22:51.798 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:22:51.798 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:51.798 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:51.798 issued rwts: total=0,6178,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:51.798 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:51.798 job9: (groupid=0, jobs=1): err= 0: pid=3827785: Tue Jul 23 01:44:03 2024 00:22:51.798 write: IOPS=434, BW=109MiB/s (114MB/s)(1107MiB/10178msec); 0 zone resets 00:22:51.798 slat (usec): min=21, max=114115, avg=2032.79, stdev=5209.68 00:22:51.798 clat (msec): min=5, max=364, avg=145.05, stdev=79.54 00:22:51.798 lat (msec): min=6, max=364, avg=147.08, stdev=80.64 00:22:51.798 clat percentiles (msec): 00:22:51.798 | 1.00th=[ 19], 5.00th=[ 41], 10.00th=[ 61], 20.00th=[ 73], 00:22:51.798 | 30.00th=[ 78], 40.00th=[ 87], 50.00th=[ 138], 60.00th=[ 165], 00:22:51.798 | 70.00th=[ 201], 
80.00th=[ 232], 90.00th=[ 262], 95.00th=[ 279], 00:22:51.798 | 99.00th=[ 296], 99.50th=[ 309], 99.90th=[ 330], 99.95th=[ 342], 00:22:51.798 | 99.99th=[ 363] 00:22:51.798 bw ( KiB/s): min=50688, max=230400, per=8.36%, avg=111692.80, stdev=56681.47, samples=20 00:22:51.798 iops : min= 198, max= 900, avg=436.30, stdev=221.41, samples=20 00:22:51.798 lat (msec) : 10=0.23%, 20=0.93%, 50=6.96%, 100=35.61%, 250=41.64% 00:22:51.798 lat (msec) : 500=14.64% 00:22:51.798 cpu : usr=1.43%, sys=1.49%, ctx=1755, majf=0, minf=1 00:22:51.798 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:22:51.798 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:51.798 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:51.798 issued rwts: total=0,4426,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:51.798 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:51.798 job10: (groupid=0, jobs=1): err= 0: pid=3827795: Tue Jul 23 01:44:03 2024 00:22:51.798 write: IOPS=415, BW=104MiB/s (109MB/s)(1056MiB/10177msec); 0 zone resets 00:22:51.798 slat (usec): min=19, max=45118, avg=1692.49, stdev=4558.25 00:22:51.798 clat (msec): min=2, max=436, avg=152.43, stdev=85.21 00:22:51.798 lat (msec): min=2, max=441, avg=154.13, stdev=86.42 00:22:51.798 clat percentiles (msec): 00:22:51.798 | 1.00th=[ 13], 5.00th=[ 30], 10.00th=[ 51], 20.00th=[ 88], 00:22:51.798 | 30.00th=[ 103], 40.00th=[ 115], 50.00th=[ 142], 60.00th=[ 159], 00:22:51.798 | 70.00th=[ 186], 80.00th=[ 222], 90.00th=[ 262], 95.00th=[ 317], 00:22:51.798 | 99.00th=[ 405], 99.50th=[ 414], 99.90th=[ 430], 99.95th=[ 430], 00:22:51.798 | 99.99th=[ 439] 00:22:51.798 bw ( KiB/s): min=44544, max=175616, per=7.97%, avg=106521.60, stdev=40711.15, samples=20 00:22:51.798 iops : min= 174, max= 686, avg=416.10, stdev=159.03, samples=20 00:22:51.798 lat (msec) : 4=0.07%, 10=0.59%, 20=2.15%, 50=7.17%, 100=17.52% 00:22:51.798 lat (msec) : 250=60.18%, 500=12.31% 00:22:51.798 cpu : 
usr=1.39%, sys=1.44%, ctx=2360, majf=0, minf=1 00:22:51.798 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:22:51.798 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:51.798 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:51.798 issued rwts: total=0,4224,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:51.798 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:51.798 00:22:51.798 Run status group 0 (all jobs): 00:22:51.798 WRITE: bw=1305MiB/s (1368MB/s), 66.7MiB/s-158MiB/s (69.9MB/s-166MB/s), io=13.0GiB (13.9GB), run=10056-10179msec 00:22:51.798 00:22:51.798 Disk stats (read/write): 00:22:51.798 nvme0n1: ios=49/12402, merge=0/0, ticks=86/1220560, in_queue=1220646, util=97.62% 00:22:51.798 nvme10n1: ios=45/7078, merge=0/0, ticks=144/1239431, in_queue=1239575, util=97.96% 00:22:51.798 nvme1n1: ios=48/11366, merge=0/0, ticks=83/1223383, in_queue=1223466, util=97.76% 00:22:51.798 nvme2n1: ios=47/11823, merge=0/0, ticks=423/1213550, in_queue=1213973, util=99.78% 00:22:51.798 nvme3n1: ios=46/5418, merge=0/0, ticks=161/1236967, in_queue=1237128, util=98.97% 00:22:51.799 nvme4n1: ios=0/10057, merge=0/0, ticks=0/1239795, in_queue=1239795, util=98.10% 00:22:51.799 nvme5n1: ios=44/7882, merge=0/0, ticks=1389/1209235, in_queue=1210624, util=99.99% 00:22:51.799 nvme6n1: ios=41/9612, merge=0/0, ticks=1174/1237783, in_queue=1238957, util=99.93% 00:22:51.799 nvme7n1: ios=0/12152, merge=0/0, ticks=0/1212697, in_queue=1212697, util=98.74% 00:22:51.799 nvme8n1: ios=0/8837, merge=0/0, ticks=0/1237042, in_queue=1237042, util=98.96% 00:22:51.799 nvme9n1: ios=0/8437, merge=0/0, ticks=0/1247724, in_queue=1247724, util=99.12% 00:22:51.799 01:44:03 -- target/multiconnection.sh@36 -- # sync 00:22:51.799 01:44:03 -- target/multiconnection.sh@37 -- # seq 1 11 00:22:51.799 01:44:03 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:51.799 01:44:03 -- target/multiconnection.sh@38 
-- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:22:51.799 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:51.799 01:44:04 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:22:51.799 01:44:04 -- common/autotest_common.sh@1198 -- # local i=0 00:22:51.799 01:44:04 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:51.799 01:44:04 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK1 00:22:51.799 01:44:04 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:51.799 01:44:04 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK1 00:22:51.799 01:44:04 -- common/autotest_common.sh@1210 -- # return 0 00:22:51.799 01:44:04 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:51.799 01:44:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:51.799 01:44:04 -- common/autotest_common.sh@10 -- # set +x 00:22:51.799 01:44:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:51.799 01:44:04 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:51.799 01:44:04 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:22:51.799 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:22:51.799 01:44:04 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:22:51.799 01:44:04 -- common/autotest_common.sh@1198 -- # local i=0 00:22:51.799 01:44:04 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:51.799 01:44:04 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK2 00:22:51.799 01:44:04 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:51.799 01:44:04 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK2 00:22:51.799 01:44:04 -- common/autotest_common.sh@1210 -- # return 0 00:22:51.799 01:44:04 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:22:51.799 01:44:04 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:22:51.799 01:44:04 -- common/autotest_common.sh@10 -- # set +x 00:22:51.799 01:44:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:51.799 01:44:04 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:51.799 01:44:04 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:22:51.799 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:22:51.799 01:44:04 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:22:51.799 01:44:04 -- common/autotest_common.sh@1198 -- # local i=0 00:22:51.799 01:44:04 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:51.799 01:44:04 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK3 00:22:51.799 01:44:04 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:51.799 01:44:04 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK3 00:22:51.799 01:44:04 -- common/autotest_common.sh@1210 -- # return 0 00:22:51.799 01:44:04 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:22:51.799 01:44:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:51.799 01:44:04 -- common/autotest_common.sh@10 -- # set +x 00:22:51.799 01:44:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:51.799 01:44:04 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:51.799 01:44:04 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:22:51.799 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:22:51.799 01:44:04 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:22:51.799 01:44:04 -- common/autotest_common.sh@1198 -- # local i=0 00:22:51.799 01:44:04 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:51.799 01:44:04 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK4 00:22:51.799 01:44:04 -- common/autotest_common.sh@1206 -- # lsblk -l 
-o NAME,SERIAL 00:22:51.799 01:44:04 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK4 00:22:51.799 01:44:04 -- common/autotest_common.sh@1210 -- # return 0 00:22:51.799 01:44:04 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:22:51.799 01:44:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:51.799 01:44:04 -- common/autotest_common.sh@10 -- # set +x 00:22:51.799 01:44:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:51.799 01:44:04 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:51.799 01:44:04 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:22:52.057 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:22:52.057 01:44:04 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:22:52.057 01:44:04 -- common/autotest_common.sh@1198 -- # local i=0 00:22:52.057 01:44:04 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:52.057 01:44:04 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK5 00:22:52.057 01:44:04 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:52.057 01:44:04 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK5 00:22:52.057 01:44:04 -- common/autotest_common.sh@1210 -- # return 0 00:22:52.057 01:44:04 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:22:52.057 01:44:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:52.057 01:44:04 -- common/autotest_common.sh@10 -- # set +x 00:22:52.057 01:44:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:52.057 01:44:04 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:52.058 01:44:04 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:22:52.058 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:22:52.058 01:44:05 -- target/multiconnection.sh@39 -- # waitforserial_disconnect 
SPDK6 00:22:52.058 01:44:05 -- common/autotest_common.sh@1198 -- # local i=0 00:22:52.058 01:44:05 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:52.058 01:44:05 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK6 00:22:52.058 01:44:05 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:52.058 01:44:05 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK6 00:22:52.316 01:44:05 -- common/autotest_common.sh@1210 -- # return 0 00:22:52.316 01:44:05 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:22:52.316 01:44:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:52.316 01:44:05 -- common/autotest_common.sh@10 -- # set +x 00:22:52.316 01:44:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:52.316 01:44:05 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:52.316 01:44:05 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:22:52.316 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:22:52.316 01:44:05 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:22:52.316 01:44:05 -- common/autotest_common.sh@1198 -- # local i=0 00:22:52.316 01:44:05 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:52.316 01:44:05 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK7 00:22:52.316 01:44:05 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:52.316 01:44:05 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK7 00:22:52.316 01:44:05 -- common/autotest_common.sh@1210 -- # return 0 00:22:52.316 01:44:05 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:22:52.316 01:44:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:52.316 01:44:05 -- common/autotest_common.sh@10 -- # set +x 00:22:52.316 01:44:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:52.316 01:44:05 -- 
target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:52.317 01:44:05 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:22:52.575 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:22:52.575 01:44:05 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:22:52.575 01:44:05 -- common/autotest_common.sh@1198 -- # local i=0 00:22:52.575 01:44:05 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:52.575 01:44:05 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK8 00:22:52.575 01:44:05 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:52.575 01:44:05 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK8 00:22:52.575 01:44:05 -- common/autotest_common.sh@1210 -- # return 0 00:22:52.575 01:44:05 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:22:52.575 01:44:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:52.575 01:44:05 -- common/autotest_common.sh@10 -- # set +x 00:22:52.575 01:44:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:52.575 01:44:05 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:52.575 01:44:05 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:22:52.833 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:22:52.833 01:44:05 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:22:52.833 01:44:05 -- common/autotest_common.sh@1198 -- # local i=0 00:22:52.833 01:44:05 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:52.833 01:44:05 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK9 00:22:52.833 01:44:05 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:52.833 01:44:05 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK9 00:22:52.833 01:44:05 -- common/autotest_common.sh@1210 -- # return 0 00:22:52.833 01:44:05 -- 
target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:22:52.833 01:44:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:52.833 01:44:05 -- common/autotest_common.sh@10 -- # set +x 00:22:52.833 01:44:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:52.833 01:44:05 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:52.833 01:44:05 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:22:52.833 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:22:52.833 01:44:05 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:22:52.833 01:44:05 -- common/autotest_common.sh@1198 -- # local i=0 00:22:52.833 01:44:05 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:52.833 01:44:05 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK10 00:22:52.833 01:44:05 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:52.833 01:44:05 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK10 00:22:52.833 01:44:05 -- common/autotest_common.sh@1210 -- # return 0 00:22:52.833 01:44:05 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:22:52.833 01:44:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:52.833 01:44:05 -- common/autotest_common.sh@10 -- # set +x 00:22:52.833 01:44:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:52.833 01:44:05 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:52.833 01:44:05 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:22:52.833 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:22:52.833 01:44:05 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:22:52.833 01:44:05 -- common/autotest_common.sh@1198 -- # local i=0 00:22:52.833 01:44:05 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:52.833 01:44:05 
-- common/autotest_common.sh@1199 -- # grep -q -w SPDK11 00:22:52.833 01:44:05 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:52.833 01:44:05 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK11 00:22:52.833 01:44:05 -- common/autotest_common.sh@1210 -- # return 0 00:22:52.833 01:44:05 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:22:52.833 01:44:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:52.833 01:44:05 -- common/autotest_common.sh@10 -- # set +x 00:22:52.833 01:44:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:52.833 01:44:05 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:22:52.833 01:44:05 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:22:52.833 01:44:05 -- target/multiconnection.sh@47 -- # nvmftestfini 00:22:52.833 01:44:05 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:52.833 01:44:05 -- nvmf/common.sh@116 -- # sync 00:22:52.833 01:44:05 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:52.833 01:44:05 -- nvmf/common.sh@119 -- # set +e 00:22:52.833 01:44:05 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:52.833 01:44:05 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:52.833 rmmod nvme_tcp 00:22:52.833 rmmod nvme_fabrics 00:22:52.833 rmmod nvme_keyring 00:22:53.092 01:44:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:53.092 01:44:05 -- nvmf/common.sh@123 -- # set -e 00:22:53.092 01:44:05 -- nvmf/common.sh@124 -- # return 0 00:22:53.092 01:44:05 -- nvmf/common.sh@477 -- # '[' -n 3822166 ']' 00:22:53.092 01:44:05 -- nvmf/common.sh@478 -- # killprocess 3822166 00:22:53.092 01:44:05 -- common/autotest_common.sh@926 -- # '[' -z 3822166 ']' 00:22:53.092 01:44:05 -- common/autotest_common.sh@930 -- # kill -0 3822166 00:22:53.092 01:44:05 -- common/autotest_common.sh@931 -- # uname 00:22:53.092 01:44:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:53.092 01:44:05 -- 
common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3822166 00:22:53.092 01:44:05 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:53.092 01:44:05 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:53.092 01:44:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3822166' 00:22:53.092 killing process with pid 3822166 00:22:53.092 01:44:05 -- common/autotest_common.sh@945 -- # kill 3822166 00:22:53.092 01:44:05 -- common/autotest_common.sh@950 -- # wait 3822166 00:22:53.662 01:44:06 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:53.662 01:44:06 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:53.662 01:44:06 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:53.662 01:44:06 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:53.662 01:44:06 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:53.662 01:44:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:53.662 01:44:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:53.662 01:44:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:55.573 01:44:08 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:22:55.573 00:22:55.573 real 1m0.583s 00:22:55.573 user 3m21.260s 00:22:55.573 sys 0m24.823s 00:22:55.573 01:44:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:55.573 01:44:08 -- common/autotest_common.sh@10 -- # set +x 00:22:55.573 ************************************ 00:22:55.573 END TEST nvmf_multiconnection 00:22:55.573 ************************************ 00:22:55.573 01:44:08 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:22:55.573 01:44:08 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:22:55.573 01:44:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:55.573 01:44:08 -- common/autotest_common.sh@10 -- # set 
+x 00:22:55.573 ************************************ 00:22:55.573 START TEST nvmf_initiator_timeout 00:22:55.573 ************************************ 00:22:55.573 01:44:08 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:22:55.573 * Looking for test storage... 00:22:55.573 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:55.573 01:44:08 -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:55.573 01:44:08 -- nvmf/common.sh@7 -- # uname -s 00:22:55.573 01:44:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:55.573 01:44:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:55.573 01:44:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:55.573 01:44:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:55.573 01:44:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:55.573 01:44:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:55.573 01:44:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:55.573 01:44:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:55.573 01:44:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:55.573 01:44:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:55.573 01:44:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:55.573 01:44:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:55.573 01:44:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:55.573 01:44:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:55.573 01:44:08 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:55.573 01:44:08 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:55.573 01:44:08 -- 
scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:55.573 01:44:08 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:55.573 01:44:08 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:55.573 01:44:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:55.573 01:44:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:55.573 01:44:08 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:55.573 01:44:08 -- 
paths/export.sh@5 -- # export PATH 00:22:55.573 01:44:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:55.573 01:44:08 -- nvmf/common.sh@46 -- # : 0 00:22:55.573 01:44:08 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:55.573 01:44:08 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:55.573 01:44:08 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:55.573 01:44:08 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:55.573 01:44:08 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:55.573 01:44:08 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:55.573 01:44:08 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:55.573 01:44:08 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:55.573 01:44:08 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:55.573 01:44:08 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:55.573 01:44:08 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:22:55.573 01:44:08 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:55.573 01:44:08 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:55.573 01:44:08 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:55.573 01:44:08 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:55.573 01:44:08 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:55.573 01:44:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:55.573 01:44:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:22:55.573 01:44:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:55.573 01:44:08 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:22:55.573 01:44:08 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:22:55.573 01:44:08 -- nvmf/common.sh@284 -- # xtrace_disable 00:22:55.573 01:44:08 -- common/autotest_common.sh@10 -- # set +x 00:22:57.480 01:44:10 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:57.480 01:44:10 -- nvmf/common.sh@290 -- # pci_devs=() 00:22:57.480 01:44:10 -- nvmf/common.sh@290 -- # local -a pci_devs 00:22:57.480 01:44:10 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:22:57.480 01:44:10 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:22:57.480 01:44:10 -- nvmf/common.sh@292 -- # pci_drivers=() 00:22:57.480 01:44:10 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:22:57.480 01:44:10 -- nvmf/common.sh@294 -- # net_devs=() 00:22:57.480 01:44:10 -- nvmf/common.sh@294 -- # local -ga net_devs 00:22:57.480 01:44:10 -- nvmf/common.sh@295 -- # e810=() 00:22:57.480 01:44:10 -- nvmf/common.sh@295 -- # local -ga e810 00:22:57.480 01:44:10 -- nvmf/common.sh@296 -- # x722=() 00:22:57.480 01:44:10 -- nvmf/common.sh@296 -- # local -ga x722 00:22:57.480 01:44:10 -- nvmf/common.sh@297 -- # mlx=() 00:22:57.480 01:44:10 -- nvmf/common.sh@297 -- # local -ga mlx 00:22:57.480 01:44:10 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:57.480 01:44:10 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:57.480 01:44:10 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:57.480 01:44:10 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:57.480 01:44:10 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:57.480 01:44:10 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:57.480 01:44:10 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:57.480 
01:44:10 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:57.480 01:44:10 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:57.480 01:44:10 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:57.481 01:44:10 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:57.481 01:44:10 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:22:57.481 01:44:10 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:22:57.481 01:44:10 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:22:57.481 01:44:10 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:22:57.481 01:44:10 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:22:57.481 01:44:10 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:22:57.481 01:44:10 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:57.481 01:44:10 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:57.481 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:57.481 01:44:10 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:57.481 01:44:10 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:57.481 01:44:10 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:57.481 01:44:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:57.481 01:44:10 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:57.481 01:44:10 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:57.481 01:44:10 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:57.481 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:57.481 01:44:10 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:57.481 01:44:10 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:57.481 01:44:10 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:57.481 01:44:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:57.481 01:44:10 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:57.481 01:44:10 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 
00:22:57.481 01:44:10 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:22:57.481 01:44:10 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:22:57.481 01:44:10 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:57.481 01:44:10 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:57.481 01:44:10 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:57.481 01:44:10 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:57.481 01:44:10 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:57.481 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:57.481 01:44:10 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:57.481 01:44:10 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:57.481 01:44:10 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:57.481 01:44:10 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:57.481 01:44:10 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:57.481 01:44:10 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:57.481 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:57.481 01:44:10 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:57.481 01:44:10 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:22:57.481 01:44:10 -- nvmf/common.sh@402 -- # is_hw=yes 00:22:57.481 01:44:10 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:22:57.481 01:44:10 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:22:57.481 01:44:10 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:22:57.481 01:44:10 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:57.481 01:44:10 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:57.481 01:44:10 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:57.481 01:44:10 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:22:57.481 01:44:10 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:57.481 01:44:10 -- 
nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:57.481 01:44:10 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:22:57.481 01:44:10 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:57.481 01:44:10 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:57.481 01:44:10 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:22:57.481 01:44:10 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:22:57.481 01:44:10 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:22:57.481 01:44:10 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:57.740 01:44:10 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:57.740 01:44:10 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:57.740 01:44:10 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:22:57.740 01:44:10 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:57.740 01:44:10 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:57.740 01:44:10 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:57.740 01:44:10 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:22:57.740 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:57.740 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.135 ms 00:22:57.740 00:22:57.740 --- 10.0.0.2 ping statistics --- 00:22:57.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:57.740 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:22:57.740 01:44:10 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:57.740 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:57.740 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:22:57.740 00:22:57.740 --- 10.0.0.1 ping statistics --- 00:22:57.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:57.740 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:22:57.740 01:44:10 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:57.740 01:44:10 -- nvmf/common.sh@410 -- # return 0 00:22:57.740 01:44:10 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:57.740 01:44:10 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:57.740 01:44:10 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:57.740 01:44:10 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:57.740 01:44:10 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:57.740 01:44:10 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:57.740 01:44:10 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:57.740 01:44:10 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:22:57.740 01:44:10 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:57.740 01:44:10 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:57.740 01:44:10 -- common/autotest_common.sh@10 -- # set +x 00:22:57.740 01:44:10 -- nvmf/common.sh@469 -- # nvmfpid=3831128 00:22:57.740 01:44:10 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:57.740 01:44:10 -- nvmf/common.sh@470 -- # waitforlisten 3831128 00:22:57.740 01:44:10 -- common/autotest_common.sh@819 -- # '[' -z 3831128 ']' 00:22:57.740 01:44:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:57.740 01:44:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:57.740 01:44:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:22:57.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:57.740 01:44:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:57.740 01:44:10 -- common/autotest_common.sh@10 -- # set +x 00:22:57.740 [2024-07-23 01:44:10.752286] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:22:57.740 [2024-07-23 01:44:10.752370] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:57.740 EAL: No free 2048 kB hugepages reported on node 1 00:22:57.740 [2024-07-23 01:44:10.830082] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:58.000 [2024-07-23 01:44:10.924969] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:58.000 [2024-07-23 01:44:10.925103] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:58.000 [2024-07-23 01:44:10.925120] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:58.000 [2024-07-23 01:44:10.925133] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:58.000 [2024-07-23 01:44:10.925184] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:58.000 [2024-07-23 01:44:10.925212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:58.000 [2024-07-23 01:44:10.925268] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:58.001 [2024-07-23 01:44:10.925271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:58.938 01:44:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:58.938 01:44:11 -- common/autotest_common.sh@852 -- # return 0 00:22:58.938 01:44:11 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:58.938 01:44:11 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:58.938 01:44:11 -- common/autotest_common.sh@10 -- # set +x 00:22:58.938 01:44:11 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:58.938 01:44:11 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:22:58.938 01:44:11 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:58.938 01:44:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:58.938 01:44:11 -- common/autotest_common.sh@10 -- # set +x 00:22:58.938 Malloc0 00:22:58.938 01:44:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:58.938 01:44:11 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:22:58.938 01:44:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:58.938 01:44:11 -- common/autotest_common.sh@10 -- # set +x 00:22:58.938 Delay0 00:22:58.938 01:44:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:58.938 01:44:11 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:58.938 01:44:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:58.938 01:44:11 -- 
common/autotest_common.sh@10 -- # set +x 00:22:58.938 [2024-07-23 01:44:11.820656] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:58.938 01:44:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:58.938 01:44:11 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:22:58.938 01:44:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:58.938 01:44:11 -- common/autotest_common.sh@10 -- # set +x 00:22:58.938 01:44:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:58.938 01:44:11 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:58.938 01:44:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:58.938 01:44:11 -- common/autotest_common.sh@10 -- # set +x 00:22:58.938 01:44:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:58.938 01:44:11 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:58.938 01:44:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:58.938 01:44:11 -- common/autotest_common.sh@10 -- # set +x 00:22:58.938 [2024-07-23 01:44:11.848918] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:58.938 01:44:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:58.938 01:44:11 -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:22:59.510 01:44:12 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:22:59.510 01:44:12 -- common/autotest_common.sh@1177 -- # local i=0 00:22:59.510 01:44:12 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:59.510 01:44:12 -- 
common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:59.510 01:44:12 -- common/autotest_common.sh@1184 -- # sleep 2 00:23:01.415 01:44:14 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:23:01.415 01:44:14 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:23:01.415 01:44:14 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:23:01.415 01:44:14 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:23:01.415 01:44:14 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:23:01.415 01:44:14 -- common/autotest_common.sh@1187 -- # return 0 00:23:01.415 01:44:14 -- target/initiator_timeout.sh@35 -- # fio_pid=3831697 00:23:01.415 01:44:14 -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:23:01.415 01:44:14 -- target/initiator_timeout.sh@37 -- # sleep 3 00:23:01.415 [global] 00:23:01.415 thread=1 00:23:01.415 invalidate=1 00:23:01.415 rw=write 00:23:01.415 time_based=1 00:23:01.415 runtime=60 00:23:01.415 ioengine=libaio 00:23:01.415 direct=1 00:23:01.415 bs=4096 00:23:01.415 iodepth=1 00:23:01.415 norandommap=0 00:23:01.415 numjobs=1 00:23:01.415 00:23:01.415 verify_dump=1 00:23:01.415 verify_backlog=512 00:23:01.415 verify_state_save=0 00:23:01.415 do_verify=1 00:23:01.415 verify=crc32c-intel 00:23:01.415 [job0] 00:23:01.415 filename=/dev/nvme0n1 00:23:01.415 Could not set queue depth (nvme0n1) 00:23:01.673 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:01.673 fio-3.35 00:23:01.673 Starting 1 thread 00:23:04.999 01:44:17 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:23:04.999 01:44:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:04.999 01:44:17 -- common/autotest_common.sh@10 -- # set +x 00:23:04.999 true 00:23:04.999 01:44:17 -- common/autotest_common.sh@579 -- # [[ 
0 == 0 ]] 00:23:04.999 01:44:17 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:23:04.999 01:44:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:04.999 01:44:17 -- common/autotest_common.sh@10 -- # set +x 00:23:04.999 true 00:23:04.999 01:44:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:04.999 01:44:17 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:23:04.999 01:44:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:04.999 01:44:17 -- common/autotest_common.sh@10 -- # set +x 00:23:04.999 true 00:23:04.999 01:44:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:04.999 01:44:17 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:23:04.999 01:44:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:04.999 01:44:17 -- common/autotest_common.sh@10 -- # set +x 00:23:04.999 true 00:23:04.999 01:44:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:04.999 01:44:17 -- target/initiator_timeout.sh@45 -- # sleep 3 00:23:07.534 01:44:20 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:23:07.534 01:44:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:07.534 01:44:20 -- common/autotest_common.sh@10 -- # set +x 00:23:07.534 true 00:23:07.534 01:44:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:07.534 01:44:20 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:23:07.534 01:44:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:07.534 01:44:20 -- common/autotest_common.sh@10 -- # set +x 00:23:07.534 true 00:23:07.534 01:44:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:07.534 01:44:20 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:23:07.534 01:44:20 -- common/autotest_common.sh@551 
-- # xtrace_disable 00:23:07.534 01:44:20 -- common/autotest_common.sh@10 -- # set +x 00:23:07.534 true 00:23:07.534 01:44:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:07.534 01:44:20 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:23:07.534 01:44:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:07.534 01:44:20 -- common/autotest_common.sh@10 -- # set +x 00:23:07.534 true 00:23:07.534 01:44:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:07.534 01:44:20 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:23:07.534 01:44:20 -- target/initiator_timeout.sh@54 -- # wait 3831697 00:24:03.780 00:24:03.780 job0: (groupid=0, jobs=1): err= 0: pid=3831772: Tue Jul 23 01:45:14 2024 00:24:03.780 read: IOPS=26, BW=105KiB/s (107kB/s)(6284KiB/60001msec) 00:24:03.780 slat (usec): min=7, max=11480, avg=35.62, stdev=304.80 00:24:03.780 clat (usec): min=423, max=41338k, avg=37726.03, stdev=1042811.36 00:24:03.780 lat (usec): min=439, max=41338k, avg=37761.65, stdev=1042811.09 00:24:03.780 clat percentiles (usec): 00:24:03.780 | 1.00th=[ 449], 5.00th=[ 461], 10.00th=[ 474], 00:24:03.780 | 20.00th=[ 494], 30.00th=[ 515], 40.00th=[ 529], 00:24:03.780 | 50.00th=[ 537], 60.00th=[ 545], 70.00th=[ 562], 00:24:03.780 | 80.00th=[ 41157], 90.00th=[ 41157], 95.00th=[ 41157], 00:24:03.780 | 99.00th=[ 42206], 99.50th=[ 42206], 99.90th=[ 42730], 00:24:03.780 | 99.95th=[17112761], 99.99th=[17112761] 00:24:03.780 write: IOPS=34, BW=137KiB/s (140kB/s)(8192KiB/60001msec); 0 zone resets 00:24:03.780 slat (nsec): min=7008, max=83645, avg=25143.70, stdev=11543.98 00:24:03.780 clat (usec): min=217, max=636, avg=292.95, stdev=44.58 00:24:03.780 lat (usec): min=226, max=675, avg=318.09, stdev=50.93 00:24:03.780 clat percentiles (usec): 00:24:03.780 | 1.00th=[ 227], 5.00th=[ 237], 10.00th=[ 241], 20.00th=[ 249], 00:24:03.780 | 30.00th=[ 260], 40.00th=[ 273], 50.00th=[ 285], 60.00th=[ 302], 00:24:03.780 | 70.00th=[ 
322], 80.00th=[ 334], 90.00th=[ 351], 95.00th=[ 375], 00:24:03.780 | 99.00th=[ 392], 99.50th=[ 400], 99.90th=[ 453], 99.95th=[ 474], 00:24:03.780 | 99.99th=[ 635] 00:24:03.780 bw ( KiB/s): min= 648, max= 5608, per=100.00%, avg=3276.80, stdev=1905.84, samples=5 00:24:03.780 iops : min= 162, max= 1402, avg=819.20, stdev=476.46, samples=5 00:24:03.780 lat (usec) : 250=12.66%, 500=54.02%, 750=21.64% 00:24:03.780 lat (msec) : 50=11.66%, >=2000=0.03% 00:24:03.780 cpu : usr=0.07%, sys=0.17%, ctx=3621, majf=0, minf=2 00:24:03.780 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:03.780 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:03.780 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:03.780 issued rwts: total=1571,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:03.780 latency : target=0, window=0, percentile=100.00%, depth=1 00:24:03.780 00:24:03.780 Run status group 0 (all jobs): 00:24:03.780 READ: bw=105KiB/s (107kB/s), 105KiB/s-105KiB/s (107kB/s-107kB/s), io=6284KiB (6435kB), run=60001-60001msec 00:24:03.780 WRITE: bw=137KiB/s (140kB/s), 137KiB/s-137KiB/s (140kB/s-140kB/s), io=8192KiB (8389kB), run=60001-60001msec 00:24:03.780 00:24:03.780 Disk stats (read/write): 00:24:03.780 nvme0n1: ios=1667/2048, merge=0/0, ticks=17894/560, in_queue=18454, util=99.67% 00:24:03.780 01:45:14 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:24:03.780 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:03.780 01:45:14 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:24:03.780 01:45:14 -- common/autotest_common.sh@1198 -- # local i=0 00:24:03.780 01:45:14 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:24:03.780 01:45:14 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:24:03.780 01:45:14 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:24:03.780 
01:45:14 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:24:03.780 01:45:14 -- common/autotest_common.sh@1210 -- # return 0 00:24:03.780 01:45:14 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:24:03.780 01:45:14 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:24:03.780 nvmf hotplug test: fio successful as expected 00:24:03.780 01:45:14 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:03.780 01:45:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:03.780 01:45:14 -- common/autotest_common.sh@10 -- # set +x 00:24:03.780 01:45:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:03.780 01:45:14 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:24:03.780 01:45:14 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:24:03.780 01:45:14 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:24:03.780 01:45:14 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:03.780 01:45:14 -- nvmf/common.sh@116 -- # sync 00:24:03.780 01:45:14 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:03.780 01:45:14 -- nvmf/common.sh@119 -- # set +e 00:24:03.780 01:45:14 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:03.780 01:45:14 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:03.780 rmmod nvme_tcp 00:24:03.780 rmmod nvme_fabrics 00:24:03.780 rmmod nvme_keyring 00:24:03.780 01:45:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:03.780 01:45:14 -- nvmf/common.sh@123 -- # set -e 00:24:03.780 01:45:14 -- nvmf/common.sh@124 -- # return 0 00:24:03.780 01:45:14 -- nvmf/common.sh@477 -- # '[' -n 3831128 ']' 00:24:03.780 01:45:14 -- nvmf/common.sh@478 -- # killprocess 3831128 00:24:03.780 01:45:14 -- common/autotest_common.sh@926 -- # '[' -z 3831128 ']' 00:24:03.780 01:45:14 -- common/autotest_common.sh@930 -- # kill -0 3831128 00:24:03.780 01:45:14 -- common/autotest_common.sh@931 -- 
# uname 00:24:03.780 01:45:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:03.780 01:45:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3831128 00:24:03.780 01:45:14 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:03.780 01:45:14 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:03.781 01:45:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3831128' 00:24:03.781 killing process with pid 3831128 00:24:03.781 01:45:14 -- common/autotest_common.sh@945 -- # kill 3831128 00:24:03.781 01:45:14 -- common/autotest_common.sh@950 -- # wait 3831128 00:24:03.781 01:45:15 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:03.781 01:45:15 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:03.781 01:45:15 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:03.781 01:45:15 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:03.781 01:45:15 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:03.781 01:45:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:03.781 01:45:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:03.781 01:45:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:04.349 01:45:17 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:24:04.349 00:24:04.349 real 1m8.757s 00:24:04.349 user 4m13.545s 00:24:04.349 sys 0m6.508s 00:24:04.349 01:45:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:04.349 01:45:17 -- common/autotest_common.sh@10 -- # set +x 00:24:04.349 ************************************ 00:24:04.349 END TEST nvmf_initiator_timeout 00:24:04.349 ************************************ 00:24:04.349 01:45:17 -- nvmf/nvmf.sh@69 -- # [[ phy == phy ]] 00:24:04.349 01:45:17 -- nvmf/nvmf.sh@70 -- # '[' tcp = tcp ']' 00:24:04.349 01:45:17 -- nvmf/nvmf.sh@71 -- # gather_supported_nvmf_pci_devs 00:24:04.349 01:45:17 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:04.349 
01:45:17 -- common/autotest_common.sh@10 -- # set +x 00:24:06.251 01:45:19 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:06.251 01:45:19 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:06.251 01:45:19 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:06.251 01:45:19 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:06.251 01:45:19 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:06.251 01:45:19 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:06.251 01:45:19 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:06.251 01:45:19 -- nvmf/common.sh@294 -- # net_devs=() 00:24:06.251 01:45:19 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:06.251 01:45:19 -- nvmf/common.sh@295 -- # e810=() 00:24:06.251 01:45:19 -- nvmf/common.sh@295 -- # local -ga e810 00:24:06.251 01:45:19 -- nvmf/common.sh@296 -- # x722=() 00:24:06.251 01:45:19 -- nvmf/common.sh@296 -- # local -ga x722 00:24:06.251 01:45:19 -- nvmf/common.sh@297 -- # mlx=() 00:24:06.251 01:45:19 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:06.251 01:45:19 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:06.251 01:45:19 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:06.251 01:45:19 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:06.251 01:45:19 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:06.251 01:45:19 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:06.251 01:45:19 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:06.251 01:45:19 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:06.251 01:45:19 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:06.251 01:45:19 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:06.251 01:45:19 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:06.251 01:45:19 -- nvmf/common.sh@317 -- 
# mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:06.251 01:45:19 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:06.251 01:45:19 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:24:06.251 01:45:19 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:24:06.251 01:45:19 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:24:06.251 01:45:19 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:24:06.251 01:45:19 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:06.251 01:45:19 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:06.251 01:45:19 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:06.251 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:06.251 01:45:19 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:06.251 01:45:19 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:06.251 01:45:19 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:06.251 01:45:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:06.251 01:45:19 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:06.251 01:45:19 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:06.251 01:45:19 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:06.251 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:06.251 01:45:19 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:06.251 01:45:19 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:06.251 01:45:19 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:06.251 01:45:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:06.251 01:45:19 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:06.251 01:45:19 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:06.251 01:45:19 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:24:06.251 01:45:19 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:24:06.251 01:45:19 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:06.251 01:45:19 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:24:06.251 01:45:19 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:06.251 01:45:19 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:06.251 01:45:19 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:06.251 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:06.251 01:45:19 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:06.251 01:45:19 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:06.251 01:45:19 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:06.251 01:45:19 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:06.251 01:45:19 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:06.251 01:45:19 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:06.251 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:06.251 01:45:19 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:06.251 01:45:19 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:06.251 01:45:19 -- nvmf/nvmf.sh@72 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:06.251 01:45:19 -- nvmf/nvmf.sh@73 -- # (( 2 > 0 )) 00:24:06.251 01:45:19 -- nvmf/nvmf.sh@74 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:24:06.251 01:45:19 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:24:06.251 01:45:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:06.251 01:45:19 -- common/autotest_common.sh@10 -- # set +x 00:24:06.251 ************************************ 00:24:06.251 START TEST nvmf_perf_adq 00:24:06.251 ************************************ 00:24:06.251 01:45:19 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:24:06.251 * Looking for test storage... 
00:24:06.251 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:06.251 01:45:19 -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:06.251 01:45:19 -- nvmf/common.sh@7 -- # uname -s 00:24:06.251 01:45:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:06.251 01:45:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:06.251 01:45:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:06.251 01:45:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:06.251 01:45:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:06.251 01:45:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:06.251 01:45:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:06.251 01:45:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:06.251 01:45:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:06.251 01:45:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:06.251 01:45:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:06.251 01:45:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:06.251 01:45:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:06.251 01:45:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:06.251 01:45:19 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:06.251 01:45:19 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:06.251 01:45:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:06.251 01:45:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:06.251 01:45:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:06.252 01:45:19 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.252 01:45:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.252 01:45:19 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.252 01:45:19 -- paths/export.sh@5 -- # export PATH 00:24:06.252 01:45:19 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.252 01:45:19 -- nvmf/common.sh@46 -- # : 0 00:24:06.252 01:45:19 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:06.252 01:45:19 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:06.252 01:45:19 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:06.252 01:45:19 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:06.252 01:45:19 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:06.252 01:45:19 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:06.252 01:45:19 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:06.252 01:45:19 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:06.252 01:45:19 -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:24:06.252 01:45:19 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:06.252 01:45:19 -- common/autotest_common.sh@10 -- # set +x 00:24:08.156 01:45:21 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:08.156 01:45:21 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:08.156 01:45:21 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:08.156 01:45:21 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:08.156 01:45:21 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:08.156 01:45:21 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:08.156 01:45:21 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:08.156 01:45:21 -- nvmf/common.sh@294 -- # net_devs=() 00:24:08.156 01:45:21 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:08.156 01:45:21 
-- nvmf/common.sh@295 -- # e810=() 00:24:08.156 01:45:21 -- nvmf/common.sh@295 -- # local -ga e810 00:24:08.156 01:45:21 -- nvmf/common.sh@296 -- # x722=() 00:24:08.156 01:45:21 -- nvmf/common.sh@296 -- # local -ga x722 00:24:08.156 01:45:21 -- nvmf/common.sh@297 -- # mlx=() 00:24:08.156 01:45:21 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:08.156 01:45:21 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:08.156 01:45:21 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:08.156 01:45:21 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:08.156 01:45:21 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:08.156 01:45:21 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:08.156 01:45:21 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:08.156 01:45:21 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:08.156 01:45:21 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:08.156 01:45:21 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:08.156 01:45:21 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:08.156 01:45:21 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:08.156 01:45:21 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:08.156 01:45:21 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:24:08.156 01:45:21 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:24:08.156 01:45:21 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:24:08.156 01:45:21 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:24:08.156 01:45:21 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:08.156 01:45:21 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:08.156 01:45:21 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:08.156 Found 0000:0a:00.0 (0x8086 - 0x159b) 
00:24:08.156 01:45:21 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:08.156 01:45:21 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:08.156 01:45:21 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:08.156 01:45:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:08.156 01:45:21 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:08.156 01:45:21 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:08.156 01:45:21 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:08.156 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:08.156 01:45:21 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:08.156 01:45:21 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:08.156 01:45:21 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:08.156 01:45:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:08.156 01:45:21 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:08.156 01:45:21 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:08.156 01:45:21 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:24:08.156 01:45:21 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:24:08.156 01:45:21 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:08.156 01:45:21 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:08.156 01:45:21 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:08.156 01:45:21 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:08.156 01:45:21 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:08.157 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:08.157 01:45:21 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:08.157 01:45:21 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:08.157 01:45:21 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:08.157 01:45:21 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:08.157 01:45:21 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:08.157 01:45:21 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:08.157 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:08.157 01:45:21 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:08.157 01:45:21 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:08.157 01:45:21 -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:08.157 01:45:21 -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:24:08.157 01:45:21 -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:24:08.157 01:45:21 -- target/perf_adq.sh@59 -- # adq_reload_driver 00:24:08.157 01:45:21 -- target/perf_adq.sh@52 -- # rmmod ice 00:24:08.726 01:45:21 -- target/perf_adq.sh@53 -- # modprobe ice 00:24:10.629 01:45:23 -- target/perf_adq.sh@54 -- # sleep 5 00:24:15.962 01:45:28 -- target/perf_adq.sh@67 -- # nvmftestinit 00:24:15.962 01:45:28 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:15.962 01:45:28 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:15.962 01:45:28 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:15.962 01:45:28 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:15.962 01:45:28 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:15.962 01:45:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:15.962 01:45:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:15.962 01:45:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:15.962 01:45:28 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:15.962 01:45:28 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:15.962 01:45:28 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:15.962 01:45:28 -- common/autotest_common.sh@10 -- # set +x 00:24:15.962 01:45:28 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:15.962 01:45:28 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:15.962 
01:45:28 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:15.962 01:45:28 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:15.962 01:45:28 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:15.962 01:45:28 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:15.962 01:45:28 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:15.962 01:45:28 -- nvmf/common.sh@294 -- # net_devs=() 00:24:15.962 01:45:28 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:15.962 01:45:28 -- nvmf/common.sh@295 -- # e810=() 00:24:15.962 01:45:28 -- nvmf/common.sh@295 -- # local -ga e810 00:24:15.962 01:45:28 -- nvmf/common.sh@296 -- # x722=() 00:24:15.962 01:45:28 -- nvmf/common.sh@296 -- # local -ga x722 00:24:15.962 01:45:28 -- nvmf/common.sh@297 -- # mlx=() 00:24:15.962 01:45:28 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:15.962 01:45:28 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:15.962 01:45:28 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:15.962 01:45:28 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:15.962 01:45:28 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:15.962 01:45:28 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:15.962 01:45:28 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:15.962 01:45:28 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:15.962 01:45:28 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:15.962 01:45:28 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:15.962 01:45:28 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:15.962 01:45:28 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:15.962 01:45:28 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:15.962 01:45:28 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:24:15.962 01:45:28 -- 
nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:24:15.962 01:45:28 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:24:15.962 01:45:28 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:24:15.962 01:45:28 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:15.962 01:45:28 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:15.962 01:45:28 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:15.962 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:15.962 01:45:28 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:15.962 01:45:28 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:15.962 01:45:28 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:15.962 01:45:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:15.962 01:45:28 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:15.962 01:45:28 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:15.962 01:45:28 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:15.962 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:15.962 01:45:28 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:15.962 01:45:28 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:15.962 01:45:28 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:15.962 01:45:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:15.962 01:45:28 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:15.962 01:45:28 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:15.962 01:45:28 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:24:15.962 01:45:28 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:24:15.962 01:45:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:15.962 01:45:28 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:15.962 01:45:28 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:15.962 01:45:28 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:15.962 01:45:28 -- nvmf/common.sh@388 -- # echo 'Found net 
devices under 0000:0a:00.0: cvl_0_0' 00:24:15.962 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:15.962 01:45:28 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:15.962 01:45:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:15.962 01:45:28 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:15.962 01:45:28 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:15.962 01:45:28 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:15.963 01:45:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:15.963 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:15.963 01:45:28 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:15.963 01:45:28 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:15.963 01:45:28 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:15.963 01:45:28 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:15.963 01:45:28 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:24:15.963 01:45:28 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:24:15.963 01:45:28 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:15.963 01:45:28 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:15.963 01:45:28 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:15.963 01:45:28 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:24:15.963 01:45:28 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:15.963 01:45:28 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:15.963 01:45:28 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:24:15.963 01:45:28 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:15.963 01:45:28 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:15.963 01:45:28 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:24:15.963 01:45:28 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:24:15.963 01:45:28 -- nvmf/common.sh@247 -- # ip 
netns add cvl_0_0_ns_spdk
00:24:15.963 01:45:28 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:24:15.963 01:45:28 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:24:15.963 01:45:28 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:24:15.963 01:45:28 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up
00:24:15.963 01:45:28 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:24:15.963 01:45:28 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:24:15.963 01:45:28 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:24:15.963 01:45:28 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2
00:24:15.963 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:24:15.963 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms
00:24:15.963
00:24:15.963 --- 10.0.0.2 ping statistics ---
00:24:15.963 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:15.963 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms
00:24:15.963 01:45:28 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:24:15.963 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:24:15.963 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms
00:24:15.963
00:24:15.963 --- 10.0.0.1 ping statistics ---
00:24:15.963 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:15.963 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms
00:24:15.963 01:45:28 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:24:15.963 01:45:28 -- nvmf/common.sh@410 -- # return 0
00:24:15.963 01:45:28 -- nvmf/common.sh@438 -- # '[' '' == iso ']'
00:24:15.963 01:45:28 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:24:15.963 01:45:28 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:24:15.963 01:45:28 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:24:15.963 01:45:28 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:24:15.963 01:45:28 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:24:15.963 01:45:28 -- nvmf/common.sh@462 -- # modprobe nvme-tcp
00:24:15.963 01:45:28 -- target/perf_adq.sh@68 -- # nvmfappstart -m 0xF --wait-for-rpc
00:24:15.963 01:45:28 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:24:15.963 01:45:28 -- common/autotest_common.sh@712 -- # xtrace_disable
00:24:15.963 01:45:28 -- common/autotest_common.sh@10 -- # set +x
00:24:15.963 01:45:28 -- nvmf/common.sh@469 -- # nvmfpid=3844193
00:24:15.963 01:45:28 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc
00:24:15.963 01:45:28 -- nvmf/common.sh@470 -- # waitforlisten 3844193
00:24:15.963 01:45:28 -- common/autotest_common.sh@819 -- # '[' -z 3844193 ']'
00:24:15.963 01:45:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:15.963 01:45:28 -- common/autotest_common.sh@824 -- # local max_retries=100
00:24:15.963 01:45:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:24:15.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:15.963 01:45:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:15.963 01:45:28 -- common/autotest_common.sh@10 -- # set +x 00:24:15.963 [2024-07-23 01:45:28.909974] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:24:15.963 [2024-07-23 01:45:28.910041] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:15.963 EAL: No free 2048 kB hugepages reported on node 1 00:24:15.963 [2024-07-23 01:45:28.974944] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:16.219 [2024-07-23 01:45:29.064971] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:16.219 [2024-07-23 01:45:29.065116] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:16.219 [2024-07-23 01:45:29.065134] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:16.220 [2024-07-23 01:45:29.065146] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:16.220 [2024-07-23 01:45:29.065207] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:16.220 [2024-07-23 01:45:29.065264] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:16.220 [2024-07-23 01:45:29.065291] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:16.220 [2024-07-23 01:45:29.065293] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:16.220 01:45:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:16.220 01:45:29 -- common/autotest_common.sh@852 -- # return 0 00:24:16.220 01:45:29 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:16.220 01:45:29 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:16.220 01:45:29 -- common/autotest_common.sh@10 -- # set +x 00:24:16.220 01:45:29 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:16.220 01:45:29 -- target/perf_adq.sh@69 -- # adq_configure_nvmf_target 0 00:24:16.220 01:45:29 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:24:16.220 01:45:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:16.220 01:45:29 -- common/autotest_common.sh@10 -- # set +x 00:24:16.220 01:45:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:16.220 01:45:29 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:24:16.220 01:45:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:16.220 01:45:29 -- common/autotest_common.sh@10 -- # set +x 00:24:16.220 01:45:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:16.220 01:45:29 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:24:16.220 01:45:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:16.220 01:45:29 -- common/autotest_common.sh@10 -- # set +x 00:24:16.220 [2024-07-23 01:45:29.277114] tcp.c: 659:nvmf_tcp_create: *NOTICE*: 
*** TCP Transport Init *** 00:24:16.220 01:45:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:16.220 01:45:29 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:16.220 01:45:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:16.220 01:45:29 -- common/autotest_common.sh@10 -- # set +x 00:24:16.220 Malloc1 00:24:16.220 01:45:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:16.220 01:45:29 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:16.220 01:45:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:16.220 01:45:29 -- common/autotest_common.sh@10 -- # set +x 00:24:16.220 01:45:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:16.220 01:45:29 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:16.220 01:45:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:16.220 01:45:29 -- common/autotest_common.sh@10 -- # set +x 00:24:16.477 01:45:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:16.477 01:45:29 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:16.477 01:45:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:16.477 01:45:29 -- common/autotest_common.sh@10 -- # set +x 00:24:16.477 [2024-07-23 01:45:29.327818] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:16.477 01:45:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:16.477 01:45:29 -- target/perf_adq.sh@73 -- # perfpid=3844226 00:24:16.477 01:45:29 -- target/perf_adq.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:24:16.477 01:45:29 -- target/perf_adq.sh@74 -- # sleep 2 
00:24:16.477 EAL: No free 2048 kB hugepages reported on node 1
00:24:18.387 01:45:31 -- target/perf_adq.sh@76 -- # rpc_cmd nvmf_get_stats
00:24:18.387 01:45:31 -- target/perf_adq.sh@76 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length'
00:24:18.387 01:45:31 -- common/autotest_common.sh@551 -- # xtrace_disable
00:24:18.387 01:45:31 -- target/perf_adq.sh@76 -- # wc -l
00:24:18.387 01:45:31 -- common/autotest_common.sh@10 -- # set +x
00:24:18.387 01:45:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:24:18.387 01:45:31 -- target/perf_adq.sh@76 -- # count=4
00:24:18.387 01:45:31 -- target/perf_adq.sh@77 -- # [[ 4 -ne 4 ]]
00:24:18.387 01:45:31 -- target/perf_adq.sh@81 -- # wait 3844226
00:24:26.518 Initializing NVMe Controllers
00:24:26.518 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:24:26.518 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4
00:24:26.518 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5
00:24:26.518 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6
00:24:26.518 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7
00:24:26.518 Initialization complete. Launching workers.
00:24:26.518 ========================================================
00:24:26.518 Latency(us)
00:24:26.518 Device Information : IOPS MiB/s Average min max
00:24:26.518 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 11119.56 43.44 5768.68 1039.90 45520.07
00:24:26.518 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 11578.36 45.23 5529.26 1303.78 8746.24
00:24:26.518 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10293.97 40.21 6218.98 1314.07 9846.14
00:24:26.518 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10493.27 40.99 6098.67 1261.88 10602.19
00:24:26.518 ========================================================
00:24:26.518 Total : 43485.16 169.86 5891.16 1039.90 45520.07
00:24:26.518
00:24:26.518 01:45:39 -- target/perf_adq.sh@82 -- # nvmftestfini
00:24:26.518 01:45:39 -- nvmf/common.sh@476 -- # nvmfcleanup
00:24:26.518 01:45:39 -- nvmf/common.sh@116 -- # sync
00:24:26.518 01:45:39 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:24:26.518 01:45:39 -- nvmf/common.sh@119 -- # set +e
00:24:26.518 01:45:39 -- nvmf/common.sh@120 -- # for i in {1..20}
00:24:26.518 01:45:39 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:24:26.518 rmmod nvme_tcp
00:24:26.518 rmmod nvme_fabrics
00:24:26.518 rmmod nvme_keyring
00:24:26.518 01:45:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:24:26.518 01:45:39 -- nvmf/common.sh@123 -- # set -e
00:24:26.518 01:45:39 -- nvmf/common.sh@124 -- # return 0
00:24:26.518 01:45:39 -- nvmf/common.sh@477 -- # '[' -n 3844193 ']'
00:24:26.518 01:45:39 -- nvmf/common.sh@478 -- # killprocess 3844193
00:24:26.518 01:45:39 -- common/autotest_common.sh@926 -- # '[' -z 3844193 ']'
00:24:26.518 01:45:39 -- common/autotest_common.sh@930 -- # kill -0 3844193
00:24:26.518 01:45:39 -- common/autotest_common.sh@931 -- # uname
00:24:26.518 01:45:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:24:26.518 01:45:39 --
common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3844193 00:24:26.518 01:45:39 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:26.518 01:45:39 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:26.518 01:45:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3844193' 00:24:26.518 killing process with pid 3844193 00:24:26.518 01:45:39 -- common/autotest_common.sh@945 -- # kill 3844193 00:24:26.518 01:45:39 -- common/autotest_common.sh@950 -- # wait 3844193 00:24:26.777 01:45:39 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:26.777 01:45:39 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:26.777 01:45:39 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:26.777 01:45:39 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:26.777 01:45:39 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:26.777 01:45:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:26.777 01:45:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:26.777 01:45:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:29.315 01:45:41 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:24:29.315 01:45:41 -- target/perf_adq.sh@84 -- # adq_reload_driver 00:24:29.315 01:45:41 -- target/perf_adq.sh@52 -- # rmmod ice 00:24:29.575 01:45:42 -- target/perf_adq.sh@53 -- # modprobe ice 00:24:31.480 01:45:44 -- target/perf_adq.sh@54 -- # sleep 5 00:24:36.758 01:45:49 -- target/perf_adq.sh@87 -- # nvmftestinit 00:24:36.758 01:45:49 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:36.758 01:45:49 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:36.758 01:45:49 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:36.758 01:45:49 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:36.758 01:45:49 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:36.758 01:45:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:36.758 
01:45:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:36.758 01:45:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:36.758 01:45:49 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:36.758 01:45:49 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:36.758 01:45:49 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:36.758 01:45:49 -- common/autotest_common.sh@10 -- # set +x 00:24:36.758 01:45:49 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:36.758 01:45:49 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:36.758 01:45:49 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:36.758 01:45:49 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:36.758 01:45:49 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:36.758 01:45:49 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:36.758 01:45:49 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:36.758 01:45:49 -- nvmf/common.sh@294 -- # net_devs=() 00:24:36.758 01:45:49 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:36.758 01:45:49 -- nvmf/common.sh@295 -- # e810=() 00:24:36.758 01:45:49 -- nvmf/common.sh@295 -- # local -ga e810 00:24:36.758 01:45:49 -- nvmf/common.sh@296 -- # x722=() 00:24:36.758 01:45:49 -- nvmf/common.sh@296 -- # local -ga x722 00:24:36.758 01:45:49 -- nvmf/common.sh@297 -- # mlx=() 00:24:36.758 01:45:49 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:36.758 01:45:49 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:36.758 01:45:49 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:36.758 01:45:49 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:36.758 01:45:49 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:36.758 01:45:49 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:36.758 01:45:49 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:36.758 01:45:49 -- 
nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:36.758 01:45:49 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:36.758 01:45:49 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:36.758 01:45:49 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:36.758 01:45:49 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:36.758 01:45:49 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:36.758 01:45:49 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:24:36.758 01:45:49 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:24:36.758 01:45:49 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:24:36.758 01:45:49 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:24:36.758 01:45:49 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:36.758 01:45:49 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:36.758 01:45:49 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:36.758 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:36.758 01:45:49 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:36.758 01:45:49 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:36.758 01:45:49 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:36.758 01:45:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:36.758 01:45:49 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:36.758 01:45:49 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:36.758 01:45:49 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:36.758 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:36.758 01:45:49 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:36.758 01:45:49 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:36.758 01:45:49 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:36.758 01:45:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:36.758 01:45:49 -- nvmf/common.sh@351 -- # 
[[ tcp == rdma ]] 00:24:36.758 01:45:49 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:36.758 01:45:49 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:24:36.758 01:45:49 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:24:36.758 01:45:49 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:36.758 01:45:49 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:36.758 01:45:49 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:36.758 01:45:49 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:36.758 01:45:49 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:36.758 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:36.758 01:45:49 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:36.758 01:45:49 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:36.758 01:45:49 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:36.758 01:45:49 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:36.758 01:45:49 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:36.758 01:45:49 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:36.758 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:36.758 01:45:49 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:36.758 01:45:49 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:36.758 01:45:49 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:36.758 01:45:49 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:36.758 01:45:49 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:24:36.759 01:45:49 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:24:36.759 01:45:49 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:36.759 01:45:49 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:36.759 01:45:49 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:36.759 01:45:49 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:24:36.759 01:45:49 -- 
nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:24:36.759 01:45:49 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:24:36.759 01:45:49 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP=
00:24:36.759 01:45:49 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:24:36.759 01:45:49 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:24:36.759 01:45:49 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0
00:24:36.759 01:45:49 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1
00:24:36.759 01:45:49 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk
00:24:36.759 01:45:49 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:24:36.759 01:45:49 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:24:36.759 01:45:49 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:24:36.759 01:45:49 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up
00:24:36.759 01:45:49 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:24:36.759 01:45:49 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:24:36.759 01:45:49 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:24:36.759 01:45:49 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2
00:24:36.759 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:24:36.759 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms
00:24:36.759
00:24:36.759 --- 10.0.0.2 ping statistics ---
00:24:36.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:36.759 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms
00:24:36.759 01:45:49 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:24:36.759 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:24:36.759 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms
00:24:36.759
00:24:36.759 --- 10.0.0.1 ping statistics ---
00:24:36.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:36.759 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms
00:24:36.759 01:45:49 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:24:36.759 01:45:49 -- nvmf/common.sh@410 -- # return 0
00:24:36.759 01:45:49 -- nvmf/common.sh@438 -- # '[' '' == iso ']'
00:24:36.759 01:45:49 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:24:36.759 01:45:49 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:24:36.759 01:45:49 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:24:36.759 01:45:49 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:24:36.759 01:45:49 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:24:36.759 01:45:49 -- nvmf/common.sh@462 -- # modprobe nvme-tcp
00:24:36.759 01:45:49 -- target/perf_adq.sh@88 -- # adq_configure_driver
00:24:36.759 01:45:49 -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on
00:24:36.759 01:45:49 -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
00:24:36.759 01:45:49 -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1
00:24:36.759 net.core.busy_poll = 1
00:24:36.759 01:45:49 -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1
00:24:36.759 net.core.busy_read = 1
00:24:36.759 01:45:49 -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc
00:24:36.759 01:45:49 -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
00:24:36.759 01:45:49 -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress
00:24:36.759 01:45:49 -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev
cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:24:36.759 01:45:49 -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:24:36.759 01:45:49 -- target/perf_adq.sh@89 -- # nvmfappstart -m 0xF --wait-for-rpc 00:24:36.759 01:45:49 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:36.759 01:45:49 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:36.759 01:45:49 -- common/autotest_common.sh@10 -- # set +x 00:24:36.759 01:45:49 -- nvmf/common.sh@469 -- # nvmfpid=3846910 00:24:36.759 01:45:49 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:24:36.759 01:45:49 -- nvmf/common.sh@470 -- # waitforlisten 3846910 00:24:36.759 01:45:49 -- common/autotest_common.sh@819 -- # '[' -z 3846910 ']' 00:24:36.759 01:45:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:36.759 01:45:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:36.759 01:45:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:36.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:36.759 01:45:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:36.759 01:45:49 -- common/autotest_common.sh@10 -- # set +x 00:24:36.759 [2024-07-23 01:45:49.842260] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
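The `adq_configure_driver` commands traced above can be summarized as a standalone sketch. The interface name (`cvl_0_0`), namespace (`cvl_0_0_ns_spdk`), target IP, and port 4420 are taken from this run; the queue split (2 traffic classes, 2 queues each) mirrors the logged `tc` invocations. This is an illustrative outline of what the log shows, not the canonical script (which lives in `test/nvmf/target/perf_adq.sh`), and it requires root plus an ADQ-capable E810 NIC to actually run:

```shell
#!/usr/bin/env bash
# Sketch of the ADQ driver setup logged above (names taken from this run).
NS=cvl_0_0_ns_spdk
IF=cvl_0_0
in_ns() { ip netns exec "$NS" "$@"; }

# Enable hardware TC offload; disable packet-inspect optimization on the port.
in_ns ethtool --offload "$IF" hw-tc-offload on
in_ns ethtool --set-priv-flags "$IF" channel-pkt-inspect-optimize off

# Busy polling keeps application threads spinning on their own queues.
sysctl -w net.core.busy_poll=1
sysctl -w net.core.busy_read=1

# Two traffic classes: TC0 = default (queues 0-1), TC1 = ADQ (queues 2-3).
in_ns tc qdisc add dev "$IF" root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel

# Steer NVMe/TCP traffic (dst port 4420) into TC1 entirely in hardware.
in_ns tc qdisc add dev "$IF" ingress
in_ns tc filter add dev "$IF" protocol ip parent ffff: prio 1 flower \
    dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
```

The `skip_sw hw_tc 1` pair is what makes this ADQ rather than ordinary flow steering: the flower match is installed only in NIC hardware, and matching connections are pinned to the dedicated TC1 queue set.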
00:24:36.759 [2024-07-23 01:45:49.842353] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:37.017 EAL: No free 2048 kB hugepages reported on node 1 00:24:37.017 [2024-07-23 01:45:49.908129] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:37.017 [2024-07-23 01:45:49.994308] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:37.017 [2024-07-23 01:45:49.994463] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:37.017 [2024-07-23 01:45:49.994480] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:37.017 [2024-07-23 01:45:49.994492] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:37.017 [2024-07-23 01:45:49.994545] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:37.017 [2024-07-23 01:45:49.994638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:37.017 [2024-07-23 01:45:49.994673] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:37.017 [2024-07-23 01:45:49.994675] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:37.017 01:45:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:37.017 01:45:50 -- common/autotest_common.sh@852 -- # return 0 00:24:37.017 01:45:50 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:37.017 01:45:50 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:37.017 01:45:50 -- common/autotest_common.sh@10 -- # set +x 00:24:37.017 01:45:50 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:37.017 01:45:50 -- target/perf_adq.sh@90 -- # adq_configure_nvmf_target 1 00:24:37.017 01:45:50 -- 
target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:24:37.017 01:45:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:37.017 01:45:50 -- common/autotest_common.sh@10 -- # set +x 00:24:37.017 01:45:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:37.017 01:45:50 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:24:37.017 01:45:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:37.017 01:45:50 -- common/autotest_common.sh@10 -- # set +x 00:24:37.275 01:45:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:37.275 01:45:50 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:24:37.275 01:45:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:37.275 01:45:50 -- common/autotest_common.sh@10 -- # set +x 00:24:37.275 [2024-07-23 01:45:50.181214] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:37.275 01:45:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:37.275 01:45:50 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:37.275 01:45:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:37.275 01:45:50 -- common/autotest_common.sh@10 -- # set +x 00:24:37.275 Malloc1 00:24:37.275 01:45:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:37.275 01:45:50 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:37.275 01:45:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:37.275 01:45:50 -- common/autotest_common.sh@10 -- # set +x 00:24:37.275 01:45:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:37.275 01:45:50 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:37.275 01:45:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:37.275 01:45:50 -- 
common/autotest_common.sh@10 -- # set +x 00:24:37.275 01:45:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:37.275 01:45:50 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:37.275 01:45:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:37.275 01:45:50 -- common/autotest_common.sh@10 -- # set +x 00:24:37.275 [2024-07-23 01:45:50.232399] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:37.275 01:45:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:37.275 01:45:50 -- target/perf_adq.sh@94 -- # perfpid=3847060 00:24:37.275 01:45:50 -- target/perf_adq.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:24:37.275 01:45:50 -- target/perf_adq.sh@95 -- # sleep 2 00:24:37.275 EAL: No free 2048 kB hugepages reported on node 1 00:24:39.182 01:45:52 -- target/perf_adq.sh@97 -- # rpc_cmd nvmf_get_stats 00:24:39.182 01:45:52 -- target/perf_adq.sh@97 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:24:39.182 01:45:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:39.182 01:45:52 -- common/autotest_common.sh@10 -- # set +x 00:24:39.182 01:45:52 -- target/perf_adq.sh@97 -- # wc -l 00:24:39.182 01:45:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:39.182 01:45:52 -- target/perf_adq.sh@97 -- # count=2 00:24:39.182 01:45:52 -- target/perf_adq.sh@98 -- # [[ 2 -lt 2 ]] 00:24:39.182 01:45:52 -- target/perf_adq.sh@103 -- # wait 3847060 00:24:47.346 Initializing NVMe Controllers 00:24:47.346 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:47.346 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:24:47.346 Associating TCP (addr:10.0.0.2 
subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:24:47.346 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:24:47.346 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:24:47.346 Initialization complete. Launching workers. 00:24:47.346 ======================================================== 00:24:47.346 Latency(us) 00:24:47.346 Device Information : IOPS MiB/s Average min max 00:24:47.346 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 6883.30 26.89 9328.98 1519.19 55295.13 00:24:47.346 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 6989.70 27.30 9159.10 1744.90 53425.44 00:24:47.346 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 7895.60 30.84 8105.76 1308.85 53167.20 00:24:47.346 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 7705.40 30.10 8308.89 1418.72 53071.12 00:24:47.346 ======================================================== 00:24:47.346 Total : 29474.00 115.13 8694.33 1308.85 55295.13 00:24:47.346 00:24:47.346 01:46:00 -- target/perf_adq.sh@104 -- # nvmftestfini 00:24:47.346 01:46:00 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:47.346 01:46:00 -- nvmf/common.sh@116 -- # sync 00:24:47.346 01:46:00 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:47.346 01:46:00 -- nvmf/common.sh@119 -- # set +e 00:24:47.346 01:46:00 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:47.346 01:46:00 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:47.346 rmmod nvme_tcp 00:24:47.346 rmmod nvme_fabrics 00:24:47.605 rmmod nvme_keyring 00:24:47.605 01:46:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:47.605 01:46:00 -- nvmf/common.sh@123 -- # set -e 00:24:47.605 01:46:00 -- nvmf/common.sh@124 -- # return 0 00:24:47.605 01:46:00 -- nvmf/common.sh@477 -- # '[' -n 3846910 ']' 00:24:47.605 01:46:00 -- nvmf/common.sh@478 -- # killprocess 3846910 00:24:47.605 
01:46:00 -- common/autotest_common.sh@926 -- # '[' -z 3846910 ']' 00:24:47.605 01:46:00 -- common/autotest_common.sh@930 -- # kill -0 3846910 00:24:47.605 01:46:00 -- common/autotest_common.sh@931 -- # uname 00:24:47.605 01:46:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:47.605 01:46:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3846910 00:24:47.605 01:46:00 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:47.605 01:46:00 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:47.605 01:46:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3846910' 00:24:47.605 killing process with pid 3846910 00:24:47.605 01:46:00 -- common/autotest_common.sh@945 -- # kill 3846910 00:24:47.605 01:46:00 -- common/autotest_common.sh@950 -- # wait 3846910 00:24:47.865 01:46:00 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:47.865 01:46:00 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:47.865 01:46:00 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:47.865 01:46:00 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:47.865 01:46:00 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:47.865 01:46:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:47.865 01:46:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:47.865 01:46:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:49.765 01:46:02 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:24:49.765 01:46:02 -- target/perf_adq.sh@106 -- # trap - SIGINT SIGTERM EXIT 00:24:49.765 00:24:49.765 real 0m43.576s 00:24:49.765 user 2m35.036s 00:24:49.765 sys 0m11.251s 00:24:49.765 01:46:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:49.765 01:46:02 -- common/autotest_common.sh@10 -- # set +x 00:24:49.765 ************************************ 00:24:49.765 END TEST nvmf_perf_adq 00:24:49.765 ************************************ 
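The pass/fail gate in the perf_adq run above (`rpc_cmd nvmf_get_stats | jq ... | wc -l` at perf_adq.sh@97) counts poll groups that currently have no I/O qpairs: with a 0xF target mask and a 0xF0 perf mask, ADQ steering should leave at least 2 of the 4 groups idle, so `[[ $count -lt 2 ]]` failing is the success path. A self-contained sketch of that filter against hypothetical stats output (the group names and qpair counts below are invented for illustration):

```shell
# Hypothetical nvmf_get_stats output: four poll groups, two with no active I/O qpairs.
stats='{"poll_groups":[
  {"name":"nvmf_tgt_poll_group_0","current_io_qpairs":2},
  {"name":"nvmf_tgt_poll_group_1","current_io_qpairs":2},
  {"name":"nvmf_tgt_poll_group_2","current_io_qpairs":0},
  {"name":"nvmf_tgt_poll_group_3","current_io_qpairs":0}]}'

# Same pipeline the test uses: select the idle groups, then count output lines.
count=$(echo "$stats" | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' | wc -l)
echo "$count"
```

With this input the count is 2, matching the `count=2` seen in the log; the test would only fail if fewer than 2 groups were idle, i.e. if traffic leaked onto the non-ADQ poll groups.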
00:24:49.765 01:46:02 -- nvmf/nvmf.sh@81 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:24:49.765 01:46:02 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:24:49.765 01:46:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:49.765 01:46:02 -- common/autotest_common.sh@10 -- # set +x 00:24:49.765 ************************************ 00:24:49.765 START TEST nvmf_shutdown 00:24:49.765 ************************************ 00:24:49.765 01:46:02 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:24:49.765 * Looking for test storage... 00:24:49.765 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:49.765 01:46:02 -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:49.765 01:46:02 -- nvmf/common.sh@7 -- # uname -s 00:24:49.765 01:46:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:49.765 01:46:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:49.765 01:46:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:49.765 01:46:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:49.765 01:46:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:49.765 01:46:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:49.765 01:46:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:49.765 01:46:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:49.765 01:46:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:49.765 01:46:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:49.765 01:46:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:49.765 01:46:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:49.765 01:46:02 -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:49.765 01:46:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:49.765 01:46:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:49.765 01:46:02 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:49.765 01:46:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:49.765 01:46:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:49.765 01:46:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:49.765 01:46:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.765 01:46:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.765 01:46:02 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.765 01:46:02 -- paths/export.sh@5 -- # export PATH 00:24:49.765 01:46:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.765 01:46:02 -- nvmf/common.sh@46 -- # : 0 00:24:49.765 01:46:02 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:49.765 01:46:02 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:49.765 01:46:02 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:49.765 01:46:02 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:49.765 01:46:02 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:49.765 01:46:02 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:49.765 01:46:02 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:49.765 01:46:02 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:49.765 01:46:02 -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:49.765 01:46:02 -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:49.765 01:46:02 -- target/shutdown.sh@146 
-- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:24:49.765 01:46:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:24:49.765 01:46:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:49.765 01:46:02 -- common/autotest_common.sh@10 -- # set +x 00:24:50.025 ************************************ 00:24:50.025 START TEST nvmf_shutdown_tc1 00:24:50.025 ************************************ 00:24:50.025 01:46:02 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc1 00:24:50.025 01:46:02 -- target/shutdown.sh@74 -- # starttarget 00:24:50.025 01:46:02 -- target/shutdown.sh@15 -- # nvmftestinit 00:24:50.025 01:46:02 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:50.025 01:46:02 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:50.025 01:46:02 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:50.025 01:46:02 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:50.025 01:46:02 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:50.025 01:46:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:50.025 01:46:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:50.025 01:46:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:50.025 01:46:02 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:50.025 01:46:02 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:50.025 01:46:02 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:50.025 01:46:02 -- common/autotest_common.sh@10 -- # set +x 00:24:51.930 01:46:04 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:51.930 01:46:04 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:51.930 01:46:04 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:51.930 01:46:04 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:51.930 01:46:04 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:51.930 01:46:04 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:51.930 01:46:04 -- nvmf/common.sh@292 -- # local -A pci_drivers 
00:24:51.930 01:46:04 -- nvmf/common.sh@294 -- # net_devs=() 00:24:51.930 01:46:04 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:51.930 01:46:04 -- nvmf/common.sh@295 -- # e810=() 00:24:51.930 01:46:04 -- nvmf/common.sh@295 -- # local -ga e810 00:24:51.930 01:46:04 -- nvmf/common.sh@296 -- # x722=() 00:24:51.930 01:46:04 -- nvmf/common.sh@296 -- # local -ga x722 00:24:51.930 01:46:04 -- nvmf/common.sh@297 -- # mlx=() 00:24:51.930 01:46:04 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:51.930 01:46:04 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:51.930 01:46:04 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:51.930 01:46:04 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:51.930 01:46:04 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:51.930 01:46:04 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:51.931 01:46:04 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:51.931 01:46:04 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:51.931 01:46:04 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:51.931 01:46:04 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:51.931 01:46:04 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:51.931 01:46:04 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:51.931 01:46:04 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:51.931 01:46:04 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:24:51.931 01:46:04 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:24:51.931 01:46:04 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:24:51.931 01:46:04 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:24:51.931 01:46:04 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:51.931 01:46:04 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 
00:24:51.931 01:46:04 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:51.931 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:51.931 01:46:04 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:51.931 01:46:04 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:51.931 01:46:04 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:51.931 01:46:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:51.931 01:46:04 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:51.931 01:46:04 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:51.931 01:46:04 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:51.931 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:51.931 01:46:04 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:51.931 01:46:04 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:51.931 01:46:04 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:51.931 01:46:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:51.931 01:46:04 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:51.931 01:46:04 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:51.931 01:46:04 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:24:51.931 01:46:04 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:24:51.931 01:46:04 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:51.931 01:46:04 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:51.931 01:46:04 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:51.931 01:46:04 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:51.931 01:46:04 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:51.931 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:51.931 01:46:04 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:51.931 01:46:04 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:51.931 01:46:04 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:51.931 01:46:04 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:51.931 01:46:04 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:51.931 01:46:04 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:51.931 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:51.931 01:46:04 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:51.931 01:46:04 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:51.931 01:46:04 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:51.931 01:46:04 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:51.931 01:46:04 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:24:51.931 01:46:04 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:24:51.931 01:46:04 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:51.931 01:46:04 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:51.931 01:46:04 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:51.931 01:46:04 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:24:51.931 01:46:04 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:51.931 01:46:04 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:51.931 01:46:04 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:24:51.931 01:46:04 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:51.931 01:46:04 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:51.931 01:46:04 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:24:51.931 01:46:04 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:24:51.931 01:46:04 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:24:51.931 01:46:04 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:51.931 01:46:04 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:51.931 01:46:04 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:24:51.931 01:46:04 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:24:51.931 01:46:04 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:51.931 01:46:04 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:51.931 01:46:04 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:51.931 01:46:04 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:24:51.931 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:51.931 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.252 ms 00:24:51.931 00:24:51.931 --- 10.0.0.2 ping statistics --- 00:24:51.931 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:51.931 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:24:51.931 01:46:04 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:51.931 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:51.931 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.178 ms 00:24:51.931 00:24:51.931 --- 10.0.0.1 ping statistics --- 00:24:51.931 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:51.931 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:24:51.931 01:46:04 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:51.931 01:46:04 -- nvmf/common.sh@410 -- # return 0 00:24:51.931 01:46:04 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:51.931 01:46:04 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:51.931 01:46:04 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:51.931 01:46:04 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:51.931 01:46:04 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:51.931 01:46:04 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:51.931 01:46:04 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:51.931 01:46:04 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:24:51.931 01:46:04 -- nvmf/common.sh@467 -- # 
timing_enter start_nvmf_tgt 00:24:51.931 01:46:04 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:51.931 01:46:04 -- common/autotest_common.sh@10 -- # set +x 00:24:51.931 01:46:04 -- nvmf/common.sh@469 -- # nvmfpid=3850265 00:24:51.931 01:46:04 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:51.931 01:46:04 -- nvmf/common.sh@470 -- # waitforlisten 3850265 00:24:51.931 01:46:04 -- common/autotest_common.sh@819 -- # '[' -z 3850265 ']' 00:24:51.931 01:46:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:51.931 01:46:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:51.931 01:46:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:51.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:51.931 01:46:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:51.931 01:46:04 -- common/autotest_common.sh@10 -- # set +x 00:24:51.931 [2024-07-23 01:46:04.996218] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:24:51.931 [2024-07-23 01:46:04.996306] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:52.190 EAL: No free 2048 kB hugepages reported on node 1 00:24:52.190 [2024-07-23 01:46:05.062545] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:52.190 [2024-07-23 01:46:05.151268] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:52.190 [2024-07-23 01:46:05.151440] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:24:52.190 [2024-07-23 01:46:05.151457] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:52.190 [2024-07-23 01:46:05.151469] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:52.190 [2024-07-23 01:46:05.151757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:52.190 [2024-07-23 01:46:05.151821] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:52.190 [2024-07-23 01:46:05.151888] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:24:52.190 [2024-07-23 01:46:05.151890] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:53.126 01:46:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:53.126 01:46:05 -- common/autotest_common.sh@852 -- # return 0 00:24:53.126 01:46:05 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:53.126 01:46:05 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:53.126 01:46:05 -- common/autotest_common.sh@10 -- # set +x 00:24:53.126 01:46:05 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:53.126 01:46:05 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:53.126 01:46:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:53.126 01:46:05 -- common/autotest_common.sh@10 -- # set +x 00:24:53.126 [2024-07-23 01:46:05.974196] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:53.126 01:46:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:53.126 01:46:05 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:24:53.126 01:46:05 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:24:53.126 01:46:05 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:53.126 01:46:05 -- common/autotest_common.sh@10 -- # set +x 00:24:53.126 01:46:05 -- target/shutdown.sh@26 -- # rm -rf 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:53.126 01:46:05 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:53.126 01:46:05 -- target/shutdown.sh@28 -- # cat 00:24:53.126 01:46:05 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:53.126 01:46:05 -- target/shutdown.sh@28 -- # cat 00:24:53.126 01:46:05 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:53.126 01:46:05 -- target/shutdown.sh@28 -- # cat 00:24:53.126 01:46:05 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:53.126 01:46:05 -- target/shutdown.sh@28 -- # cat 00:24:53.126 01:46:05 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:53.126 01:46:05 -- target/shutdown.sh@28 -- # cat 00:24:53.126 01:46:05 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:53.126 01:46:05 -- target/shutdown.sh@28 -- # cat 00:24:53.126 01:46:05 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:53.126 01:46:05 -- target/shutdown.sh@28 -- # cat 00:24:53.126 01:46:05 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:53.126 01:46:05 -- target/shutdown.sh@28 -- # cat 00:24:53.126 01:46:06 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:53.126 01:46:06 -- target/shutdown.sh@28 -- # cat 00:24:53.126 01:46:06 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:53.126 01:46:06 -- target/shutdown.sh@28 -- # cat 00:24:53.126 01:46:06 -- target/shutdown.sh@35 -- # rpc_cmd 00:24:53.126 01:46:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:53.126 01:46:06 -- common/autotest_common.sh@10 -- # set +x 00:24:53.126 Malloc1 00:24:53.126 [2024-07-23 01:46:06.049805] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:53.126 Malloc2 00:24:53.126 Malloc3 00:24:53.126 Malloc4 00:24:53.126 Malloc5 00:24:53.386 Malloc6 00:24:53.386 Malloc7 00:24:53.386 Malloc8 00:24:53.386 
Malloc9 00:24:53.386 Malloc10 00:24:53.646 01:46:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:53.646 01:46:06 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:24:53.646 01:46:06 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:53.646 01:46:06 -- common/autotest_common.sh@10 -- # set +x 00:24:53.646 01:46:06 -- target/shutdown.sh@78 -- # perfpid=3850455 00:24:53.646 01:46:06 -- target/shutdown.sh@79 -- # waitforlisten 3850455 /var/tmp/bdevperf.sock 00:24:53.646 01:46:06 -- common/autotest_common.sh@819 -- # '[' -z 3850455 ']' 00:24:53.646 01:46:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:53.646 01:46:06 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:24:53.646 01:46:06 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:53.646 01:46:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:53.646 01:46:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:53.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:24:53.646 01:46:06 -- nvmf/common.sh@520 -- # config=() 00:24:53.646 01:46:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:53.646 01:46:06 -- nvmf/common.sh@520 -- # local subsystem config 00:24:53.646 01:46:06 -- common/autotest_common.sh@10 -- # set +x 00:24:53.646 01:46:06 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:53.646 01:46:06 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:53.646 { 00:24:53.646 "params": { 00:24:53.646 "name": "Nvme$subsystem", 00:24:53.646 "trtype": "$TEST_TRANSPORT", 00:24:53.646 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:53.646 "adrfam": "ipv4", 00:24:53.646 "trsvcid": "$NVMF_PORT", 00:24:53.646 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:53.646 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:53.646 "hdgst": ${hdgst:-false}, 00:24:53.646 "ddgst": ${ddgst:-false} 00:24:53.646 }, 00:24:53.646 "method": "bdev_nvme_attach_controller" 00:24:53.646 } 00:24:53.646 EOF 00:24:53.646 )") 00:24:53.646 01:46:06 -- nvmf/common.sh@542 -- # cat 00:24:53.646 01:46:06 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:53.646 01:46:06 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:53.646 { 00:24:53.646 "params": { 00:24:53.646 "name": "Nvme$subsystem", 00:24:53.646 "trtype": "$TEST_TRANSPORT", 00:24:53.646 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:53.646 "adrfam": "ipv4", 00:24:53.646 "trsvcid": "$NVMF_PORT", 00:24:53.646 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:53.646 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:53.646 "hdgst": ${hdgst:-false}, 00:24:53.646 "ddgst": ${ddgst:-false} 00:24:53.646 }, 00:24:53.646 "method": "bdev_nvme_attach_controller" 00:24:53.646 } 00:24:53.646 EOF 00:24:53.646 )") 00:24:53.646 01:46:06 -- nvmf/common.sh@542 -- # cat 00:24:53.646 01:46:06 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:53.646 01:46:06 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:53.646 { 00:24:53.646 "params": { 00:24:53.646 "name": 
"Nvme$subsystem", 00:24:53.646 "trtype": "$TEST_TRANSPORT", 00:24:53.646 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:53.646 "adrfam": "ipv4", 00:24:53.646 "trsvcid": "$NVMF_PORT", 00:24:53.646 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:53.646 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:53.646 "hdgst": ${hdgst:-false}, 00:24:53.646 "ddgst": ${ddgst:-false} 00:24:53.646 }, 00:24:53.646 "method": "bdev_nvme_attach_controller" 00:24:53.646 } 00:24:53.646 EOF 00:24:53.646 )") 00:24:53.646 01:46:06 -- nvmf/common.sh@542 -- # cat 00:24:53.646 01:46:06 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:53.646 01:46:06 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:53.646 { 00:24:53.646 "params": { 00:24:53.646 "name": "Nvme$subsystem", 00:24:53.646 "trtype": "$TEST_TRANSPORT", 00:24:53.646 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:53.646 "adrfam": "ipv4", 00:24:53.646 "trsvcid": "$NVMF_PORT", 00:24:53.646 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:53.646 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:53.646 "hdgst": ${hdgst:-false}, 00:24:53.646 "ddgst": ${ddgst:-false} 00:24:53.646 }, 00:24:53.646 "method": "bdev_nvme_attach_controller" 00:24:53.646 } 00:24:53.646 EOF 00:24:53.646 )") 00:24:53.646 01:46:06 -- nvmf/common.sh@542 -- # cat 00:24:53.646 01:46:06 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:53.646 01:46:06 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:53.646 { 00:24:53.646 "params": { 00:24:53.646 "name": "Nvme$subsystem", 00:24:53.646 "trtype": "$TEST_TRANSPORT", 00:24:53.646 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:53.646 "adrfam": "ipv4", 00:24:53.646 "trsvcid": "$NVMF_PORT", 00:24:53.646 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:53.646 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:53.646 "hdgst": ${hdgst:-false}, 00:24:53.646 "ddgst": ${ddgst:-false} 00:24:53.646 }, 00:24:53.646 "method": "bdev_nvme_attach_controller" 00:24:53.646 } 00:24:53.646 EOF 
00:24:53.646 )") 00:24:53.646 01:46:06 -- nvmf/common.sh@542 -- # cat 00:24:53.646 01:46:06 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:53.646 01:46:06 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:53.646 { 00:24:53.646 "params": { 00:24:53.646 "name": "Nvme$subsystem", 00:24:53.646 "trtype": "$TEST_TRANSPORT", 00:24:53.646 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:53.646 "adrfam": "ipv4", 00:24:53.646 "trsvcid": "$NVMF_PORT", 00:24:53.646 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:53.646 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:53.646 "hdgst": ${hdgst:-false}, 00:24:53.646 "ddgst": ${ddgst:-false} 00:24:53.646 }, 00:24:53.646 "method": "bdev_nvme_attach_controller" 00:24:53.646 } 00:24:53.646 EOF 00:24:53.646 )") 00:24:53.646 01:46:06 -- nvmf/common.sh@542 -- # cat 00:24:53.646 01:46:06 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:53.646 01:46:06 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:53.646 { 00:24:53.646 "params": { 00:24:53.646 "name": "Nvme$subsystem", 00:24:53.646 "trtype": "$TEST_TRANSPORT", 00:24:53.646 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:53.646 "adrfam": "ipv4", 00:24:53.646 "trsvcid": "$NVMF_PORT", 00:24:53.646 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:53.646 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:53.646 "hdgst": ${hdgst:-false}, 00:24:53.646 "ddgst": ${ddgst:-false} 00:24:53.646 }, 00:24:53.646 "method": "bdev_nvme_attach_controller" 00:24:53.646 } 00:24:53.646 EOF 00:24:53.646 )") 00:24:53.646 01:46:06 -- nvmf/common.sh@542 -- # cat 00:24:53.646 01:46:06 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:53.646 01:46:06 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:53.646 { 00:24:53.646 "params": { 00:24:53.646 "name": "Nvme$subsystem", 00:24:53.646 "trtype": "$TEST_TRANSPORT", 00:24:53.646 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:53.646 "adrfam": "ipv4", 00:24:53.646 "trsvcid": "$NVMF_PORT", 00:24:53.646 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:24:53.646 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:53.646 "hdgst": ${hdgst:-false}, 00:24:53.646 "ddgst": ${ddgst:-false} 00:24:53.646 }, 00:24:53.646 "method": "bdev_nvme_attach_controller" 00:24:53.646 } 00:24:53.646 EOF 00:24:53.646 )") 00:24:53.646 01:46:06 -- nvmf/common.sh@542 -- # cat 00:24:53.647 01:46:06 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:53.647 01:46:06 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:53.647 { 00:24:53.647 "params": { 00:24:53.647 "name": "Nvme$subsystem", 00:24:53.647 "trtype": "$TEST_TRANSPORT", 00:24:53.647 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:53.647 "adrfam": "ipv4", 00:24:53.647 "trsvcid": "$NVMF_PORT", 00:24:53.647 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:53.647 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:53.647 "hdgst": ${hdgst:-false}, 00:24:53.647 "ddgst": ${ddgst:-false} 00:24:53.647 }, 00:24:53.647 "method": "bdev_nvme_attach_controller" 00:24:53.647 } 00:24:53.647 EOF 00:24:53.647 )") 00:24:53.647 01:46:06 -- nvmf/common.sh@542 -- # cat 00:24:53.647 01:46:06 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:53.647 01:46:06 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:53.647 { 00:24:53.647 "params": { 00:24:53.647 "name": "Nvme$subsystem", 00:24:53.647 "trtype": "$TEST_TRANSPORT", 00:24:53.647 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:53.647 "adrfam": "ipv4", 00:24:53.647 "trsvcid": "$NVMF_PORT", 00:24:53.647 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:53.647 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:53.647 "hdgst": ${hdgst:-false}, 00:24:53.647 "ddgst": ${ddgst:-false} 00:24:53.647 }, 00:24:53.647 "method": "bdev_nvme_attach_controller" 00:24:53.647 } 00:24:53.647 EOF 00:24:53.647 )") 00:24:53.647 01:46:06 -- nvmf/common.sh@542 -- # cat 00:24:53.647 01:46:06 -- nvmf/common.sh@544 -- # jq . 
00:24:53.647 01:46:06 -- nvmf/common.sh@545 -- # IFS=, 00:24:53.647 01:46:06 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:24:53.647 "params": { 00:24:53.647 "name": "Nvme1", 00:24:53.647 "trtype": "tcp", 00:24:53.647 "traddr": "10.0.0.2", 00:24:53.647 "adrfam": "ipv4", 00:24:53.647 "trsvcid": "4420", 00:24:53.647 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:53.647 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:53.647 "hdgst": false, 00:24:53.647 "ddgst": false 00:24:53.647 }, 00:24:53.647 "method": "bdev_nvme_attach_controller" 00:24:53.647 },{ 00:24:53.647 "params": { 00:24:53.647 "name": "Nvme2", 00:24:53.647 "trtype": "tcp", 00:24:53.647 "traddr": "10.0.0.2", 00:24:53.647 "adrfam": "ipv4", 00:24:53.647 "trsvcid": "4420", 00:24:53.647 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:53.647 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:53.647 "hdgst": false, 00:24:53.647 "ddgst": false 00:24:53.647 }, 00:24:53.647 "method": "bdev_nvme_attach_controller" 00:24:53.647 },{ 00:24:53.647 "params": { 00:24:53.647 "name": "Nvme3", 00:24:53.647 "trtype": "tcp", 00:24:53.647 "traddr": "10.0.0.2", 00:24:53.647 "adrfam": "ipv4", 00:24:53.647 "trsvcid": "4420", 00:24:53.647 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:53.647 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:53.647 "hdgst": false, 00:24:53.647 "ddgst": false 00:24:53.647 }, 00:24:53.647 "method": "bdev_nvme_attach_controller" 00:24:53.647 },{ 00:24:53.647 "params": { 00:24:53.647 "name": "Nvme4", 00:24:53.647 "trtype": "tcp", 00:24:53.647 "traddr": "10.0.0.2", 00:24:53.647 "adrfam": "ipv4", 00:24:53.647 "trsvcid": "4420", 00:24:53.647 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:53.647 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:53.647 "hdgst": false, 00:24:53.647 "ddgst": false 00:24:53.647 }, 00:24:53.647 "method": "bdev_nvme_attach_controller" 00:24:53.647 },{ 00:24:53.647 "params": { 00:24:53.647 "name": "Nvme5", 00:24:53.647 "trtype": "tcp", 00:24:53.647 "traddr": "10.0.0.2", 00:24:53.647 "adrfam": "ipv4", 
00:24:53.647 "trsvcid": "4420", 00:24:53.647 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:53.647 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:53.647 "hdgst": false, 00:24:53.647 "ddgst": false 00:24:53.647 }, 00:24:53.647 "method": "bdev_nvme_attach_controller" 00:24:53.647 },{ 00:24:53.647 "params": { 00:24:53.647 "name": "Nvme6", 00:24:53.647 "trtype": "tcp", 00:24:53.647 "traddr": "10.0.0.2", 00:24:53.647 "adrfam": "ipv4", 00:24:53.647 "trsvcid": "4420", 00:24:53.647 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:53.647 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:53.647 "hdgst": false, 00:24:53.647 "ddgst": false 00:24:53.647 }, 00:24:53.647 "method": "bdev_nvme_attach_controller" 00:24:53.647 },{ 00:24:53.647 "params": { 00:24:53.647 "name": "Nvme7", 00:24:53.647 "trtype": "tcp", 00:24:53.647 "traddr": "10.0.0.2", 00:24:53.647 "adrfam": "ipv4", 00:24:53.647 "trsvcid": "4420", 00:24:53.647 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:53.647 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:53.647 "hdgst": false, 00:24:53.647 "ddgst": false 00:24:53.647 }, 00:24:53.647 "method": "bdev_nvme_attach_controller" 00:24:53.647 },{ 00:24:53.647 "params": { 00:24:53.647 "name": "Nvme8", 00:24:53.647 "trtype": "tcp", 00:24:53.647 "traddr": "10.0.0.2", 00:24:53.647 "adrfam": "ipv4", 00:24:53.647 "trsvcid": "4420", 00:24:53.647 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:53.647 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:53.647 "hdgst": false, 00:24:53.647 "ddgst": false 00:24:53.647 }, 00:24:53.647 "method": "bdev_nvme_attach_controller" 00:24:53.647 },{ 00:24:53.647 "params": { 00:24:53.647 "name": "Nvme9", 00:24:53.647 "trtype": "tcp", 00:24:53.647 "traddr": "10.0.0.2", 00:24:53.647 "adrfam": "ipv4", 00:24:53.647 "trsvcid": "4420", 00:24:53.647 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:53.647 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:53.647 "hdgst": false, 00:24:53.647 "ddgst": false 00:24:53.647 }, 00:24:53.647 "method": "bdev_nvme_attach_controller" 
00:24:53.647 },{ 00:24:53.647 "params": { 00:24:53.647 "name": "Nvme10", 00:24:53.647 "trtype": "tcp", 00:24:53.647 "traddr": "10.0.0.2", 00:24:53.647 "adrfam": "ipv4", 00:24:53.647 "trsvcid": "4420", 00:24:53.647 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:53.647 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:53.647 "hdgst": false, 00:24:53.647 "ddgst": false 00:24:53.647 }, 00:24:53.647 "method": "bdev_nvme_attach_controller" 00:24:53.647 }' 00:24:53.647 [2024-07-23 01:46:06.568305] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:24:53.647 [2024-07-23 01:46:06.568376] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:24:53.647 EAL: No free 2048 kB hugepages reported on node 1 00:24:53.647 [2024-07-23 01:46:06.631522] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:53.647 [2024-07-23 01:46:06.716240] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:55.547 01:46:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:55.547 01:46:08 -- common/autotest_common.sh@852 -- # return 0 00:24:55.547 01:46:08 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:55.547 01:46:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:55.547 01:46:08 -- common/autotest_common.sh@10 -- # set +x 00:24:55.547 01:46:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:55.547 01:46:08 -- target/shutdown.sh@83 -- # kill -9 3850455 00:24:55.547 01:46:08 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:24:55.547 01:46:08 -- target/shutdown.sh@87 -- # sleep 1 00:24:56.483 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 3850455 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json 
"${num_subsystems[@]}") 00:24:56.483 01:46:09 -- target/shutdown.sh@88 -- # kill -0 3850265 00:24:56.483 01:46:09 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:24:56.483 01:46:09 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:56.483 01:46:09 -- nvmf/common.sh@520 -- # config=() 00:24:56.483 01:46:09 -- nvmf/common.sh@520 -- # local subsystem config 00:24:56.483 01:46:09 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:56.483 01:46:09 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:56.483 { 00:24:56.483 "params": { 00:24:56.483 "name": "Nvme$subsystem", 00:24:56.483 "trtype": "$TEST_TRANSPORT", 00:24:56.483 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:56.483 "adrfam": "ipv4", 00:24:56.483 "trsvcid": "$NVMF_PORT", 00:24:56.483 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:56.483 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:56.483 "hdgst": ${hdgst:-false}, 00:24:56.483 "ddgst": ${ddgst:-false} 00:24:56.483 }, 00:24:56.483 "method": "bdev_nvme_attach_controller" 00:24:56.483 } 00:24:56.483 EOF 00:24:56.483 )") 00:24:56.483 01:46:09 -- nvmf/common.sh@542 -- # cat 00:24:56.483 01:46:09 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:56.483 01:46:09 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:56.483 { 00:24:56.483 "params": { 00:24:56.483 "name": "Nvme$subsystem", 00:24:56.483 "trtype": "$TEST_TRANSPORT", 00:24:56.483 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:56.483 "adrfam": "ipv4", 00:24:56.483 "trsvcid": "$NVMF_PORT", 00:24:56.483 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:56.483 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:56.483 "hdgst": ${hdgst:-false}, 00:24:56.483 "ddgst": ${ddgst:-false} 00:24:56.483 }, 00:24:56.483 "method": "bdev_nvme_attach_controller" 00:24:56.483 } 00:24:56.483 EOF 00:24:56.483 )") 00:24:56.483 
01:46:09 -- nvmf/common.sh@542 -- # cat 00:24:56.483 01:46:09 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:56.483 01:46:09 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:56.483 { 00:24:56.483 "params": { 00:24:56.483 "name": "Nvme$subsystem", 00:24:56.483 "trtype": "$TEST_TRANSPORT", 00:24:56.483 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:56.483 "adrfam": "ipv4", 00:24:56.483 "trsvcid": "$NVMF_PORT", 00:24:56.483 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:56.483 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:56.483 "hdgst": ${hdgst:-false}, 00:24:56.483 "ddgst": ${ddgst:-false} 00:24:56.483 }, 00:24:56.483 "method": "bdev_nvme_attach_controller" 00:24:56.483 } 00:24:56.483 EOF 00:24:56.483 )") 00:24:56.483 01:46:09 -- nvmf/common.sh@542 -- # cat 00:24:56.483 01:46:09 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:56.483 01:46:09 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:56.483 { 00:24:56.483 "params": { 00:24:56.483 "name": "Nvme$subsystem", 00:24:56.483 "trtype": "$TEST_TRANSPORT", 00:24:56.483 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:56.483 "adrfam": "ipv4", 00:24:56.483 "trsvcid": "$NVMF_PORT", 00:24:56.483 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:56.483 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:56.483 "hdgst": ${hdgst:-false}, 00:24:56.483 "ddgst": ${ddgst:-false} 00:24:56.483 }, 00:24:56.483 "method": "bdev_nvme_attach_controller" 00:24:56.483 } 00:24:56.483 EOF 00:24:56.483 )") 00:24:56.483 01:46:09 -- nvmf/common.sh@542 -- # cat 00:24:56.483 01:46:09 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:56.483 01:46:09 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:56.483 { 00:24:56.483 "params": { 00:24:56.483 "name": "Nvme$subsystem", 00:24:56.483 "trtype": "$TEST_TRANSPORT", 00:24:56.483 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:56.483 "adrfam": "ipv4", 00:24:56.483 "trsvcid": "$NVMF_PORT", 00:24:56.483 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:24:56.483 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:56.483 "hdgst": ${hdgst:-false}, 00:24:56.483 "ddgst": ${ddgst:-false} 00:24:56.483 }, 00:24:56.483 "method": "bdev_nvme_attach_controller" 00:24:56.483 } 00:24:56.483 EOF 00:24:56.483 )") 00:24:56.483 01:46:09 -- nvmf/common.sh@542 -- # cat 00:24:56.483 01:46:09 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:56.483 01:46:09 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:56.483 { 00:24:56.483 "params": { 00:24:56.483 "name": "Nvme$subsystem", 00:24:56.483 "trtype": "$TEST_TRANSPORT", 00:24:56.483 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:56.483 "adrfam": "ipv4", 00:24:56.483 "trsvcid": "$NVMF_PORT", 00:24:56.483 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:56.483 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:56.483 "hdgst": ${hdgst:-false}, 00:24:56.483 "ddgst": ${ddgst:-false} 00:24:56.483 }, 00:24:56.483 "method": "bdev_nvme_attach_controller" 00:24:56.483 } 00:24:56.483 EOF 00:24:56.483 )") 00:24:56.483 01:46:09 -- nvmf/common.sh@542 -- # cat 00:24:56.483 01:46:09 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:56.483 01:46:09 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:56.483 { 00:24:56.483 "params": { 00:24:56.483 "name": "Nvme$subsystem", 00:24:56.483 "trtype": "$TEST_TRANSPORT", 00:24:56.483 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:56.483 "adrfam": "ipv4", 00:24:56.483 "trsvcid": "$NVMF_PORT", 00:24:56.483 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:56.483 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:56.483 "hdgst": ${hdgst:-false}, 00:24:56.483 "ddgst": ${ddgst:-false} 00:24:56.483 }, 00:24:56.483 "method": "bdev_nvme_attach_controller" 00:24:56.483 } 00:24:56.483 EOF 00:24:56.483 )") 00:24:56.483 01:46:09 -- nvmf/common.sh@542 -- # cat 00:24:56.483 01:46:09 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:56.483 01:46:09 -- nvmf/common.sh@542 -- # 
config+=("$(cat <<-EOF 00:24:56.483 { 00:24:56.483 "params": { 00:24:56.483 "name": "Nvme$subsystem", 00:24:56.483 "trtype": "$TEST_TRANSPORT", 00:24:56.483 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:56.483 "adrfam": "ipv4", 00:24:56.483 "trsvcid": "$NVMF_PORT", 00:24:56.483 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:56.483 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:56.483 "hdgst": ${hdgst:-false}, 00:24:56.483 "ddgst": ${ddgst:-false} 00:24:56.483 }, 00:24:56.484 "method": "bdev_nvme_attach_controller" 00:24:56.484 } 00:24:56.484 EOF 00:24:56.484 )") 00:24:56.484 01:46:09 -- nvmf/common.sh@542 -- # cat 00:24:56.484 01:46:09 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:56.484 01:46:09 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:56.484 { 00:24:56.484 "params": { 00:24:56.484 "name": "Nvme$subsystem", 00:24:56.484 "trtype": "$TEST_TRANSPORT", 00:24:56.484 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:56.484 "adrfam": "ipv4", 00:24:56.484 "trsvcid": "$NVMF_PORT", 00:24:56.484 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:56.484 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:56.484 "hdgst": ${hdgst:-false}, 00:24:56.484 "ddgst": ${ddgst:-false} 00:24:56.484 }, 00:24:56.484 "method": "bdev_nvme_attach_controller" 00:24:56.484 } 00:24:56.484 EOF 00:24:56.484 )") 00:24:56.484 01:46:09 -- nvmf/common.sh@542 -- # cat 00:24:56.484 01:46:09 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:56.484 01:46:09 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:56.484 { 00:24:56.484 "params": { 00:24:56.484 "name": "Nvme$subsystem", 00:24:56.484 "trtype": "$TEST_TRANSPORT", 00:24:56.484 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:56.484 "adrfam": "ipv4", 00:24:56.484 "trsvcid": "$NVMF_PORT", 00:24:56.484 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:56.484 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:56.484 "hdgst": ${hdgst:-false}, 00:24:56.484 "ddgst": ${ddgst:-false} 00:24:56.484 }, 
00:24:56.484 "method": "bdev_nvme_attach_controller" 00:24:56.484 } 00:24:56.484 EOF 00:24:56.484 )") 00:24:56.484 01:46:09 -- nvmf/common.sh@542 -- # cat 00:24:56.484 01:46:09 -- nvmf/common.sh@544 -- # jq . 00:24:56.484 01:46:09 -- nvmf/common.sh@545 -- # IFS=, 00:24:56.484 01:46:09 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:24:56.484 "params": { 00:24:56.484 "name": "Nvme1", 00:24:56.484 "trtype": "tcp", 00:24:56.484 "traddr": "10.0.0.2", 00:24:56.484 "adrfam": "ipv4", 00:24:56.484 "trsvcid": "4420", 00:24:56.484 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:56.484 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:56.484 "hdgst": false, 00:24:56.484 "ddgst": false 00:24:56.484 }, 00:24:56.484 "method": "bdev_nvme_attach_controller" 00:24:56.484 },{ 00:24:56.484 "params": { 00:24:56.484 "name": "Nvme2", 00:24:56.484 "trtype": "tcp", 00:24:56.484 "traddr": "10.0.0.2", 00:24:56.484 "adrfam": "ipv4", 00:24:56.484 "trsvcid": "4420", 00:24:56.484 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:56.484 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:56.484 "hdgst": false, 00:24:56.484 "ddgst": false 00:24:56.484 }, 00:24:56.484 "method": "bdev_nvme_attach_controller" 00:24:56.484 },{ 00:24:56.484 "params": { 00:24:56.484 "name": "Nvme3", 00:24:56.484 "trtype": "tcp", 00:24:56.484 "traddr": "10.0.0.2", 00:24:56.484 "adrfam": "ipv4", 00:24:56.484 "trsvcid": "4420", 00:24:56.484 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:56.484 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:56.484 "hdgst": false, 00:24:56.484 "ddgst": false 00:24:56.484 }, 00:24:56.484 "method": "bdev_nvme_attach_controller" 00:24:56.484 },{ 00:24:56.484 "params": { 00:24:56.484 "name": "Nvme4", 00:24:56.484 "trtype": "tcp", 00:24:56.484 "traddr": "10.0.0.2", 00:24:56.484 "adrfam": "ipv4", 00:24:56.484 "trsvcid": "4420", 00:24:56.484 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:56.484 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:56.484 "hdgst": false, 00:24:56.484 "ddgst": false 00:24:56.484 }, 
00:24:56.484 "method": "bdev_nvme_attach_controller" 00:24:56.484 },{ 00:24:56.484 "params": { 00:24:56.484 "name": "Nvme5", 00:24:56.484 "trtype": "tcp", 00:24:56.484 "traddr": "10.0.0.2", 00:24:56.484 "adrfam": "ipv4", 00:24:56.484 "trsvcid": "4420", 00:24:56.484 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:56.484 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:56.484 "hdgst": false, 00:24:56.484 "ddgst": false 00:24:56.484 }, 00:24:56.484 "method": "bdev_nvme_attach_controller" 00:24:56.484 },{ 00:24:56.484 "params": { 00:24:56.484 "name": "Nvme6", 00:24:56.484 "trtype": "tcp", 00:24:56.484 "traddr": "10.0.0.2", 00:24:56.484 "adrfam": "ipv4", 00:24:56.484 "trsvcid": "4420", 00:24:56.484 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:56.484 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:56.484 "hdgst": false, 00:24:56.484 "ddgst": false 00:24:56.484 }, 00:24:56.484 "method": "bdev_nvme_attach_controller" 00:24:56.484 },{ 00:24:56.484 "params": { 00:24:56.484 "name": "Nvme7", 00:24:56.484 "trtype": "tcp", 00:24:56.484 "traddr": "10.0.0.2", 00:24:56.484 "adrfam": "ipv4", 00:24:56.484 "trsvcid": "4420", 00:24:56.484 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:56.484 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:56.484 "hdgst": false, 00:24:56.484 "ddgst": false 00:24:56.484 }, 00:24:56.484 "method": "bdev_nvme_attach_controller" 00:24:56.484 },{ 00:24:56.484 "params": { 00:24:56.484 "name": "Nvme8", 00:24:56.484 "trtype": "tcp", 00:24:56.484 "traddr": "10.0.0.2", 00:24:56.484 "adrfam": "ipv4", 00:24:56.484 "trsvcid": "4420", 00:24:56.484 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:56.484 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:56.484 "hdgst": false, 00:24:56.484 "ddgst": false 00:24:56.484 }, 00:24:56.484 "method": "bdev_nvme_attach_controller" 00:24:56.484 },{ 00:24:56.484 "params": { 00:24:56.484 "name": "Nvme9", 00:24:56.484 "trtype": "tcp", 00:24:56.484 "traddr": "10.0.0.2", 00:24:56.484 "adrfam": "ipv4", 00:24:56.484 "trsvcid": "4420", 00:24:56.484 
"subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:56.484 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:56.484 "hdgst": false, 00:24:56.484 "ddgst": false 00:24:56.484 }, 00:24:56.484 "method": "bdev_nvme_attach_controller" 00:24:56.484 },{ 00:24:56.484 "params": { 00:24:56.484 "name": "Nvme10", 00:24:56.484 "trtype": "tcp", 00:24:56.484 "traddr": "10.0.0.2", 00:24:56.484 "adrfam": "ipv4", 00:24:56.484 "trsvcid": "4420", 00:24:56.484 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:56.484 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:56.484 "hdgst": false, 00:24:56.484 "ddgst": false 00:24:56.484 }, 00:24:56.484 "method": "bdev_nvme_attach_controller" 00:24:56.484 }' 00:24:56.484 [2024-07-23 01:46:09.274547] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:24:56.484 [2024-07-23 01:46:09.274659] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3850852 ] 00:24:56.484 EAL: No free 2048 kB hugepages reported on node 1 00:24:56.484 [2024-07-23 01:46:09.341260] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:56.484 [2024-07-23 01:46:09.425824] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:57.876 Running I/O for 1 seconds... 
00:24:59.253 00:24:59.253 Latency(us) 00:24:59.253 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:59.253 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:59.253 Verification LBA range: start 0x0 length 0x400 00:24:59.253 Nvme1n1 : 1.14 384.94 24.06 0.00 0.00 157890.78 19806.44 146800.64 00:24:59.253 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:59.253 Verification LBA range: start 0x0 length 0x400 00:24:59.253 Nvme2n1 : 1.09 337.76 21.11 0.00 0.00 179249.40 17087.91 160004.93 00:24:59.253 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:59.253 Verification LBA range: start 0x0 length 0x400 00:24:59.253 Nvme3n1 : 1.12 390.65 24.42 0.00 0.00 153709.43 19418.07 132042.90 00:24:59.253 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:59.253 Verification LBA range: start 0x0 length 0x400 00:24:59.253 Nvme4n1 : 1.10 440.28 27.52 0.00 0.00 139921.71 9709.04 120392.06 00:24:59.253 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:59.253 Verification LBA range: start 0x0 length 0x400 00:24:59.253 Nvme5n1 : 1.15 420.07 26.25 0.00 0.00 141422.78 17087.91 112624.83 00:24:59.253 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:59.253 Verification LBA range: start 0x0 length 0x400 00:24:59.253 Nvme6n1 : 1.08 327.31 20.46 0.00 0.00 182572.83 40001.23 178646.28 00:24:59.253 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:59.253 Verification LBA range: start 0x0 length 0x400 00:24:59.253 Nvme7n1 : 1.15 418.14 26.13 0.00 0.00 140125.48 12913.02 116508.44 00:24:59.253 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:59.253 Verification LBA range: start 0x0 length 0x400 00:24:59.253 Nvme8n1 : 1.11 436.51 27.28 0.00 0.00 137630.22 15825.73 117285.17 00:24:59.253 Job: Nvme9n1 (Core Mask 0x1, workload: verify, 
depth: 64, IO size: 65536) 00:24:59.253 Verification LBA range: start 0x0 length 0x400 00:24:59.253 Nvme9n1 : 1.11 433.80 27.11 0.00 0.00 137994.01 11408.12 120392.06 00:24:59.253 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:59.253 Verification LBA range: start 0x0 length 0x400 00:24:59.253 Nvme10n1 : 1.14 311.74 19.48 0.00 0.00 182163.14 26214.40 168548.88 00:24:59.253 =================================================================================================================== 00:24:59.253 Total : 3901.20 243.82 0.00 0.00 153031.97 9709.04 178646.28 00:24:59.253 01:46:12 -- target/shutdown.sh@93 -- # stoptarget 00:24:59.253 01:46:12 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:24:59.253 01:46:12 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:24:59.253 01:46:12 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:59.253 01:46:12 -- target/shutdown.sh@45 -- # nvmftestfini 00:24:59.253 01:46:12 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:59.253 01:46:12 -- nvmf/common.sh@116 -- # sync 00:24:59.253 01:46:12 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:59.253 01:46:12 -- nvmf/common.sh@119 -- # set +e 00:24:59.253 01:46:12 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:59.253 01:46:12 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:59.253 rmmod nvme_tcp 00:24:59.253 rmmod nvme_fabrics 00:24:59.253 rmmod nvme_keyring 00:24:59.253 01:46:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:59.253 01:46:12 -- nvmf/common.sh@123 -- # set -e 00:24:59.253 01:46:12 -- nvmf/common.sh@124 -- # return 0 00:24:59.253 01:46:12 -- nvmf/common.sh@477 -- # '[' -n 3850265 ']' 00:24:59.253 01:46:12 -- nvmf/common.sh@478 -- # killprocess 3850265 00:24:59.253 01:46:12 -- common/autotest_common.sh@926 -- # '[' -z 3850265 ']' 00:24:59.253 01:46:12 -- 
common/autotest_common.sh@930 -- # kill -0 3850265 00:24:59.253 01:46:12 -- common/autotest_common.sh@931 -- # uname 00:24:59.253 01:46:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:59.253 01:46:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3850265 00:24:59.511 01:46:12 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:24:59.511 01:46:12 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:24:59.511 01:46:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3850265' 00:24:59.511 killing process with pid 3850265 00:24:59.511 01:46:12 -- common/autotest_common.sh@945 -- # kill 3850265 00:24:59.511 01:46:12 -- common/autotest_common.sh@950 -- # wait 3850265 00:24:59.770 01:46:12 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:59.770 01:46:12 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:59.770 01:46:12 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:59.770 01:46:12 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:59.770 01:46:12 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:59.770 01:46:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:59.770 01:46:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:59.770 01:46:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:02.312 01:46:14 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:02.312 00:25:02.312 real 0m11.995s 00:25:02.312 user 0m34.951s 00:25:02.312 sys 0m3.246s 00:25:02.312 01:46:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:02.312 01:46:14 -- common/autotest_common.sh@10 -- # set +x 00:25:02.312 ************************************ 00:25:02.312 END TEST nvmf_shutdown_tc1 00:25:02.312 ************************************ 00:25:02.312 01:46:14 -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:25:02.312 01:46:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 
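The `run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2` invocation traced here goes through a wrapper that first checks it received a test name plus a command (the `'[' 2 -le 1 ']'` probe above) and then brackets the run with the `START TEST` / `END TEST` banners seen in this log. A minimal sketch of that wrapper, reconstructed from the trace only (the real helper in autotest_common.sh also records timing and xtrace state):

```shell
# Banner-wrapping test runner: validate args, print START banner,
# run the given command, print END banner, propagate its exit status.
run_test() {
  [ $# -le 1 ] && { echo "usage: run_test <name> <cmd...>" >&2; return 1; }
  local name=$1; shift
  echo "************************************"
  echo "START TEST $name"
  echo "************************************"
  "$@"
  local rc=$?                    # capture the test command's status
  echo "************************************"
  echo "END TEST $name"
  echo "************************************"
  return $rc
}
run_test demo echo hi
```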
00:25:02.312 01:46:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:02.312 01:46:14 -- common/autotest_common.sh@10 -- # set +x 00:25:02.312 ************************************ 00:25:02.312 START TEST nvmf_shutdown_tc2 00:25:02.312 ************************************ 00:25:02.312 01:46:14 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc2 00:25:02.313 01:46:14 -- target/shutdown.sh@98 -- # starttarget 00:25:02.313 01:46:14 -- target/shutdown.sh@15 -- # nvmftestinit 00:25:02.313 01:46:14 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:02.313 01:46:14 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:02.313 01:46:14 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:02.313 01:46:14 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:02.313 01:46:14 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:02.313 01:46:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:02.313 01:46:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:02.313 01:46:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:02.313 01:46:14 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:02.313 01:46:14 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:02.313 01:46:14 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:02.313 01:46:14 -- common/autotest_common.sh@10 -- # set +x 00:25:02.313 01:46:14 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:02.313 01:46:14 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:02.313 01:46:14 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:02.313 01:46:14 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:02.313 01:46:14 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:02.313 01:46:14 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:02.313 01:46:14 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:02.313 01:46:14 -- nvmf/common.sh@294 -- # net_devs=() 00:25:02.313 01:46:14 -- nvmf/common.sh@294 -- # local -ga net_devs 
00:25:02.313 01:46:14 -- nvmf/common.sh@295 -- # e810=() 00:25:02.313 01:46:14 -- nvmf/common.sh@295 -- # local -ga e810 00:25:02.313 01:46:14 -- nvmf/common.sh@296 -- # x722=() 00:25:02.313 01:46:14 -- nvmf/common.sh@296 -- # local -ga x722 00:25:02.313 01:46:14 -- nvmf/common.sh@297 -- # mlx=() 00:25:02.313 01:46:14 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:02.313 01:46:14 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:02.313 01:46:14 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:02.313 01:46:14 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:02.313 01:46:14 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:02.313 01:46:14 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:02.313 01:46:14 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:02.313 01:46:14 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:02.313 01:46:14 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:02.313 01:46:14 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:02.313 01:46:14 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:02.313 01:46:14 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:02.313 01:46:14 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:02.313 01:46:14 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:02.313 01:46:14 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:02.313 01:46:14 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:02.313 01:46:14 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:02.313 01:46:14 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:02.313 01:46:14 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:02.313 01:46:14 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:02.313 Found 0000:0a:00.0 (0x8086 
- 0x159b) 00:25:02.313 01:46:14 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:02.313 01:46:14 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:02.313 01:46:14 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:02.313 01:46:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:02.313 01:46:14 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:02.313 01:46:14 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:02.313 01:46:14 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:02.313 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:02.313 01:46:14 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:02.313 01:46:14 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:02.313 01:46:14 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:02.313 01:46:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:02.313 01:46:14 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:02.313 01:46:14 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:02.313 01:46:14 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:02.313 01:46:14 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:02.313 01:46:14 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:02.313 01:46:14 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:02.313 01:46:14 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:02.313 01:46:14 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:02.313 01:46:14 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:02.313 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:02.313 01:46:14 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:02.313 01:46:14 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:02.313 01:46:14 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:02.313 01:46:14 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:02.313 01:46:14 -- 
nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:02.313 01:46:14 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:02.313 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:02.313 01:46:14 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:02.313 01:46:14 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:02.313 01:46:14 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:02.313 01:46:14 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:02.313 01:46:14 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:02.313 01:46:14 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:02.313 01:46:14 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:02.313 01:46:14 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:02.313 01:46:14 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:02.313 01:46:14 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:02.313 01:46:14 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:02.313 01:46:14 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:02.313 01:46:14 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:02.313 01:46:14 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:02.313 01:46:14 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:02.313 01:46:14 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:02.313 01:46:14 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:02.313 01:46:14 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:02.313 01:46:14 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:02.313 01:46:14 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:02.313 01:46:14 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:02.313 01:46:14 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:02.313 01:46:14 -- nvmf/common.sh@259 -- # ip 
netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:02.313 01:46:15 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:02.313 01:46:15 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:02.313 01:46:15 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:02.313 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:02.313 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.227 ms 00:25:02.313 00:25:02.313 --- 10.0.0.2 ping statistics --- 00:25:02.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:02.313 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:25:02.313 01:46:15 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:02.313 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:02.313 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:25:02.313 00:25:02.313 --- 10.0.0.1 ping statistics --- 00:25:02.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:02.313 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:25:02.313 01:46:15 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:02.313 01:46:15 -- nvmf/common.sh@410 -- # return 0 00:25:02.313 01:46:15 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:02.313 01:46:15 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:02.313 01:46:15 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:02.313 01:46:15 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:02.313 01:46:15 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:02.313 01:46:15 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:02.313 01:46:15 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:02.313 01:46:15 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:25:02.313 01:46:15 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:02.313 01:46:15 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:02.313 01:46:15 -- 
common/autotest_common.sh@10 -- # set +x 00:25:02.313 01:46:15 -- nvmf/common.sh@469 -- # nvmfpid=3851670 00:25:02.313 01:46:15 -- nvmf/common.sh@470 -- # waitforlisten 3851670 00:25:02.313 01:46:15 -- common/autotest_common.sh@819 -- # '[' -z 3851670 ']' 00:25:02.313 01:46:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:02.313 01:46:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:02.313 01:46:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:02.313 01:46:15 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:25:02.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:02.313 01:46:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:02.313 01:46:15 -- common/autotest_common.sh@10 -- # set +x 00:25:02.313 [2024-07-23 01:46:15.100355] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:25:02.313 [2024-07-23 01:46:15.100438] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:02.313 EAL: No free 2048 kB hugepages reported on node 1 00:25:02.313 [2024-07-23 01:46:15.169383] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:02.314 [2024-07-23 01:46:15.259303] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:02.314 [2024-07-23 01:46:15.259467] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:02.314 [2024-07-23 01:46:15.259487] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:02.314 [2024-07-23 01:46:15.259502] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:02.314 [2024-07-23 01:46:15.259586] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:02.314 [2024-07-23 01:46:15.259705] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:02.314 [2024-07-23 01:46:15.259774] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:02.314 [2024-07-23 01:46:15.259771] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:25:03.286 01:46:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:03.286 01:46:16 -- common/autotest_common.sh@852 -- # return 0 00:25:03.286 01:46:16 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:03.286 01:46:16 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:03.286 01:46:16 -- common/autotest_common.sh@10 -- # set +x 00:25:03.286 01:46:16 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:03.286 01:46:16 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:03.286 01:46:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:03.286 01:46:16 -- common/autotest_common.sh@10 -- # set +x 00:25:03.286 [2024-07-23 01:46:16.051115] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:03.286 01:46:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:03.286 01:46:16 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:25:03.286 01:46:16 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:25:03.286 01:46:16 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:03.286 01:46:16 -- common/autotest_common.sh@10 -- # set +x 00:25:03.286 01:46:16 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:03.286 01:46:16 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 
00:25:03.286 01:46:16 -- target/shutdown.sh@28 -- # cat 00:25:03.286 01:46:16 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:03.286 01:46:16 -- target/shutdown.sh@28 -- # cat 00:25:03.286 01:46:16 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:03.286 01:46:16 -- target/shutdown.sh@28 -- # cat 00:25:03.286 01:46:16 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:03.286 01:46:16 -- target/shutdown.sh@28 -- # cat 00:25:03.286 01:46:16 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:03.286 01:46:16 -- target/shutdown.sh@28 -- # cat 00:25:03.286 01:46:16 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:03.286 01:46:16 -- target/shutdown.sh@28 -- # cat 00:25:03.286 01:46:16 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:03.286 01:46:16 -- target/shutdown.sh@28 -- # cat 00:25:03.286 01:46:16 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:03.286 01:46:16 -- target/shutdown.sh@28 -- # cat 00:25:03.286 01:46:16 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:03.286 01:46:16 -- target/shutdown.sh@28 -- # cat 00:25:03.286 01:46:16 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:03.286 01:46:16 -- target/shutdown.sh@28 -- # cat 00:25:03.286 01:46:16 -- target/shutdown.sh@35 -- # rpc_cmd 00:25:03.286 01:46:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:03.286 01:46:16 -- common/autotest_common.sh@10 -- # set +x 00:25:03.286 Malloc1 00:25:03.286 [2024-07-23 01:46:16.126477] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:03.286 Malloc2 00:25:03.286 Malloc3 00:25:03.286 Malloc4 00:25:03.286 Malloc5 00:25:03.286 Malloc6 00:25:03.545 Malloc7 00:25:03.545 Malloc8 00:25:03.545 Malloc9 00:25:03.545 Malloc10 00:25:03.545 01:46:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:03.545 01:46:16 -- target/shutdown.sh@36 -- # 
timing_exit create_subsystems 00:25:03.545 01:46:16 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:03.545 01:46:16 -- common/autotest_common.sh@10 -- # set +x 00:25:03.545 01:46:16 -- target/shutdown.sh@102 -- # perfpid=3851865 00:25:03.545 01:46:16 -- target/shutdown.sh@103 -- # waitforlisten 3851865 /var/tmp/bdevperf.sock 00:25:03.545 01:46:16 -- common/autotest_common.sh@819 -- # '[' -z 3851865 ']' 00:25:03.545 01:46:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:03.545 01:46:16 -- target/shutdown.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:25:03.545 01:46:16 -- target/shutdown.sh@101 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:03.545 01:46:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:03.545 01:46:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:03.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
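bdevperf is launched above with `--json /dev/fd/63`, where fd 63 is fed by process substitution from `gen_nvmf_target_json` — the generated config is streamed straight to the app without touching disk. A small self-contained sketch of that fd-passing pattern, with a stub generator and `cat` standing in for bdevperf (both are hypothetical stand-ins, since the real tool needs NVMe-oF targets):

```shell
# The harness effectively runs:
#   bdevperf --json /dev/fd/63 ... 63< <(gen_nvmf_target_json ...)
# so the consumer reads its config from an inherited file descriptor.
gen_config() {
  # stub for the real JSON generator
  printf '{"subsystems":[]}'
}
consume_json() {
  # stand-in for `bdevperf --json <path>`: just read the config back
  cat "$1"
}
consume_json /dev/fd/63 63< <(gen_config)
```

This keeps the ephemeral config out of the workspace entirely, which matters here because `stoptarget` later deletes config files by fixed path.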
00:25:03.545 01:46:16 -- nvmf/common.sh@520 -- # config=() 00:25:03.545 01:46:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:03.545 01:46:16 -- nvmf/common.sh@520 -- # local subsystem config 00:25:03.545 01:46:16 -- common/autotest_common.sh@10 -- # set +x 00:25:03.545 01:46:16 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:03.545 01:46:16 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:03.545 { 00:25:03.545 "params": { 00:25:03.545 "name": "Nvme$subsystem", 00:25:03.545 "trtype": "$TEST_TRANSPORT", 00:25:03.545 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:03.545 "adrfam": "ipv4", 00:25:03.545 "trsvcid": "$NVMF_PORT", 00:25:03.545 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:03.545 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:03.545 "hdgst": ${hdgst:-false}, 00:25:03.545 "ddgst": ${ddgst:-false} 00:25:03.545 }, 00:25:03.545 "method": "bdev_nvme_attach_controller" 00:25:03.545 } 00:25:03.545 EOF 00:25:03.545 )") 00:25:03.545 01:46:16 -- nvmf/common.sh@542 -- # cat 00:25:03.545 01:46:16 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:03.545 01:46:16 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:03.545 { 00:25:03.545 "params": { 00:25:03.545 "name": "Nvme$subsystem", 00:25:03.545 "trtype": "$TEST_TRANSPORT", 00:25:03.545 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:03.545 "adrfam": "ipv4", 00:25:03.545 "trsvcid": "$NVMF_PORT", 00:25:03.545 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:03.545 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:03.545 "hdgst": ${hdgst:-false}, 00:25:03.545 "ddgst": ${ddgst:-false} 00:25:03.545 }, 00:25:03.545 "method": "bdev_nvme_attach_controller" 00:25:03.545 } 00:25:03.545 EOF 00:25:03.545 )") 00:25:03.545 01:46:16 -- nvmf/common.sh@542 -- # cat 00:25:03.545 01:46:16 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:03.545 01:46:16 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:03.545 { 00:25:03.545 "params": { 00:25:03.545 "name": 
"Nvme$subsystem", 00:25:03.545 "trtype": "$TEST_TRANSPORT", 00:25:03.545 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:03.545 "adrfam": "ipv4", 00:25:03.545 "trsvcid": "$NVMF_PORT", 00:25:03.545 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:03.545 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:03.545 "hdgst": ${hdgst:-false}, 00:25:03.545 "ddgst": ${ddgst:-false} 00:25:03.545 }, 00:25:03.545 "method": "bdev_nvme_attach_controller" 00:25:03.545 } 00:25:03.545 EOF 00:25:03.545 )") 00:25:03.545 01:46:16 -- nvmf/common.sh@542 -- # cat 00:25:03.545 01:46:16 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:03.545 01:46:16 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:03.545 { 00:25:03.545 "params": { 00:25:03.545 "name": "Nvme$subsystem", 00:25:03.545 "trtype": "$TEST_TRANSPORT", 00:25:03.545 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:03.545 "adrfam": "ipv4", 00:25:03.545 "trsvcid": "$NVMF_PORT", 00:25:03.545 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:03.545 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:03.545 "hdgst": ${hdgst:-false}, 00:25:03.545 "ddgst": ${ddgst:-false} 00:25:03.545 }, 00:25:03.545 "method": "bdev_nvme_attach_controller" 00:25:03.545 } 00:25:03.545 EOF 00:25:03.545 )") 00:25:03.545 01:46:16 -- nvmf/common.sh@542 -- # cat 00:25:03.545 01:46:16 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:03.545 01:46:16 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:03.545 { 00:25:03.545 "params": { 00:25:03.545 "name": "Nvme$subsystem", 00:25:03.545 "trtype": "$TEST_TRANSPORT", 00:25:03.545 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:03.545 "adrfam": "ipv4", 00:25:03.545 "trsvcid": "$NVMF_PORT", 00:25:03.545 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:03.545 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:03.545 "hdgst": ${hdgst:-false}, 00:25:03.545 "ddgst": ${ddgst:-false} 00:25:03.545 }, 00:25:03.545 "method": "bdev_nvme_attach_controller" 00:25:03.545 } 00:25:03.545 EOF 
00:25:03.545 )") 00:25:03.545 01:46:16 -- nvmf/common.sh@542 -- # cat 00:25:03.545 01:46:16 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:03.545 01:46:16 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:03.545 { 00:25:03.545 "params": { 00:25:03.545 "name": "Nvme$subsystem", 00:25:03.545 "trtype": "$TEST_TRANSPORT", 00:25:03.545 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:03.545 "adrfam": "ipv4", 00:25:03.545 "trsvcid": "$NVMF_PORT", 00:25:03.545 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:03.545 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:03.545 "hdgst": ${hdgst:-false}, 00:25:03.545 "ddgst": ${ddgst:-false} 00:25:03.545 }, 00:25:03.545 "method": "bdev_nvme_attach_controller" 00:25:03.545 } 00:25:03.545 EOF 00:25:03.545 )") 00:25:03.545 01:46:16 -- nvmf/common.sh@542 -- # cat 00:25:03.545 01:46:16 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:03.545 01:46:16 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:03.545 { 00:25:03.545 "params": { 00:25:03.545 "name": "Nvme$subsystem", 00:25:03.545 "trtype": "$TEST_TRANSPORT", 00:25:03.545 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:03.545 "adrfam": "ipv4", 00:25:03.545 "trsvcid": "$NVMF_PORT", 00:25:03.545 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:03.545 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:03.545 "hdgst": ${hdgst:-false}, 00:25:03.545 "ddgst": ${ddgst:-false} 00:25:03.545 }, 00:25:03.545 "method": "bdev_nvme_attach_controller" 00:25:03.545 } 00:25:03.545 EOF 00:25:03.545 )") 00:25:03.545 01:46:16 -- nvmf/common.sh@542 -- # cat 00:25:03.545 01:46:16 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:03.545 01:46:16 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:03.545 { 00:25:03.545 "params": { 00:25:03.545 "name": "Nvme$subsystem", 00:25:03.545 "trtype": "$TEST_TRANSPORT", 00:25:03.545 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:03.545 "adrfam": "ipv4", 00:25:03.545 "trsvcid": "$NVMF_PORT", 00:25:03.545 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:25:03.545 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:03.545 "hdgst": ${hdgst:-false}, 00:25:03.545 "ddgst": ${ddgst:-false} 00:25:03.545 }, 00:25:03.545 "method": "bdev_nvme_attach_controller" 00:25:03.545 } 00:25:03.545 EOF 00:25:03.545 )") 00:25:03.545 01:46:16 -- nvmf/common.sh@542 -- # cat 00:25:03.545 01:46:16 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:03.545 01:46:16 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:03.545 { 00:25:03.545 "params": { 00:25:03.545 "name": "Nvme$subsystem", 00:25:03.545 "trtype": "$TEST_TRANSPORT", 00:25:03.545 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:03.545 "adrfam": "ipv4", 00:25:03.545 "trsvcid": "$NVMF_PORT", 00:25:03.545 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:03.545 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:03.545 "hdgst": ${hdgst:-false}, 00:25:03.545 "ddgst": ${ddgst:-false} 00:25:03.545 }, 00:25:03.545 "method": "bdev_nvme_attach_controller" 00:25:03.545 } 00:25:03.545 EOF 00:25:03.545 )") 00:25:03.545 01:46:16 -- nvmf/common.sh@542 -- # cat 00:25:03.545 01:46:16 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:03.545 01:46:16 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:03.545 { 00:25:03.545 "params": { 00:25:03.545 "name": "Nvme$subsystem", 00:25:03.545 "trtype": "$TEST_TRANSPORT", 00:25:03.545 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:03.545 "adrfam": "ipv4", 00:25:03.545 "trsvcid": "$NVMF_PORT", 00:25:03.545 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:03.545 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:03.545 "hdgst": ${hdgst:-false}, 00:25:03.545 "ddgst": ${ddgst:-false} 00:25:03.545 }, 00:25:03.545 "method": "bdev_nvme_attach_controller" 00:25:03.545 } 00:25:03.545 EOF 00:25:03.545 )") 00:25:03.545 01:46:16 -- nvmf/common.sh@542 -- # cat 00:25:03.545 01:46:16 -- nvmf/common.sh@544 -- # jq . 
00:25:03.545 01:46:16 -- nvmf/common.sh@545 -- # IFS=, 00:25:03.545 01:46:16 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:03.545 "params": { 00:25:03.545 "name": "Nvme1", 00:25:03.545 "trtype": "tcp", 00:25:03.545 "traddr": "10.0.0.2", 00:25:03.545 "adrfam": "ipv4", 00:25:03.545 "trsvcid": "4420", 00:25:03.545 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:03.545 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:03.545 "hdgst": false, 00:25:03.545 "ddgst": false 00:25:03.545 }, 00:25:03.545 "method": "bdev_nvme_attach_controller" 00:25:03.545 },{ 00:25:03.545 "params": { 00:25:03.545 "name": "Nvme2", 00:25:03.545 "trtype": "tcp", 00:25:03.545 "traddr": "10.0.0.2", 00:25:03.545 "adrfam": "ipv4", 00:25:03.545 "trsvcid": "4420", 00:25:03.545 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:03.545 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:03.545 "hdgst": false, 00:25:03.545 "ddgst": false 00:25:03.545 }, 00:25:03.545 "method": "bdev_nvme_attach_controller" 00:25:03.545 },{ 00:25:03.545 "params": { 00:25:03.545 "name": "Nvme3", 00:25:03.545 "trtype": "tcp", 00:25:03.545 "traddr": "10.0.0.2", 00:25:03.545 "adrfam": "ipv4", 00:25:03.545 "trsvcid": "4420", 00:25:03.545 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:03.545 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:03.545 "hdgst": false, 00:25:03.545 "ddgst": false 00:25:03.545 }, 00:25:03.545 "method": "bdev_nvme_attach_controller" 00:25:03.545 },{ 00:25:03.545 "params": { 00:25:03.545 "name": "Nvme4", 00:25:03.545 "trtype": "tcp", 00:25:03.545 "traddr": "10.0.0.2", 00:25:03.545 "adrfam": "ipv4", 00:25:03.545 "trsvcid": "4420", 00:25:03.545 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:03.545 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:03.545 "hdgst": false, 00:25:03.545 "ddgst": false 00:25:03.545 }, 00:25:03.545 "method": "bdev_nvme_attach_controller" 00:25:03.545 },{ 00:25:03.545 "params": { 00:25:03.545 "name": "Nvme5", 00:25:03.545 "trtype": "tcp", 00:25:03.545 "traddr": "10.0.0.2", 00:25:03.545 "adrfam": "ipv4", 
00:25:03.545 "trsvcid": "4420", 00:25:03.545 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:03.545 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:03.545 "hdgst": false, 00:25:03.545 "ddgst": false 00:25:03.545 }, 00:25:03.545 "method": "bdev_nvme_attach_controller" 00:25:03.545 },{ 00:25:03.545 "params": { 00:25:03.545 "name": "Nvme6", 00:25:03.545 "trtype": "tcp", 00:25:03.545 "traddr": "10.0.0.2", 00:25:03.545 "adrfam": "ipv4", 00:25:03.545 "trsvcid": "4420", 00:25:03.545 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:03.545 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:03.545 "hdgst": false, 00:25:03.545 "ddgst": false 00:25:03.545 }, 00:25:03.545 "method": "bdev_nvme_attach_controller" 00:25:03.545 },{ 00:25:03.545 "params": { 00:25:03.545 "name": "Nvme7", 00:25:03.545 "trtype": "tcp", 00:25:03.545 "traddr": "10.0.0.2", 00:25:03.545 "adrfam": "ipv4", 00:25:03.545 "trsvcid": "4420", 00:25:03.545 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:03.545 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:03.545 "hdgst": false, 00:25:03.545 "ddgst": false 00:25:03.545 }, 00:25:03.545 "method": "bdev_nvme_attach_controller" 00:25:03.545 },{ 00:25:03.545 "params": { 00:25:03.545 "name": "Nvme8", 00:25:03.545 "trtype": "tcp", 00:25:03.545 "traddr": "10.0.0.2", 00:25:03.545 "adrfam": "ipv4", 00:25:03.545 "trsvcid": "4420", 00:25:03.545 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:03.545 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:25:03.545 "hdgst": false, 00:25:03.545 "ddgst": false 00:25:03.545 }, 00:25:03.545 "method": "bdev_nvme_attach_controller" 00:25:03.545 },{ 00:25:03.545 "params": { 00:25:03.545 "name": "Nvme9", 00:25:03.545 "trtype": "tcp", 00:25:03.545 "traddr": "10.0.0.2", 00:25:03.545 "adrfam": "ipv4", 00:25:03.545 "trsvcid": "4420", 00:25:03.545 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:03.545 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:25:03.545 "hdgst": false, 00:25:03.545 "ddgst": false 00:25:03.545 }, 00:25:03.545 "method": "bdev_nvme_attach_controller" 
00:25:03.545 },{ 00:25:03.545 "params": { 00:25:03.545 "name": "Nvme10", 00:25:03.545 "trtype": "tcp", 00:25:03.545 "traddr": "10.0.0.2", 00:25:03.545 "adrfam": "ipv4", 00:25:03.545 "trsvcid": "4420", 00:25:03.545 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:03.545 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:03.545 "hdgst": false, 00:25:03.545 "ddgst": false 00:25:03.545 }, 00:25:03.545 "method": "bdev_nvme_attach_controller" 00:25:03.545 }' 00:25:03.545 [2024-07-23 01:46:16.632208] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:25:03.545 [2024-07-23 01:46:16.632281] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3851865 ] 00:25:03.804 EAL: No free 2048 kB hugepages reported on node 1 00:25:03.804 [2024-07-23 01:46:16.696639] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:03.804 [2024-07-23 01:46:16.781216] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:05.702 Running I/O for 10 seconds... 
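The JSON printed above is assembled by `gen_nvmf_target_json` as traced: one heredoc fragment per subsystem is appended to a `config` array, `jq .` validates it, and the fragments are joined with commas by setting `IFS=,` before the final `printf`. A stripped-down sketch of that join mechanic (fragment contents simplified; only the array-plus-IFS pattern is the point):

```shell
# Build one fragment per subsystem, then join with commas into a single
# document, mirroring the config+=(...) / IFS=, / printf sequence above.
gen_nvmf_target_json() {
  local config=() subsystem
  for subsystem in "$@"; do
    config+=("{\"name\":\"Nvme$subsystem\",\"subnqn\":\"nqn.2016-06.io.spdk:cnode$subsystem\"}")
  done
  local IFS=,                      # "${config[*]}" joins elements with the first IFS char
  printf '[%s]\n' "${config[*]}"
}
gen_nvmf_target_json 1 2 10
```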
00:25:05.960 01:46:19 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:25:05.960 01:46:19 -- common/autotest_common.sh@852 -- # return 0
00:25:05.960 01:46:19 -- target/shutdown.sh@104 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init
00:25:05.960 01:46:19 -- common/autotest_common.sh@551 -- # xtrace_disable
00:25:05.960 01:46:19 -- common/autotest_common.sh@10 -- # set +x
00:25:06.218 01:46:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:25:06.218 01:46:19 -- target/shutdown.sh@106 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1
00:25:06.218 01:46:19 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']'
00:25:06.218 01:46:19 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']'
00:25:06.218 01:46:19 -- target/shutdown.sh@57 -- # local ret=1
00:25:06.218 01:46:19 -- target/shutdown.sh@58 -- # local i
00:25:06.218 01:46:19 -- target/shutdown.sh@59 -- # (( i = 10 ))
00:25:06.218 01:46:19 -- target/shutdown.sh@59 -- # (( i != 0 ))
00:25:06.218 01:46:19 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:25:06.218 01:46:19 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops'
00:25:06.218 01:46:19 -- common/autotest_common.sh@551 -- # xtrace_disable
00:25:06.218 01:46:19 -- common/autotest_common.sh@10 -- # set +x
00:25:06.218 01:46:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:25:06.218 01:46:19 -- target/shutdown.sh@60 -- # read_io_count=211
00:25:06.218 01:46:19 -- target/shutdown.sh@63 -- # '[' 211 -ge 100 ']'
00:25:06.218 01:46:19 -- target/shutdown.sh@64 -- # ret=0
00:25:06.218 01:46:19 -- target/shutdown.sh@65 -- # break
00:25:06.218 01:46:19 -- target/shutdown.sh@69 -- # return 0
00:25:06.218 01:46:19 -- target/shutdown.sh@109 -- # killprocess 3851865
00:25:06.218 01:46:19 -- common/autotest_common.sh@926 -- # '[' -z 3851865 ']'
00:25:06.218 01:46:19 -- common/autotest_common.sh@930 -- # kill -0 3851865
00:25:06.218 01:46:19 -- common/autotest_common.sh@931 -- # uname
00:25:06.218 01:46:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:25:06.218 01:46:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3851865
00:25:06.218 01:46:19 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:25:06.218 01:46:19 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:25:06.218 01:46:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3851865'
killing process with pid 3851865
00:25:06.218 01:46:19 -- common/autotest_common.sh@945 -- # kill 3851865
00:25:06.218 01:46:19 -- common/autotest_common.sh@950 -- # wait 3851865
00:25:06.218 Received shutdown signal, test time was about 0.769197 seconds
00:25:06.218
00:25:06.218                                                       Latency(us)
00:25:06.218 Device Information   : runtime(s)     IOPS    MiB/s   Fail/s    TO/s    Average       min       max
00:25:06.218 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:06.218 	 Verification LBA range: start 0x0 length 0x400
00:25:06.218 	 Nvme1n1            :     0.75   360.72    22.54     0.00     0.00  172410.90  24369.68  166218.71
00:25:06.218 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:06.218 	 Verification LBA range: start 0x0 length 0x400
00:25:06.218 	 Nvme2n1            :     0.75   418.99    26.19     0.00     0.00  146662.56  27962.03  117285.17
00:25:06.218 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:06.218 	 Verification LBA range: start 0x0 length 0x400
00:25:06.218 	 Nvme3n1            :     0.75   421.68    26.35     0.00     0.00  143989.03  29321.29  114178.28
00:25:06.218 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:06.218 	 Verification LBA range: start 0x0 length 0x400
00:25:06.218 	 Nvme4n1            :     0.76   416.81    26.05     0.00     0.00  144169.03  27185.30  122722.23
00:25:06.218 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:06.218 	 Verification LBA range: start 0x0 length 0x400
00:25:06.219 	 Nvme5n1            :     0.74   365.21    22.83     0.00     0.00  162920.85  25243.50  144470.47
00:25:06.219 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:06.219 	 Verification LBA range: start 0x0 length 0x400
00:25:06.219 	 Nvme6n1            :     0.76   416.01    26.00     0.00     0.00  141420.91  29709.65  124275.67
00:25:06.219 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:06.219 	 Verification LBA range: start 0x0 length 0x400
00:25:06.219 	 Nvme7n1            :     0.76   414.05    25.88     0.00     0.00  140730.49  26214.40  118838.61
00:25:06.219 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:06.219 	 Verification LBA range: start 0x0 length 0x400
00:25:06.219 	 Nvme8n1            :     0.77   410.91    25.68     0.00     0.00  141093.27  23495.87  118838.61
00:25:06.219 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:06.219 	 Verification LBA range: start 0x0 length 0x400
00:25:06.219 	 Nvme9n1            :     0.76   355.86    22.24     0.00     0.00  161063.47  23301.69  167772.16
00:25:06.219 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:06.219 	 Verification LBA range: start 0x0 length 0x400
00:25:06.219 	 Nvme10n1           :     0.77   409.96    25.62     0.00     0.00  139304.34  18544.26  119615.34
00:25:06.219 ===================================================================================================================
00:25:06.219 	 Total              :    3990.20   249.39     0.00     0.00  148689.48  18544.26  167772.16
00:25:06.477 01:46:19 -- target/shutdown.sh@112 -- # sleep 1
00:25:07.415 01:46:20 -- target/shutdown.sh@113 -- # kill -0 3851670
00:25:07.415 01:46:20 -- target/shutdown.sh@115 -- # stoptarget
00:25:07.415 01:46:20 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:25:07.415 01:46:20 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:25:07.415 01:46:20 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:25:07.415 01:46:20 -- target/shutdown.sh@45 -- # nvmftestfini
00:25:07.415 01:46:20 -- nvmf/common.sh@476 -- # nvmfcleanup
00:25:07.415 01:46:20 -- nvmf/common.sh@116 -- # sync 00:25:07.415 01:46:20 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:07.415 01:46:20 -- nvmf/common.sh@119 -- # set +e 00:25:07.415 01:46:20 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:07.415 01:46:20 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:07.415 rmmod nvme_tcp 00:25:07.415 rmmod nvme_fabrics 00:25:07.416 rmmod nvme_keyring 00:25:07.416 01:46:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:07.416 01:46:20 -- nvmf/common.sh@123 -- # set -e 00:25:07.416 01:46:20 -- nvmf/common.sh@124 -- # return 0 00:25:07.416 01:46:20 -- nvmf/common.sh@477 -- # '[' -n 3851670 ']' 00:25:07.416 01:46:20 -- nvmf/common.sh@478 -- # killprocess 3851670 00:25:07.416 01:46:20 -- common/autotest_common.sh@926 -- # '[' -z 3851670 ']' 00:25:07.416 01:46:20 -- common/autotest_common.sh@930 -- # kill -0 3851670 00:25:07.416 01:46:20 -- common/autotest_common.sh@931 -- # uname 00:25:07.416 01:46:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:07.416 01:46:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3851670 00:25:07.673 01:46:20 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:25:07.673 01:46:20 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:25:07.673 01:46:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3851670' 00:25:07.673 killing process with pid 3851670 00:25:07.673 01:46:20 -- common/autotest_common.sh@945 -- # kill 3851670 00:25:07.673 01:46:20 -- common/autotest_common.sh@950 -- # wait 3851670 00:25:07.931 01:46:20 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:07.931 01:46:20 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:07.931 01:46:20 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:07.931 01:46:20 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:07.931 01:46:20 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:07.931 01:46:20 -- nvmf/common.sh@616 -- 
# xtrace_disable_per_cmd _remove_spdk_ns 00:25:07.931 01:46:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:07.931 01:46:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:10.468 01:46:23 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:10.468 00:25:10.468 real 0m8.143s 00:25:10.468 user 0m25.503s 00:25:10.468 sys 0m1.539s 00:25:10.468 01:46:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:10.468 01:46:23 -- common/autotest_common.sh@10 -- # set +x 00:25:10.468 ************************************ 00:25:10.468 END TEST nvmf_shutdown_tc2 00:25:10.468 ************************************ 00:25:10.468 01:46:23 -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:25:10.468 01:46:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:10.468 01:46:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:10.468 01:46:23 -- common/autotest_common.sh@10 -- # set +x 00:25:10.468 ************************************ 00:25:10.468 START TEST nvmf_shutdown_tc3 00:25:10.468 ************************************ 00:25:10.468 01:46:23 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc3 00:25:10.468 01:46:23 -- target/shutdown.sh@120 -- # starttarget 00:25:10.468 01:46:23 -- target/shutdown.sh@15 -- # nvmftestinit 00:25:10.468 01:46:23 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:10.468 01:46:23 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:10.468 01:46:23 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:10.468 01:46:23 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:10.468 01:46:23 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:10.468 01:46:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:10.468 01:46:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:10.468 01:46:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:10.468 01:46:23 -- nvmf/common.sh@402 -- # [[ phy != 
virt ]] 00:25:10.468 01:46:23 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:10.468 01:46:23 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:10.468 01:46:23 -- common/autotest_common.sh@10 -- # set +x 00:25:10.468 01:46:23 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:10.468 01:46:23 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:10.468 01:46:23 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:10.468 01:46:23 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:10.468 01:46:23 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:10.468 01:46:23 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:10.468 01:46:23 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:10.468 01:46:23 -- nvmf/common.sh@294 -- # net_devs=() 00:25:10.468 01:46:23 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:10.468 01:46:23 -- nvmf/common.sh@295 -- # e810=() 00:25:10.468 01:46:23 -- nvmf/common.sh@295 -- # local -ga e810 00:25:10.468 01:46:23 -- nvmf/common.sh@296 -- # x722=() 00:25:10.468 01:46:23 -- nvmf/common.sh@296 -- # local -ga x722 00:25:10.468 01:46:23 -- nvmf/common.sh@297 -- # mlx=() 00:25:10.468 01:46:23 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:10.468 01:46:23 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:10.468 01:46:23 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:10.468 01:46:23 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:10.468 01:46:23 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:10.468 01:46:23 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:10.468 01:46:23 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:10.468 01:46:23 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:10.468 01:46:23 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:10.468 01:46:23 -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:10.468 01:46:23 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:10.468 01:46:23 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:10.468 01:46:23 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:10.468 01:46:23 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:10.468 01:46:23 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:10.468 01:46:23 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:10.468 01:46:23 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:10.468 01:46:23 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:10.468 01:46:23 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:10.468 01:46:23 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:10.468 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:10.468 01:46:23 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:10.468 01:46:23 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:10.468 01:46:23 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:10.468 01:46:23 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:10.468 01:46:23 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:10.468 01:46:23 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:10.468 01:46:23 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:10.468 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:10.468 01:46:23 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:10.468 01:46:23 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:10.468 01:46:23 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:10.468 01:46:23 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:10.468 01:46:23 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:10.468 01:46:23 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:10.468 01:46:23 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:10.468 01:46:23 -- nvmf/common.sh@371 -- # [[ tcp == 
rdma ]] 00:25:10.468 01:46:23 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:10.468 01:46:23 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:10.468 01:46:23 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:10.468 01:46:23 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:10.468 01:46:23 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:10.468 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:10.468 01:46:23 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:10.468 01:46:23 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:10.468 01:46:23 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:10.468 01:46:23 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:10.468 01:46:23 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:10.468 01:46:23 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:10.468 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:10.468 01:46:23 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:10.468 01:46:23 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:10.468 01:46:23 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:10.468 01:46:23 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:10.468 01:46:23 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:10.468 01:46:23 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:10.468 01:46:23 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:10.468 01:46:23 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:10.468 01:46:23 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:10.468 01:46:23 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:10.468 01:46:23 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:10.468 01:46:23 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:10.468 01:46:23 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 
00:25:10.468 01:46:23 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:25:10.468 01:46:23 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:25:10.468 01:46:23 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0
00:25:10.468 01:46:23 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1
00:25:10.469 01:46:23 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk
00:25:10.469 01:46:23 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:25:10.469 01:46:23 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:25:10.469 01:46:23 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:25:10.469 01:46:23 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up
00:25:10.469 01:46:23 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:25:10.469 01:46:23 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:25:10.469 01:46:23 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:25:10.469 01:46:23 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2
00:25:10.469 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:25:10.469 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms
00:25:10.469
00:25:10.469 --- 10.0.0.2 ping statistics ---
00:25:10.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:10.469 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms
00:25:10.469 01:46:23 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:25:10.469 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:25:10.469 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.155 ms
00:25:10.469
00:25:10.469 --- 10.0.0.1 ping statistics ---
00:25:10.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:10.469 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms
00:25:10.469 01:46:23 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:25:10.469 01:46:23 -- nvmf/common.sh@410 -- # return 0
00:25:10.469 01:46:23 -- nvmf/common.sh@438 -- # '[' '' == iso ']'
00:25:10.469 01:46:23 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:25:10.469 01:46:23 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:25:10.469 01:46:23 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:25:10.469 01:46:23 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:25:10.469 01:46:23 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:25:10.469 01:46:23 -- nvmf/common.sh@462 -- # modprobe nvme-tcp
00:25:10.469 01:46:23 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E
00:25:10.469 01:46:23 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:25:10.469 01:46:23 -- common/autotest_common.sh@712 -- # xtrace_disable
00:25:10.469 01:46:23 -- common/autotest_common.sh@10 -- # set +x
00:25:10.469 01:46:23 -- nvmf/common.sh@469 -- # nvmfpid=3852799
00:25:10.469 01:46:23 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
00:25:10.469 01:46:23 -- nvmf/common.sh@470 -- # waitforlisten 3852799
00:25:10.469 01:46:23 -- common/autotest_common.sh@819 -- # '[' -z 3852799 ']'
00:25:10.469 01:46:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:10.469 01:46:23 -- common/autotest_common.sh@824 -- # local max_retries=100
00:25:10.469 01:46:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:25:10.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:10.469 01:46:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:10.469 01:46:23 -- common/autotest_common.sh@10 -- # set +x 00:25:10.469 [2024-07-23 01:46:23.271610] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:25:10.469 [2024-07-23 01:46:23.271713] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:10.469 EAL: No free 2048 kB hugepages reported on node 1 00:25:10.469 [2024-07-23 01:46:23.342403] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:10.469 [2024-07-23 01:46:23.431671] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:10.469 [2024-07-23 01:46:23.431843] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:10.469 [2024-07-23 01:46:23.431865] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:10.469 [2024-07-23 01:46:23.431881] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:10.469 [2024-07-23 01:46:23.431966] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:10.469 [2024-07-23 01:46:23.432079] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:10.469 [2024-07-23 01:46:23.432146] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:25:10.469 [2024-07-23 01:46:23.432148] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:11.406 01:46:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:11.406 01:46:24 -- common/autotest_common.sh@852 -- # return 0 00:25:11.406 01:46:24 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:11.406 01:46:24 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:11.406 01:46:24 -- common/autotest_common.sh@10 -- # set +x 00:25:11.406 01:46:24 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:11.406 01:46:24 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:11.406 01:46:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:11.406 01:46:24 -- common/autotest_common.sh@10 -- # set +x 00:25:11.406 [2024-07-23 01:46:24.242186] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:11.406 01:46:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:11.406 01:46:24 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:25:11.406 01:46:24 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:25:11.406 01:46:24 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:11.406 01:46:24 -- common/autotest_common.sh@10 -- # set +x 00:25:11.406 01:46:24 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:11.406 01:46:24 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:11.406 01:46:24 -- target/shutdown.sh@28 -- # cat 00:25:11.406 01:46:24 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 
00:25:11.406 01:46:24 -- target/shutdown.sh@28 -- # cat 00:25:11.406 01:46:24 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:11.406 01:46:24 -- target/shutdown.sh@28 -- # cat 00:25:11.406 01:46:24 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:11.406 01:46:24 -- target/shutdown.sh@28 -- # cat 00:25:11.406 01:46:24 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:11.406 01:46:24 -- target/shutdown.sh@28 -- # cat 00:25:11.406 01:46:24 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:11.406 01:46:24 -- target/shutdown.sh@28 -- # cat 00:25:11.406 01:46:24 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:11.406 01:46:24 -- target/shutdown.sh@28 -- # cat 00:25:11.406 01:46:24 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:11.406 01:46:24 -- target/shutdown.sh@28 -- # cat 00:25:11.406 01:46:24 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:11.406 01:46:24 -- target/shutdown.sh@28 -- # cat 00:25:11.406 01:46:24 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:11.406 01:46:24 -- target/shutdown.sh@28 -- # cat 00:25:11.406 01:46:24 -- target/shutdown.sh@35 -- # rpc_cmd 00:25:11.406 01:46:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:11.406 01:46:24 -- common/autotest_common.sh@10 -- # set +x 00:25:11.406 Malloc1 00:25:11.406 [2024-07-23 01:46:24.317223] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:11.406 Malloc2 00:25:11.406 Malloc3 00:25:11.406 Malloc4 00:25:11.406 Malloc5 00:25:11.664 Malloc6 00:25:11.664 Malloc7 00:25:11.664 Malloc8 00:25:11.664 Malloc9 00:25:11.664 Malloc10 00:25:11.664 01:46:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:11.664 01:46:24 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:25:11.664 01:46:24 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:11.664 01:46:24 -- 
common/autotest_common.sh@10 -- # set +x 00:25:11.924 01:46:24 -- target/shutdown.sh@124 -- # perfpid=3852989 00:25:11.924 01:46:24 -- target/shutdown.sh@125 -- # waitforlisten 3852989 /var/tmp/bdevperf.sock 00:25:11.924 01:46:24 -- common/autotest_common.sh@819 -- # '[' -z 3852989 ']' 00:25:11.924 01:46:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:11.924 01:46:24 -- target/shutdown.sh@123 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:11.924 01:46:24 -- target/shutdown.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:25:11.924 01:46:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:11.924 01:46:24 -- nvmf/common.sh@520 -- # config=() 00:25:11.924 01:46:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:11.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:25:11.924 01:46:24 -- nvmf/common.sh@520 -- # local subsystem config 00:25:11.924 01:46:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:11.924 01:46:24 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:11.924 01:46:24 -- common/autotest_common.sh@10 -- # set +x 00:25:11.924 01:46:24 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:11.924 { 00:25:11.924 "params": { 00:25:11.924 "name": "Nvme$subsystem", 00:25:11.924 "trtype": "$TEST_TRANSPORT", 00:25:11.924 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:11.924 "adrfam": "ipv4", 00:25:11.924 "trsvcid": "$NVMF_PORT", 00:25:11.924 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:11.924 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:11.924 "hdgst": ${hdgst:-false}, 00:25:11.924 "ddgst": ${ddgst:-false} 00:25:11.925 }, 00:25:11.925 "method": "bdev_nvme_attach_controller" 00:25:11.925 } 00:25:11.925 EOF 00:25:11.925 )") 00:25:11.925 01:46:24 -- nvmf/common.sh@542 -- # cat 00:25:11.925 01:46:24 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:11.925 01:46:24 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:11.925 { 00:25:11.925 "params": { 00:25:11.925 "name": "Nvme$subsystem", 00:25:11.925 "trtype": "$TEST_TRANSPORT", 00:25:11.925 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:11.925 "adrfam": "ipv4", 00:25:11.925 "trsvcid": "$NVMF_PORT", 00:25:11.925 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:11.925 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:11.925 "hdgst": ${hdgst:-false}, 00:25:11.925 "ddgst": ${ddgst:-false} 00:25:11.925 }, 00:25:11.925 "method": "bdev_nvme_attach_controller" 00:25:11.925 } 00:25:11.925 EOF 00:25:11.925 )") 00:25:11.925 01:46:24 -- nvmf/common.sh@542 -- # cat 00:25:11.925 01:46:24 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:11.925 01:46:24 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:11.925 { 00:25:11.925 "params": { 00:25:11.925 "name": "Nvme$subsystem", 00:25:11.925 "trtype": "$TEST_TRANSPORT", 
00:25:11.925 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:11.925 "adrfam": "ipv4", 00:25:11.925 "trsvcid": "$NVMF_PORT", 00:25:11.925 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:11.925 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:11.925 "hdgst": ${hdgst:-false}, 00:25:11.925 "ddgst": ${ddgst:-false} 00:25:11.925 }, 00:25:11.925 "method": "bdev_nvme_attach_controller" 00:25:11.925 } 00:25:11.925 EOF 00:25:11.925 )") 00:25:11.925 01:46:24 -- nvmf/common.sh@542 -- # cat 00:25:11.925 01:46:24 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:11.925 01:46:24 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:11.925 { 00:25:11.925 "params": { 00:25:11.925 "name": "Nvme$subsystem", 00:25:11.925 "trtype": "$TEST_TRANSPORT", 00:25:11.925 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:11.925 "adrfam": "ipv4", 00:25:11.925 "trsvcid": "$NVMF_PORT", 00:25:11.925 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:11.925 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:11.925 "hdgst": ${hdgst:-false}, 00:25:11.925 "ddgst": ${ddgst:-false} 00:25:11.925 }, 00:25:11.925 "method": "bdev_nvme_attach_controller" 00:25:11.925 } 00:25:11.925 EOF 00:25:11.925 )") 00:25:11.925 01:46:24 -- nvmf/common.sh@542 -- # cat 00:25:11.925 01:46:24 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:11.925 01:46:24 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:11.925 { 00:25:11.925 "params": { 00:25:11.925 "name": "Nvme$subsystem", 00:25:11.925 "trtype": "$TEST_TRANSPORT", 00:25:11.925 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:11.925 "adrfam": "ipv4", 00:25:11.925 "trsvcid": "$NVMF_PORT", 00:25:11.925 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:11.925 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:11.925 "hdgst": ${hdgst:-false}, 00:25:11.925 "ddgst": ${ddgst:-false} 00:25:11.925 }, 00:25:11.925 "method": "bdev_nvme_attach_controller" 00:25:11.925 } 00:25:11.925 EOF 00:25:11.925 )") 00:25:11.925 01:46:24 -- nvmf/common.sh@542 -- 
# cat 00:25:11.925 01:46:24 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:11.925 01:46:24 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:11.925 { 00:25:11.925 "params": { 00:25:11.925 "name": "Nvme$subsystem", 00:25:11.925 "trtype": "$TEST_TRANSPORT", 00:25:11.925 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:11.925 "adrfam": "ipv4", 00:25:11.925 "trsvcid": "$NVMF_PORT", 00:25:11.925 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:11.925 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:11.925 "hdgst": ${hdgst:-false}, 00:25:11.925 "ddgst": ${ddgst:-false} 00:25:11.925 }, 00:25:11.925 "method": "bdev_nvme_attach_controller" 00:25:11.925 } 00:25:11.925 EOF 00:25:11.925 )") 00:25:11.925 01:46:24 -- nvmf/common.sh@542 -- # cat 00:25:11.925 01:46:24 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:11.925 01:46:24 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:11.925 { 00:25:11.925 "params": { 00:25:11.925 "name": "Nvme$subsystem", 00:25:11.925 "trtype": "$TEST_TRANSPORT", 00:25:11.925 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:11.925 "adrfam": "ipv4", 00:25:11.925 "trsvcid": "$NVMF_PORT", 00:25:11.925 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:11.925 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:11.925 "hdgst": ${hdgst:-false}, 00:25:11.925 "ddgst": ${ddgst:-false} 00:25:11.925 }, 00:25:11.925 "method": "bdev_nvme_attach_controller" 00:25:11.925 } 00:25:11.925 EOF 00:25:11.925 )") 00:25:11.925 01:46:24 -- nvmf/common.sh@542 -- # cat 00:25:11.925 01:46:24 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:11.925 01:46:24 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:11.925 { 00:25:11.925 "params": { 00:25:11.925 "name": "Nvme$subsystem", 00:25:11.925 "trtype": "$TEST_TRANSPORT", 00:25:11.925 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:11.925 "adrfam": "ipv4", 00:25:11.925 "trsvcid": "$NVMF_PORT", 00:25:11.925 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:11.925 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:25:11.925 "hdgst": ${hdgst:-false}, 00:25:11.925 "ddgst": ${ddgst:-false} 00:25:11.925 }, 00:25:11.925 "method": "bdev_nvme_attach_controller" 00:25:11.925 } 00:25:11.925 EOF 00:25:11.925 )") 00:25:11.925 01:46:24 -- nvmf/common.sh@542 -- # cat 00:25:11.925 01:46:24 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:11.925 01:46:24 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:11.925 { 00:25:11.925 "params": { 00:25:11.925 "name": "Nvme$subsystem", 00:25:11.925 "trtype": "$TEST_TRANSPORT", 00:25:11.925 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:11.925 "adrfam": "ipv4", 00:25:11.925 "trsvcid": "$NVMF_PORT", 00:25:11.925 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:11.925 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:11.925 "hdgst": ${hdgst:-false}, 00:25:11.925 "ddgst": ${ddgst:-false} 00:25:11.925 }, 00:25:11.925 "method": "bdev_nvme_attach_controller" 00:25:11.925 } 00:25:11.925 EOF 00:25:11.925 )") 00:25:11.925 01:46:24 -- nvmf/common.sh@542 -- # cat 00:25:11.925 01:46:24 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:11.925 01:46:24 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:11.925 { 00:25:11.925 "params": { 00:25:11.925 "name": "Nvme$subsystem", 00:25:11.925 "trtype": "$TEST_TRANSPORT", 00:25:11.925 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:11.925 "adrfam": "ipv4", 00:25:11.925 "trsvcid": "$NVMF_PORT", 00:25:11.925 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:11.925 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:11.925 "hdgst": ${hdgst:-false}, 00:25:11.925 "ddgst": ${ddgst:-false} 00:25:11.925 }, 00:25:11.925 "method": "bdev_nvme_attach_controller" 00:25:11.925 } 00:25:11.925 EOF 00:25:11.925 )") 00:25:11.925 01:46:24 -- nvmf/common.sh@542 -- # cat 00:25:11.925 01:46:24 -- nvmf/common.sh@544 -- # jq . 
00:25:11.925 01:46:24 -- nvmf/common.sh@545 -- # IFS=, 00:25:11.925 01:46:24 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:11.925 "params": { 00:25:11.925 "name": "Nvme1", 00:25:11.925 "trtype": "tcp", 00:25:11.925 "traddr": "10.0.0.2", 00:25:11.925 "adrfam": "ipv4", 00:25:11.925 "trsvcid": "4420", 00:25:11.925 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:11.925 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:11.925 "hdgst": false, 00:25:11.925 "ddgst": false 00:25:11.925 }, 00:25:11.925 "method": "bdev_nvme_attach_controller" 00:25:11.925 },{ 00:25:11.925 "params": { 00:25:11.925 "name": "Nvme2", 00:25:11.925 "trtype": "tcp", 00:25:11.925 "traddr": "10.0.0.2", 00:25:11.925 "adrfam": "ipv4", 00:25:11.925 "trsvcid": "4420", 00:25:11.925 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:11.925 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:11.925 "hdgst": false, 00:25:11.925 "ddgst": false 00:25:11.925 }, 00:25:11.925 "method": "bdev_nvme_attach_controller" 00:25:11.925 },{ 00:25:11.925 "params": { 00:25:11.925 "name": "Nvme3", 00:25:11.925 "trtype": "tcp", 00:25:11.925 "traddr": "10.0.0.2", 00:25:11.925 "adrfam": "ipv4", 00:25:11.925 "trsvcid": "4420", 00:25:11.925 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:11.925 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:11.925 "hdgst": false, 00:25:11.925 "ddgst": false 00:25:11.925 }, 00:25:11.925 "method": "bdev_nvme_attach_controller" 00:25:11.925 },{ 00:25:11.925 "params": { 00:25:11.925 "name": "Nvme4", 00:25:11.925 "trtype": "tcp", 00:25:11.925 "traddr": "10.0.0.2", 00:25:11.925 "adrfam": "ipv4", 00:25:11.925 "trsvcid": "4420", 00:25:11.925 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:11.925 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:11.925 "hdgst": false, 00:25:11.925 "ddgst": false 00:25:11.925 }, 00:25:11.926 "method": "bdev_nvme_attach_controller" 00:25:11.926 },{ 00:25:11.926 "params": { 00:25:11.926 "name": "Nvme5", 00:25:11.926 "trtype": "tcp", 00:25:11.926 "traddr": "10.0.0.2", 00:25:11.926 "adrfam": "ipv4", 
00:25:11.926 "trsvcid": "4420", 00:25:11.926 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:11.926 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:11.926 "hdgst": false, 00:25:11.926 "ddgst": false 00:25:11.926 }, 00:25:11.926 "method": "bdev_nvme_attach_controller" 00:25:11.926 },{ 00:25:11.926 "params": { 00:25:11.926 "name": "Nvme6", 00:25:11.926 "trtype": "tcp", 00:25:11.926 "traddr": "10.0.0.2", 00:25:11.926 "adrfam": "ipv4", 00:25:11.926 "trsvcid": "4420", 00:25:11.926 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:11.926 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:11.926 "hdgst": false, 00:25:11.926 "ddgst": false 00:25:11.926 }, 00:25:11.926 "method": "bdev_nvme_attach_controller" 00:25:11.926 },{ 00:25:11.926 "params": { 00:25:11.926 "name": "Nvme7", 00:25:11.926 "trtype": "tcp", 00:25:11.926 "traddr": "10.0.0.2", 00:25:11.926 "adrfam": "ipv4", 00:25:11.926 "trsvcid": "4420", 00:25:11.926 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:11.926 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:11.926 "hdgst": false, 00:25:11.926 "ddgst": false 00:25:11.926 }, 00:25:11.926 "method": "bdev_nvme_attach_controller" 00:25:11.926 },{ 00:25:11.926 "params": { 00:25:11.926 "name": "Nvme8", 00:25:11.926 "trtype": "tcp", 00:25:11.926 "traddr": "10.0.0.2", 00:25:11.926 "adrfam": "ipv4", 00:25:11.926 "trsvcid": "4420", 00:25:11.926 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:11.926 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:25:11.926 "hdgst": false, 00:25:11.926 "ddgst": false 00:25:11.926 }, 00:25:11.926 "method": "bdev_nvme_attach_controller" 00:25:11.926 },{ 00:25:11.926 "params": { 00:25:11.926 "name": "Nvme9", 00:25:11.926 "trtype": "tcp", 00:25:11.926 "traddr": "10.0.0.2", 00:25:11.926 "adrfam": "ipv4", 00:25:11.926 "trsvcid": "4420", 00:25:11.926 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:11.926 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:25:11.926 "hdgst": false, 00:25:11.926 "ddgst": false 00:25:11.926 }, 00:25:11.926 "method": "bdev_nvme_attach_controller" 
00:25:11.926 },{ 00:25:11.926 "params": { 00:25:11.926 "name": "Nvme10", 00:25:11.926 "trtype": "tcp", 00:25:11.926 "traddr": "10.0.0.2", 00:25:11.926 "adrfam": "ipv4", 00:25:11.926 "trsvcid": "4420", 00:25:11.926 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:11.926 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:11.926 "hdgst": false, 00:25:11.926 "ddgst": false 00:25:11.926 }, 00:25:11.926 "method": "bdev_nvme_attach_controller" 00:25:11.926 }' 00:25:11.926 [2024-07-23 01:46:24.803488] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:25:11.926 [2024-07-23 01:46:24.803581] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3852989 ] 00:25:11.926 EAL: No free 2048 kB hugepages reported on node 1 00:25:11.926 [2024-07-23 01:46:24.867408] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:11.926 [2024-07-23 01:46:24.951794] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:13.831 Running I/O for 10 seconds... 
00:25:14.415 01:46:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:14.415 01:46:27 -- common/autotest_common.sh@852 -- # return 0 00:25:14.415 01:46:27 -- target/shutdown.sh@126 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:25:14.415 01:46:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:14.415 01:46:27 -- common/autotest_common.sh@10 -- # set +x 00:25:14.415 01:46:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:14.415 01:46:27 -- target/shutdown.sh@129 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:14.415 01:46:27 -- target/shutdown.sh@131 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:25:14.415 01:46:27 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:25:14.415 01:46:27 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:25:14.415 01:46:27 -- target/shutdown.sh@57 -- # local ret=1 00:25:14.415 01:46:27 -- target/shutdown.sh@58 -- # local i 00:25:14.415 01:46:27 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:25:14.415 01:46:27 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:25:14.415 01:46:27 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:14.415 01:46:27 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:25:14.415 01:46:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:14.415 01:46:27 -- common/autotest_common.sh@10 -- # set +x 00:25:14.415 01:46:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:14.415 01:46:27 -- target/shutdown.sh@60 -- # read_io_count=211 00:25:14.415 01:46:27 -- target/shutdown.sh@63 -- # '[' 211 -ge 100 ']' 00:25:14.415 01:46:27 -- target/shutdown.sh@64 -- # ret=0 00:25:14.415 01:46:27 -- target/shutdown.sh@65 -- # break 00:25:14.415 01:46:27 -- target/shutdown.sh@69 -- # return 0 00:25:14.415 01:46:27 -- target/shutdown.sh@134 -- # killprocess 3852799 00:25:14.415 01:46:27 -- common/autotest_common.sh@926 -- # '[' -z 
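The lines above trace target/shutdown.sh's `waitforio` helper: it initializes `ret=1` and `i=10`, then repeatedly reads `num_read_ops` for Nvme1n1 via `rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat` piped through jq, setting `ret=0` and breaking once the count is at least 100 (here the first read already returned 211). A minimal sketch of that polling shape follows; `read_io_count` is a stub standing in for the RPC/jq pipeline, since the real call needs a live bdevperf socket, and the retry delay is illustrative:

```shell
#!/usr/bin/env bash
# Sketch of waitforio from target/shutdown.sh: poll the read-op
# count until it crosses the threshold or attempts run out.
# Stub for: rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 \
#             | jq -r '.bdevs[0].num_read_ops'
read_io_count() { echo 211; }

waitforio() {
    local ret=1 i count
    for ((i = 10; i != 0; i--)); do
        count=$(read_io_count)
        if [ "$count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25  # retry delay (illustrative)
    done
    return $ret
}

waitforio && echo "I/O threshold reached"
```

On success the test proceeds to `killprocess` for the target pid, exactly as the trace does next.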
3852799 ']' 00:25:14.415 01:46:27 -- common/autotest_common.sh@930 -- # kill -0 3852799 00:25:14.415 01:46:27 -- common/autotest_common.sh@931 -- # uname 00:25:14.415 01:46:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:14.415 01:46:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3852799 00:25:14.415 01:46:27 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:25:14.415 01:46:27 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:25:14.415 01:46:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3852799' 00:25:14.415 killing process with pid 3852799 00:25:14.415 01:46:27 -- common/autotest_common.sh@945 -- # kill 3852799 00:25:14.415 01:46:27 -- common/autotest_common.sh@950 -- # wait 3852799 00:25:14.415 [2024-07-23 01:46:27.383777] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644ff0 is same with the state(5) to be set 00:25:14.415 [2024-07-23 01:46:27.383926] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644ff0 is same with the state(5) to be set 00:25:14.415 [2024-07-23 01:46:27.383944] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644ff0 is same with the state(5) to be set 00:25:14.415 [2024-07-23 01:46:27.383958] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644ff0 is same with the state(5) to be set 00:25:14.415 [2024-07-23 01:46:27.383979] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644ff0 is same with the state(5) to be set 00:25:14.415 [2024-07-23 01:46:27.383992] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644ff0 is same with the state(5) to be set 00:25:14.415 [2024-07-23 01:46:27.384005] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644ff0 is same with the state(5) to be set 00:25:14.416 [2024-07-23 01:46:27.384018] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644ff0 is same with the state(5) to be set 00:25:14.416 [2024-07-23 01:46:27.384041] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644ff0 is same with the state(5) to be set 00:25:14.416 [2024-07-23 01:46:27.384054] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644ff0 is same with the state(5) to be set 00:25:14.416 [2024-07-23 01:46:27.384066] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644ff0 is same with the state(5) to be set 00:25:14.416 [2024-07-23 01:46:27.384078] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644ff0 is same with the state(5) to be set 00:25:14.416 [2024-07-23 01:46:27.384090] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644ff0 is same with the state(5) to be set 00:25:14.416 [2024-07-23 01:46:27.384104] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644ff0 is same with the state(5) to be set 00:25:14.416 [2024-07-23 01:46:27.384116] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644ff0 is same with the state(5) to be set 00:25:14.416 [2024-07-23 01:46:27.384129] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644ff0 is same with the state(5) to be set 00:25:14.416 [2024-07-23 01:46:27.384141] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644ff0 is same with the state(5) to be set 00:25:14.416 [2024-07-23 01:46:27.384153] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644ff0 is same with the state(5) to be set 00:25:14.416 [2024-07-23 01:46:27.384166] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644ff0 is same with the state(5) to be set 00:25:14.416 [2024-07-23 01:46:27.384178] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644ff0 is same with the state(5) to be set 00:25:14.416 [2024-07-23 01:46:27.384191] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644ff0 is same with the state(5) to be set 00:25:14.416 [2024-07-23 01:46:27.384203] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644ff0 is same with the state(5) to be set 00:25:14.416 [2024-07-23 01:46:27.384215] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644ff0 is same with the state(5) to be set 00:25:14.416 [2024-07-23 01:46:27.384227] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644ff0 is same with the state(5) to be set 00:25:14.416 [2024-07-23 01:46:27.384240] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644ff0 is same with the state(5) to be set 00:25:14.416 [2024-07-23 01:46:27.384252] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644ff0 is same with the state(5) to be set 00:25:14.416 [2024-07-23 01:46:27.384264] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644ff0 is same with the state(5) to be set 00:25:14.416 [2024-07-23 01:46:27.384275] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644ff0 is same with the state(5) to be set 00:25:14.416 [2024-07-23 01:46:27.384288] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644ff0 is same with the state(5) to be set 00:25:14.416 [2024-07-23 01:46:27.384300] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644ff0 is same with the state(5) to be set 00:25:14.416 [2024-07-23 01:46:27.384312] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644ff0 is same with the state(5) to be set 00:25:14.416 [2024-07-23 01:46:27.384324] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644ff0 is same with the state(5) to be set 00:25:14.416 [2024-07-23 01:46:27.384336] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644ff0 is same with the state(5) to be set 00:25:14.416 [2024-07-23 01:46:27.384350] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644ff0 is same with the state(5) to be set 00:25:14.416 [2024-07-23 01:46:27.384370] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644ff0 is same with the state(5) to be set 00:25:14.416 [2024-07-23 01:46:27.384382] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644ff0 is same with the state(5) to be set 00:25:14.416 [2024-07-23 01:46:27.384397] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644ff0 is same with the state(5) to be set 00:25:14.416 [2024-07-23 01:46:27.384410] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644ff0 is same with the state(5) to be set 00:25:14.416 [2024-07-23 01:46:27.384422] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644ff0 is same with the state(5) to be set 00:25:14.416 [2024-07-23 01:46:27.384434] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644ff0 is same with the state(5) to be set 00:25:14.416 [2024-07-23 01:46:27.384446] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644ff0 is same with the state(5) to be set 00:25:14.416 [2024-07-23 01:46:27.384458] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644ff0 is same with the state(5) to be set 00:25:14.416 [2024-07-23 01:46:27.384471] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644ff0 is same with the state(5) to be set 00:25:14.416 [2024-07-23 01:46:27.384483] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644ff0 is same with the state(5) to be set 00:25:14.416 [2024-07-23 01:46:27.384495] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644ff0 is same with the state(5) to be set 00:25:14.416 [2024-07-23 01:46:27.384507] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644ff0 is same with the state(5) to be set 00:25:14.416 [2024-07-23 01:46:27.384519] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644ff0 is same with the state(5) to be set 00:25:14.416 [2024-07-23 01:46:27.384532] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644ff0 is same with the state(5) to be set 00:25:14.416 [2024-07-23 01:46:27.384543] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644ff0 is same with the state(5) to be set 00:25:14.416 [2024-07-23 01:46:27.384556] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644ff0 is same with the state(5) to be set 00:25:14.416 [2024-07-23 01:46:27.384568] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644ff0 is same with the state(5) to be set 00:25:14.416 [2024-07-23 01:46:27.384580] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644ff0 is same with the state(5) to be set 00:25:14.416 [2024-07-23 01:46:27.384602] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644ff0 is same with the state(5) to be set 00:25:14.416 [2024-07-23 01:46:27.384624] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644ff0 is same with the state(5) to be set 00:25:14.416 [2024-07-23 01:46:27.384639] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644ff0 is same with the state(5) to be set 00:25:14.416 [2024-07-23 01:46:27.384652] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644ff0 is same with the state(5) to be set 00:25:14.416 [2024-07-23 01:46:27.384675] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644ff0 is same with the state(5) to be set 00:25:14.416 [2024-07-23 01:46:27.384687] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644ff0 is same with the state(5) to be set 00:25:14.416 [2024-07-23 01:46:27.384699] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644ff0 is same with the state(5) to be set 00:25:14.416 [2024-07-23 01:46:27.384710] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644ff0 is same with the state(5) to be set 00:25:14.416 [2024-07-23 01:46:27.384722] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644ff0 is same with the state(5) to be set 00:25:14.416 [2024-07-23 01:46:27.384734] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644ff0 is same with the state(5) to be set 00:25:14.416 [2024-07-23 01:46:27.384745] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1644ff0 is same with the state(5) to be set 00:25:14.416 [2024-07-23 01:46:27.386357] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647980 is same with the state(5) to be set 00:25:14.416 [2024-07-23 01:46:27.386403] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647980 is same with the state(5) to be set 00:25:14.416 [2024-07-23 01:46:27.386418] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647980 is same with the state(5) to be set 00:25:14.416 [2024-07-23 01:46:27.386431] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647980 is same with the state(5) to be set 00:25:14.416 [2024-07-23 01:46:27.386443] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647980 is same with the state(5) to be set 00:25:14.416 [2024-07-23 01:46:27.386456] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647980 is same with the state(5) to be set 00:25:14.416 [2024-07-23 01:46:27.386469] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647980 is same with the state(5) to be set 00:25:14.416 [2024-07-23 01:46:27.386481] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647980 is same with the state(5) to be set 00:25:14.416 [2024-07-23 01:46:27.386503] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647980 is same with the state(5) to be set 00:25:14.416 [2024-07-23 01:46:27.386515] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647980 is same with the state(5) to be set 00:25:14.416 [2024-07-23 01:46:27.386527] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647980 is same with the state(5) to be set 00:25:14.416 [2024-07-23 01:46:27.386540] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647980 is same with the state(5) to be set 00:25:14.416 [2024-07-23 01:46:27.386552] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647980 is same with the state(5) to be set 00:25:14.416 [2024-07-23 01:46:27.386564] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647980 is same with the state(5) to be set 00:25:14.416 [2024-07-23 01:46:27.386576] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647980 is same with the state(5) to be set 00:25:14.416 [2024-07-23 01:46:27.386588] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647980 is same with the state(5) to be set 00:25:14.416 [2024-07-23 01:46:27.386610] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647980 is same with the state(5) to be set 00:25:14.416 [2024-07-23 01:46:27.386637] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647980 is same with the state(5) to be set 00:25:14.416 [2024-07-23 01:46:27.386651] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647980 is same with the state(5) to be set 00:25:14.416 [2024-07-23 01:46:27.386664] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647980 is same with the state(5) to be set 00:25:14.416 [2024-07-23 01:46:27.386676] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647980 is same with the state(5) to be set 00:25:14.416 [2024-07-23 01:46:27.386688] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647980 is same with the state(5) to be set 00:25:14.416 [2024-07-23 01:46:27.386700] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647980 is same with the state(5) to be set 00:25:14.416 [2024-07-23 01:46:27.386713] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647980 is same with the state(5) to be set 00:25:14.417 [2024-07-23 01:46:27.386726] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647980 is same with the state(5) to be set 00:25:14.417 [2024-07-23 01:46:27.386738] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647980 is same with the state(5) to be set 00:25:14.417 [2024-07-23 01:46:27.386751] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647980 is same with the state(5) to be set 00:25:14.417 [2024-07-23 01:46:27.386774] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647980 is same with the state(5) to be set 00:25:14.417 [2024-07-23 01:46:27.386787] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647980 is same with the state(5) to be set 00:25:14.417 [2024-07-23 01:46:27.386799] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647980 is same with the state(5) to be set 00:25:14.417 [2024-07-23 01:46:27.386811] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647980 is same with the state(5) to be set 00:25:14.417 [2024-07-23 01:46:27.386823] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647980 is same with the state(5) to be set 00:25:14.417 [2024-07-23 01:46:27.386835] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647980 is same with the state(5) to be set 00:25:14.417 [2024-07-23 01:46:27.386848] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647980 is same with the state(5) to be set 00:25:14.417 [2024-07-23 01:46:27.386860] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647980 is same with the state(5) to be set 00:25:14.417 [2024-07-23 01:46:27.386873] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647980 is same with the state(5) to be set 00:25:14.417 [2024-07-23 01:46:27.386885] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647980 is same with the state(5) to be set 00:25:14.417 [2024-07-23 01:46:27.386898] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647980 is same with the state(5) to be set 00:25:14.417 [2024-07-23 01:46:27.386910] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647980 is same with the state(5) to be set 00:25:14.417 [2024-07-23 01:46:27.386932] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647980 is same with the state(5) to be set 00:25:14.417 [2024-07-23 01:46:27.386945] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647980 is same with the state(5) to be set 00:25:14.417 [2024-07-23 01:46:27.386957] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647980 is same with the state(5) to be set 00:25:14.417 [2024-07-23 01:46:27.386970] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647980 is same with the state(5) to be set 00:25:14.417 [2024-07-23 01:46:27.386983] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647980 is same with the state(5) to be set 00:25:14.417 [2024-07-23 01:46:27.386996] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647980 is same with the state(5) to be set 00:25:14.417 [2024-07-23 01:46:27.387008] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647980 is same with the state(5) to be set 00:25:14.417 [2024-07-23 01:46:27.387021] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647980 is same with the state(5) to be set 00:25:14.417 [2024-07-23 01:46:27.387033] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647980 is same with the state(5) to be set 00:25:14.417 [2024-07-23 01:46:27.387045] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647980 is same with the state(5) to be set 00:25:14.417 [2024-07-23 01:46:27.387057] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647980 is same with the state(5) to be set 00:25:14.417 [2024-07-23 01:46:27.387077] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647980 is same with the state(5) to be set 00:25:14.417 [2024-07-23 01:46:27.387088] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647980 is same with the state(5) to be set 00:25:14.417 [2024-07-23 01:46:27.387101] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647980 is same with the state(5) to be set 00:25:14.417 [2024-07-23 01:46:27.387113] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647980 is same with the state(5) to be set 00:25:14.417 [2024-07-23 01:46:27.387144] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647980 is same with the state(5) to be set 00:25:14.417 [2024-07-23 01:46:27.387156] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647980 is same with the state(5) to be set 00:25:14.417 [2024-07-23 01:46:27.387168] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647980 is same with the state(5) to be set 00:25:14.417 [2024-07-23 01:46:27.387180] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647980 is same with the state(5) to be set 00:25:14.417 [2024-07-23 01:46:27.387192] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647980 is same with the state(5) to be set 00:25:14.417 [2024-07-23 01:46:27.387203] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647980 is same with the state(5) to be set 00:25:14.417 [2024-07-23 01:46:27.387215] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647980 is same with the state(5) to be set 00:25:14.417 [2024-07-23 01:46:27.387228] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647980 is same with the state(5) to be set 00:25:14.417 [2024-07-23 01:46:27.387239] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647980 is same with the state(5) to be set 00:25:14.417 [2024-07-23 01:46:27.388765] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645480 is same with the state(5) to be set 00:25:14.417 [2024-07-23 01:46:27.388792] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645480 is same with the state(5) to be set 00:25:14.417 [2024-07-23 01:46:27.388806] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645480 is same with the state(5) to be set 00:25:14.417 [2024-07-23 01:46:27.388819] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645480 is same with the state(5) to be set 00:25:14.417 [2024-07-23 01:46:27.388831] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645480 is same with the state(5) to be set 00:25:14.417 [2024-07-23 01:46:27.388844] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645480 is same with the state(5) to be set 00:25:14.417 [2024-07-23 01:46:27.388857] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645480 is same with the state(5) to be set 00:25:14.417 [2024-07-23 01:46:27.388869] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645480 is same with the state(5) to be set 00:25:14.417 [2024-07-23 01:46:27.388881] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645480 is same with the state(5) to be set 00:25:14.417 [2024-07-23 01:46:27.388893] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645480 is same with the state(5) to be set 00:25:14.417 [2024-07-23 01:46:27.388921] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645480 is same with the state(5) to be set 00:25:14.417 [2024-07-23 01:46:27.388934] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645480 is same with the state(5) to be set 00:25:14.417 [2024-07-23 01:46:27.388946] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645480 is same with the state(5) to be set 00:25:14.417 [2024-07-23 01:46:27.388958] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645480 is same with the state(5) to be set 00:25:14.417 [2024-07-23 01:46:27.388970] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645480 is same with the state(5) to be set 00:25:14.417 [2024-07-23 01:46:27.388982] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645480 is same with the state(5) to be set 00:25:14.417 [2024-07-23 01:46:27.388994] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645480 is same with the state(5) to be set 00:25:14.417 [2024-07-23 01:46:27.389007] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645480 is same with the state(5) to be set 00:25:14.417 [2024-07-23 01:46:27.389023] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645480 is same with the state(5) to be set 00:25:14.417 [2024-07-23 01:46:27.389036] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645480 is same with the state(5) to be set 00:25:14.417 [2024-07-23 01:46:27.389048] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645480 is same with the state(5) to be set 00:25:14.417 [2024-07-23 01:46:27.389060] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645480 is same with the state(5) to be set 00:25:14.417 [2024-07-23 01:46:27.389073] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645480 is same with the state(5) to be set 00:25:14.417 [2024-07-23 01:46:27.389085] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645480 is same with the state(5) to be set 00:25:14.417 [2024-07-23 01:46:27.389097] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645480 is same with the state(5) to be set 00:25:14.417 [2024-07-23 01:46:27.389109] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645480 is same with the state(5) to be set 00:25:14.417 [2024-07-23 01:46:27.389121] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645480 is same with the state(5) to be set 00:25:14.417 [2024-07-23 01:46:27.389133] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645480 is same with the state(5) to be set 00:25:14.417 [2024-07-23 01:46:27.389151] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645480 is same with the state(5) to be set 00:25:14.417 [2024-07-23 01:46:27.389163] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645480 is same with the state(5) to be set 00:25:14.417 [2024-07-23 01:46:27.389174] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645480 is same with the state(5) to be set 00:25:14.417 [2024-07-23 01:46:27.389186] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645480 is same with the state(5) to be set 00:25:14.417 [2024-07-23 01:46:27.389199] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645480 is same with the state(5) to be set 00:25:14.417 [2024-07-23 01:46:27.389214] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645480 is same with the state(5) to be set 00:25:14.417 [2024-07-23 01:46:27.389227] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645480 is same with the state(5) to be set 00:25:14.417 [2024-07-23 01:46:27.389239] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645480 is same with the state(5) to be set 00:25:14.417 [2024-07-23 01:46:27.389251] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645480 is same with the state(5) to be set 00:25:14.417 [2024-07-23 01:46:27.389263] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645480 is same with the state(5) to be set 00:25:14.418 [2024-07-23 01:46:27.389275] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645480 is same with the state(5) to be set 00:25:14.418 [2024-07-23 01:46:27.389287] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645480 is same with the state(5) to be set 00:25:14.418 [2024-07-23 01:46:27.389299] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645480 is same with the state(5) to be set 00:25:14.418 [2024-07-23 01:46:27.389327] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645480 is same with the state(5) to be set 00:25:14.418 [2024-07-23 01:46:27.389339] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645480 is same with the state(5) to be set 00:25:14.418 [2024-07-23 01:46:27.389352] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645480 is same with the state(5) to be set 00:25:14.418 [2024-07-23 01:46:27.389709] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:14.418 [2024-07-23 01:46:27.389714] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645480 is same with the state(5) to be set 00:25:14.418 [2024-07-23 01:46:27.389760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.418 [2024-07-23 01:46:27.389758] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645480 is same with the state(5) to be set 00:25:14.418 [2024-07-23 01:46:27.389783] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:14.418 [2024-07-23 01:46:27.389784] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645480 is same with the state(5) to be set 00:25:14.418 [2024-07-23 01:46:27.389798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.418 [2024-07-23 01:46:27.389812] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:14.418 [2024-07-23 01:46:27.389810] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645480 is same with the state(5) to be set 00:25:14.418 [2024-07-23 01:46:27.389827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.418 [2024-07-23 01:46:27.389835] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645480 is same with the state(5) to be set 00:25:14.418 [2024-07-23 01:46:27.389841] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:14.418 [2024-07-23 01:46:27.389857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.418 [2024-07-23 01:46:27.389860] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645480 is same with the state(5) to be set 00:25:14.418 [2024-07-23 01:46:27.389870] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af9e40 is same with the state(5) to be set 00:25:14.418 [2024-07-23 01:46:27.389882] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645480 is same with the state(5) to be set 00:25:14.418 [2024-07-23 01:46:27.389907] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645480 is same with the state(5) to be set 00:25:14.418 [2024-07-23 01:46:27.389929] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645480 is same with the state(5) to be set 00:25:14.418 [2024-07-23 01:46:27.389930] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:14.418 [2024-07-23 01:46:27.389966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.418 [2024-07-23 01:46:27.389966] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645480 is same with the state(5) to be set 00:25:14.418 [2024-07-23 01:46:27.389982] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:14.418 [2024-07-23 01:46:27.389990] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645480 is same with the state(5) to be set 00:25:14.418 [2024-07-23 01:46:27.389995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.418 [2024-07-23 01:46:27.390013] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:14.418 [2024-07-23 01:46:27.390015] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645480 is same with the state(5) to be set 00:25:14.418 [2024-07-23 01:46:27.390032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.418 [2024-07-23 01:46:27.390038] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645480 is same with the state(5) to be set 00:25:14.418 [2024-07-23 01:46:27.390050] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:14.418 [2024-07-23 01:46:27.390059] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645480 is same with the state(5) to be set 00:25:14.418 [2024-07-23 01:46:27.390065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.418 [2024-07-23 01:46:27.390081] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b289b0 is same with the state(5) to be set 00:25:14.418 [2024-07-23 01:46:27.390084] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645480 is same with the state(5) to be set 00:25:14.418 [2024-07-23 01:46:27.390109] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645480 is same with the state(5) to be set 00:25:14.418 [2024-07-23 01:46:27.390122] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645480 is same with the state(5) to be set 00:25:14.418 [2024-07-23 01:46:27.390134] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645480 is same with the state(5) to be set 00:25:14.418 [2024-07-23 01:46:27.390147] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645480 is same with the state(5) to be set 00:25:14.418 [2024-07-23 01:46:27.392821] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:14.418 [2024-07-23 01:46:27.392899] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:14.418 [2024-07-23 01:46:27.392972] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:14.418 [2024-07-23 01:46:27.399326] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645930 is same with the state(5) to be set 00:25:14.418 [2024-07-23 01:46:27.399366] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645930 is
same with the state(5) to be set 00:25:14.418 [2024-07-23 01:46:27.399381] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645930 is same with the state(5) to be set 00:25:14.418 [2024-07-23 01:46:27.399393] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645930 is same with the state(5) to be set 00:25:14.418 [2024-07-23 01:46:27.399406] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645930 is same with the state(5) to be set 00:25:14.418 [2024-07-23 01:46:27.399421] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645930 is same with the state(5) to be set 00:25:14.418 [2024-07-23 01:46:27.399433] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645930 is same with the state(5) to be set 00:25:14.418 [2024-07-23 01:46:27.399445] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645930 is same with the state(5) to be set 00:25:14.418 [2024-07-23 01:46:27.399457] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645930 is same with the state(5) to be set 00:25:14.418 [2024-07-23 01:46:27.399469] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645930 is same with the state(5) to be set 00:25:14.418 [2024-07-23 01:46:27.399482] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645930 is same with the state(5) to be set 00:25:14.418 [2024-07-23 01:46:27.399495] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645930 is same with the state(5) to be set 00:25:14.418 [2024-07-23 01:46:27.399506] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645930 is same with the state(5) to be set 00:25:14.418 [2024-07-23 01:46:27.399518] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645930 is same with the state(5) to be 
set 00:25:14.418 [2024-07-23 01:46:27.399530] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645930 is same with the state(5) to be set 00:25:14.418 [2024-07-23 01:46:27.399559] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645930 is same with the state(5) to be set 00:25:14.418 [2024-07-23 01:46:27.399572] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645930 is same with the state(5) to be set 00:25:14.418 [2024-07-23 01:46:27.399584] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645930 is same with the state(5) to be set 00:25:14.418 [2024-07-23 01:46:27.399595] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645930 is same with the state(5) to be set 00:25:14.418 [2024-07-23 01:46:27.399635] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645930 is same with the state(5) to be set 00:25:14.418 [2024-07-23 01:46:27.399651] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645930 is same with the state(5) to be set 00:25:14.418 [2024-07-23 01:46:27.399664] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645930 is same with the state(5) to be set 00:25:14.418 [2024-07-23 01:46:27.399676] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645930 is same with the state(5) to be set 00:25:14.418 [2024-07-23 01:46:27.399689] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645930 is same with the state(5) to be set 00:25:14.418 [2024-07-23 01:46:27.399701] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645930 is same with the state(5) to be set 00:25:14.418 [2024-07-23 01:46:27.399714] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645930 is same with the state(5) to be set 00:25:14.418 [2024-07-23 
01:46:27.399727] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645930 is same with the state(5) to be set 00:25:14.418 [2024-07-23 01:46:27.399739] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645930 is same with the state(5) to be set 00:25:14.418 [2024-07-23 01:46:27.399751] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645930 is same with the state(5) to be set 00:25:14.418 [2024-07-23 01:46:27.399763] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645930 is same with the state(5) to be set 00:25:14.418 [2024-07-23 01:46:27.399776] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645930 is same with the state(5) to be set 00:25:14.418 [2024-07-23 01:46:27.399788] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645930 is same with the state(5) to be set 00:25:14.419 [2024-07-23 01:46:27.399800] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645930 is same with the state(5) to be set 00:25:14.419 [2024-07-23 01:46:27.399812] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645930 is same with the state(5) to be set 00:25:14.419 [2024-07-23 01:46:27.399824] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645930 is same with the state(5) to be set 00:25:14.419 [2024-07-23 01:46:27.399837] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645930 is same with the state(5) to be set 00:25:14.419 [2024-07-23 01:46:27.399849] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645930 is same with the state(5) to be set 00:25:14.419 [2024-07-23 01:46:27.399861] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645930 is same with the state(5) to be set 00:25:14.419 [2024-07-23 01:46:27.399873] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645930 is same with the state(5) to be set 00:25:14.419 [2024-07-23 01:46:27.399885] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645930 is same with the state(5) to be set 00:25:14.419 [2024-07-23 01:46:27.399898] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645930 is same with the state(5) to be set 00:25:14.419 [2024-07-23 01:46:27.399910] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645930 is same with the state(5) to be set 00:25:14.419 [2024-07-23 01:46:27.399937] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645930 is same with the state(5) to be set 00:25:14.419 [2024-07-23 01:46:27.399950] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645930 is same with the state(5) to be set 00:25:14.419 [2024-07-23 01:46:27.399962] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645930 is same with the state(5) to be set 00:25:14.419 [2024-07-23 01:46:27.399975] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645930 is same with the state(5) to be set 00:25:14.419 [2024-07-23 01:46:27.399989] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645930 is same with the state(5) to be set 00:25:14.419 [2024-07-23 01:46:27.400001] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645930 is same with the state(5) to be set 00:25:14.419 [2024-07-23 01:46:27.400028] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645930 is same with the state(5) to be set 00:25:14.419 [2024-07-23 01:46:27.400041] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645930 is same with the state(5) to be set 00:25:14.419 [2024-07-23 01:46:27.400056] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645930 is same with the state(5) to be set 00:25:14.419 [2024-07-23 01:46:27.400069] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645930 is same with the state(5) to be set 00:25:14.419 [2024-07-23 01:46:27.400089] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645930 is same with the state(5) to be set 00:25:14.419 [2024-07-23 01:46:27.400101] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645930 is same with the state(5) to be set 00:25:14.419 [2024-07-23 01:46:27.400118] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645930 is same with the state(5) to be set 00:25:14.419 [2024-07-23 01:46:27.400129] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645930 is same with the state(5) to be set 00:25:14.419 [2024-07-23 01:46:27.400142] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645930 is same with the state(5) to be set 00:25:14.419 [2024-07-23 01:46:27.400155] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645930 is same with the state(5) to be set 00:25:14.419 [2024-07-23 01:46:27.400168] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645930 is same with the state(5) to be set 00:25:14.419 [2024-07-23 01:46:27.400185] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645930 is same with the state(5) to be set 00:25:14.419 [2024-07-23 01:46:27.400197] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645930 is same with the state(5) to be set 00:25:14.419 [2024-07-23 01:46:27.400209] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645930 is same with the state(5) to be set 00:25:14.419 [2024-07-23 01:46:27.400220] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645930 is same with the state(5) to be set 00:25:14.419 [2024-07-23 01:46:27.401629] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645dc0 is same with the state(5) to be set 00:25:14.419 [2024-07-23 01:46:27.401675] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645dc0 is same with the state(5) to be set 00:25:14.419 [2024-07-23 01:46:27.401691] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645dc0 is same with the state(5) to be set 00:25:14.419 [2024-07-23 01:46:27.401704] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645dc0 is same with the state(5) to be set 00:25:14.419 [2024-07-23 01:46:27.401716] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645dc0 is same with the state(5) to be set 00:25:14.419 [2024-07-23 01:46:27.401729] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645dc0 is same with the state(5) to be set 00:25:14.419 [2024-07-23 01:46:27.401742] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645dc0 is same with the state(5) to be set 00:25:14.419 [2024-07-23 01:46:27.401760] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645dc0 is same with the state(5) to be set 00:25:14.419 [2024-07-23 01:46:27.401774] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645dc0 is same with the state(5) to be set 00:25:14.419 [2024-07-23 01:46:27.401786] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645dc0 is same with the state(5) to be set 00:25:14.419 [2024-07-23 01:46:27.401799] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645dc0 is same with the state(5) to be set 00:25:14.419 [2024-07-23 01:46:27.401811] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645dc0 is same with the state(5) to be set 00:25:14.419 [2024-07-23 01:46:27.401824] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645dc0 is same with the state(5) to be set 00:25:14.419 [2024-07-23 01:46:27.401836] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645dc0 is same with the state(5) to be set 00:25:14.419 [2024-07-23 01:46:27.401848] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645dc0 is same with the state(5) to be set 00:25:14.419 [2024-07-23 01:46:27.401859] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645dc0 is same with the state(5) to be set 00:25:14.419 [2024-07-23 01:46:27.401872] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645dc0 is same with the state(5) to be set 00:25:14.419 [2024-07-23 01:46:27.401884] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645dc0 is same with the state(5) to be set 00:25:14.419 [2024-07-23 01:46:27.401896] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645dc0 is same with the state(5) to be set 00:25:14.419 [2024-07-23 01:46:27.401909] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645dc0 is same with the state(5) to be set 00:25:14.419 [2024-07-23 01:46:27.401921] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645dc0 is same with the state(5) to be set 00:25:14.419 [2024-07-23 01:46:27.401933] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645dc0 is same with the state(5) to be set 00:25:14.419 [2024-07-23 01:46:27.401945] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645dc0 is same with the state(5) to be set 00:25:14.419 [2024-07-23 01:46:27.401957] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645dc0 is same with the state(5) to be set 00:25:14.419 [2024-07-23 01:46:27.401978] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645dc0 is same with the state(5) to be set 00:25:14.419 [2024-07-23 01:46:27.401990] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645dc0 is same with the state(5) to be set 00:25:14.419 [2024-07-23 01:46:27.402002] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645dc0 is same with the state(5) to be set 00:25:14.419 [2024-07-23 01:46:27.402014] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645dc0 is same with the state(5) to be set 00:25:14.419 [2024-07-23 01:46:27.402027] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645dc0 is same with the state(5) to be set 00:25:14.419 [2024-07-23 01:46:27.402040] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645dc0 is same with the state(5) to be set 00:25:14.419 [2024-07-23 01:46:27.402052] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645dc0 is same with the state(5) to be set 00:25:14.419 [2024-07-23 01:46:27.402064] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645dc0 is same with the state(5) to be set 00:25:14.419 [2024-07-23 01:46:27.402077] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645dc0 is same with the state(5) to be set 00:25:14.419 [2024-07-23 01:46:27.402089] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645dc0 is same with the state(5) to be set 00:25:14.419 [2024-07-23 01:46:27.402112] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645dc0 is same with the state(5) to be set 00:25:14.419 [2024-07-23 01:46:27.402125] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645dc0 is same with the state(5) to be set 00:25:14.419 [2024-07-23 01:46:27.402152] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645dc0 is same with the state(5) to be set 00:25:14.419 [2024-07-23 01:46:27.402173] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645dc0 is same with the state(5) to be set 00:25:14.419 [2024-07-23 01:46:27.402185] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645dc0 is same with the state(5) to be set 00:25:14.419 [2024-07-23 01:46:27.402198] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645dc0 is same with the state(5) to be set 00:25:14.419 [2024-07-23 01:46:27.402210] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645dc0 is same with the state(5) to be set 00:25:14.419 [2024-07-23 01:46:27.402222] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645dc0 is same with the state(5) to be set 00:25:14.419 [2024-07-23 01:46:27.402234] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645dc0 is same with the state(5) to be set 00:25:14.419 [2024-07-23 01:46:27.402246] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645dc0 is same with the state(5) to be set 00:25:14.419 [2024-07-23 01:46:27.402258] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645dc0 is same with the state(5) to be set 00:25:14.419 [2024-07-23 01:46:27.402270] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645dc0 is same with the state(5) to be set 00:25:14.419 [2024-07-23 01:46:27.402282] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645dc0 is same with the state(5) to be set 00:25:14.419 [2024-07-23 01:46:27.402294] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645dc0 is same with the state(5) to be set 00:25:14.420 [2024-07-23 01:46:27.402306] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645dc0 is same with the state(5) to be set 00:25:14.420 [2024-07-23 01:46:27.402318] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645dc0 is same with the state(5) to be set 00:25:14.420 [2024-07-23 01:46:27.402331] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645dc0 is same with the state(5) to be set 00:25:14.420 [2024-07-23 01:46:27.402343] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645dc0 is same with the state(5) to be set 00:25:14.420 [2024-07-23 01:46:27.402355] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645dc0 is same with the state(5) to be set 00:25:14.420 [2024-07-23 01:46:27.402366] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645dc0 is same with the state(5) to be set 00:25:14.420 [2024-07-23 01:46:27.402378] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645dc0 is same with the state(5) to be set 00:25:14.420 [2024-07-23 01:46:27.402390] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645dc0 is same with the state(5) to be set 00:25:14.420 [2024-07-23 01:46:27.402402] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645dc0 is same with the state(5) to be set 00:25:14.420 [2024-07-23 01:46:27.402413] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645dc0 is same with the state(5) to be set 00:25:14.420 [2024-07-23 01:46:27.402425] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645dc0 is same with the state(5) to be set 00:25:14.420 [2024-07-23 01:46:27.402437] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645dc0 is same with the state(5) to be set 00:25:14.420 [2024-07-23 01:46:27.402449] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645dc0 is same with the state(5) to be set 00:25:14.420 [2024-07-23 01:46:27.402464] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645dc0 is same with the state(5) to be set 00:25:14.420 [2024-07-23 01:46:27.402476] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645dc0 is same with the state(5) to be set 00:25:14.420 [2024-07-23 01:46:27.403129] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:14.420 [2024-07-23 01:46:27.403170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.420 [2024-07-23 01:46:27.403188] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:14.420 [2024-07-23 01:46:27.403201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.420 [2024-07-23 01:46:27.403225] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:14.420 [2024-07-23 01:46:27.403239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.420 [2024-07-23 01:46:27.403253] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:14.420 [2024-07-23 01:46:27.403266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:25:14.420 [2024-07-23 01:46:27.403279] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b260a0 is same with the state(5) to be set 00:25:14.420 [2024-07-23 01:46:27.403328] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:14.420 [2024-07-23 01:46:27.403348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.420 [2024-07-23 01:46:27.403363] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:14.420 [2024-07-23 01:46:27.403377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.420 [2024-07-23 01:46:27.403391] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:14.420 [2024-07-23 01:46:27.403404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.420 [2024-07-23 01:46:27.403419] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:14.420 [2024-07-23 01:46:27.403432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.420 [2024-07-23 01:46:27.403431] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1646270 is same with the state(5) to be set 00:25:14.420 [2024-07-23 01:46:27.403445] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afcba0 is same with the state(5) to be set 00:25:14.420 [2024-07-23 01:46:27.403461] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x1646270 is same with the state(5) to be set 00:25:14.420 [2024-07-23 01:46:27.403476] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1646270 is same with the state(5) to be set 00:25:14.420 [2024-07-23 01:46:27.403490] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1646270 is same with the state(5) to be set 00:25:14.420 [2024-07-23 01:46:27.403507] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1646270 is same with the state(5) to be set 00:25:14.420 [2024-07-23 01:46:27.403521] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1646270 is same with the state(5) to be set 00:25:14.420 [2024-07-23 01:46:27.403517] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:14.420 [2024-07-23 01:46:27.403543] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1646270 is same with the state(5) to be set 00:25:14.420 [2024-07-23 01:46:27.403545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.420 [2024-07-23 01:46:27.403557] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1646270 is same with the state(5) to be set 00:25:14.420 [2024-07-23 01:46:27.403561] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:14.420 [2024-07-23 01:46:27.403570] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1646270 is same with the state(5) to be set 00:25:14.420 [2024-07-23 01:46:27.403575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.420 [2024-07-23 01:46:27.403583] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*:
The recv state of tqpair=0x1646270 is same with the state(5) to be set 00:25:14.420 [2024-07-23 01:46:27.403589] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:14.420 [2024-07-23 01:46:27.403596] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1646270 is same with the state(5) to be set 00:25:14.420 [2024-07-23 01:46:27.403621] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1646270 is same with the state(5) to be set 00:25:14.420 [2024-07-23 01:46:27.403621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.420 [2024-07-23 01:46:27.403639] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1646270 is same with the state(5) to be set 00:25:14.420 [2024-07-23 01:46:27.403641] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:14.420 [2024-07-23 01:46:27.403653] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1646270 is same with the state(5) to be set 00:25:14.420 [2024-07-23 01:46:27.403655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.420 [2024-07-23 01:46:27.403666] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1646270 is same with the state(5) to be set 00:25:14.420 [2024-07-23 01:46:27.403668] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b07530 is same with the state(5) to be set 00:25:14.420 [2024-07-23 01:46:27.403679] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1646270 is same with the state(5) to be set 00:25:14.420 [2024-07-23 01:46:27.403692]
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1646270 is same with the state(5) to be set 00:25:14.420 [2024-07-23 01:46:27.403701] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1af9e40 (9): Bad file descriptor 00:25:14.420 [2024-07-23 01:46:27.403704] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1646270 is same with the state(5) to be set 00:25:14.420 [2024-07-23 01:46:27.403720] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1646270 is same with the state(5) to be set 00:25:14.420 [2024-07-23 01:46:27.403733] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1646270 is same with the state(5) to be set 00:25:14.421 [2024-07-23 01:46:27.403745] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1646270 is same with the state(5) to be set 00:25:14.421 [2024-07-23 01:46:27.403745] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b289b0 (9): Bad file descriptor 00:25:14.421 [2024-07-23 01:46:27.403758] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1646270 is same with the state(5) to be set 00:25:14.421 [2024-07-23 01:46:27.403771] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1646270 is same with the state(5) to be set 00:25:14.421 [2024-07-23 01:46:27.403783] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1646270 is same with the state(5) to be set 00:25:14.421 [2024-07-23 01:46:27.403795] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1646270 is same with the state(5) to be set 00:25:14.421 [2024-07-23 01:46:27.403807] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1646270 is same with the state(5) to be set 00:25:14.421 [2024-07-23 01:46:27.403820]
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1646270 is same with the state(5) to be set 00:25:14.421 [2024-07-23 01:46:27.406198] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:14.421 [2024-07-23 01:46:27.406271] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:25:14.421 [2024-07-23 01:46:27.407810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.421 [2024-07-23 01:46:27.407838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.421 [2024-07-23 01:46:27.407867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.421 [2024-07-23 01:46:27.407884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.421 [2024-07-23 01:46:27.407902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.421 [2024-07-23 01:46:27.407916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.421 [2024-07-23 01:46:27.407943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.421 [2024-07-23 01:46:27.407957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.421 [2024-07-23 01:46:27.407973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.421 [2024-07-23 01:46:27.407988] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.421 [2024-07-23 01:46:27.408004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.421 [2024-07-23 01:46:27.408024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.421 [2024-07-23 01:46:27.408040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.421 [2024-07-23 01:46:27.408055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.421 [2024-07-23 01:46:27.408070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.421 [2024-07-23 01:46:27.408085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.421 [2024-07-23 01:46:27.408101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.421 [2024-07-23 01:46:27.408116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.421 [2024-07-23 01:46:27.408131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.421 [2024-07-23 01:46:27.408145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.421 [2024-07-23 01:46:27.408162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:21 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.421 [2024-07-23 01:46:27.408187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.421 [2024-07-23 01:46:27.408204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.421 [2024-07-23 01:46:27.408218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.421 [2024-07-23 01:46:27.408235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.421 [2024-07-23 01:46:27.408261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.421 [2024-07-23 01:46:27.408277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.421 [2024-07-23 01:46:27.408291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.421 [2024-07-23 01:46:27.408308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.421 [2024-07-23 01:46:27.408322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.421 [2024-07-23 01:46:27.408338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.421 [2024-07-23 01:46:27.408352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:25:14.421 [2024-07-23 01:46:27.408369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.421 [2024-07-23 01:46:27.408383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.422 [2024-07-23 01:46:27.408399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.422 [2024-07-23 01:46:27.408413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.422 [2024-07-23 01:46:27.408435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.422 [2024-07-23 01:46:27.408451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.422 [2024-07-23 01:46:27.408467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.422 [2024-07-23 01:46:27.408482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.422 [2024-07-23 01:46:27.408498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.422 [2024-07-23 01:46:27.408511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.422 [2024-07-23 01:46:27.408527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.422 [2024-07-23 
01:46:27.408542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.422 [2024-07-23 01:46:27.408557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.422 [2024-07-23 01:46:27.408571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.422 [2024-07-23 01:46:27.408587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.422 [2024-07-23 01:46:27.408620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.422 [2024-07-23 01:46:27.408639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.422 [2024-07-23 01:46:27.408654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.422 [2024-07-23 01:46:27.408680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.422 [2024-07-23 01:46:27.408694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.422 [2024-07-23 01:46:27.408710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.422 [2024-07-23 01:46:27.408724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.422 [2024-07-23 01:46:27.408740] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:37 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.422 [2024-07-23 01:46:27.408754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.422 [2024-07-23 01:46:27.408769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.422 [2024-07-23 01:46:27.408784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.422 [2024-07-23 01:46:27.408800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.422 [2024-07-23 01:46:27.408814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.422 [2024-07-23 01:46:27.408830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.422 [2024-07-23 01:46:27.408848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.422 [2024-07-23 01:46:27.408864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.422 [2024-07-23 01:46:27.408878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.422 [2024-07-23 01:46:27.408894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.422 [2024-07-23 01:46:27.408908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.422 [2024-07-23 01:46:27.408923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.422 [2024-07-23 01:46:27.408937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.422 [2024-07-23 01:46:27.408953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.422 [2024-07-23 01:46:27.408967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.422 [2024-07-23 01:46:27.408986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.422 [2024-07-23 01:46:27.409000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.422 [2024-07-23 01:46:27.409015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.422 [2024-07-23 01:46:27.409029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.422 [2024-07-23 01:46:27.409051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.422 [2024-07-23 01:46:27.409065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.422 [2024-07-23 01:46:27.409081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.422 
[2024-07-23 01:46:27.409095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.422 [2024-07-23 01:46:27.409116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.422 [2024-07-23 01:46:27.409130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.422 [2024-07-23 01:46:27.409146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.422 [2024-07-23 01:46:27.409160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.422 [2024-07-23 01:46:27.409176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.422 [2024-07-23 01:46:27.409190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.422 [2024-07-23 01:46:27.409205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.422 [2024-07-23 01:46:27.409219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.422 [2024-07-23 01:46:27.409238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.422 [2024-07-23 01:46:27.409258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.422 [2024-07-23 01:46:27.409274] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.422 [2024-07-23 01:46:27.409288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.422 [2024-07-23 01:46:27.409304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.422 [2024-07-23 01:46:27.409318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.422 [2024-07-23 01:46:27.409334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.422 [2024-07-23 01:46:27.409348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.422 [2024-07-23 01:46:27.409364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.422 [2024-07-23 01:46:27.409378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.422 [2024-07-23 01:46:27.409395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.422 [2024-07-23 01:46:27.409409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.422 [2024-07-23 01:46:27.409424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.422 [2024-07-23 01:46:27.409438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.422 [2024-07-23 01:46:27.409454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.422 [2024-07-23 01:46:27.409468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.422 [2024-07-23 01:46:27.409484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.422 [2024-07-23 01:46:27.409498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.422 [2024-07-23 01:46:27.409513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.422 [2024-07-23 01:46:27.409527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.422 [2024-07-23 01:46:27.409543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.422 [2024-07-23 01:46:27.409557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.422 [2024-07-23 01:46:27.409574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.423 [2024-07-23 01:46:27.409588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.423 [2024-07-23 01:46:27.409603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.423 [2024-07-23 01:46:27.409628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.423 [2024-07-23 01:46:27.409645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.423 [2024-07-23 01:46:27.409660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.423 [2024-07-23 01:46:27.409678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.423 [2024-07-23 01:46:27.409691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.423 [2024-07-23 01:46:27.409708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.423 [2024-07-23 01:46:27.409721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.423 [2024-07-23 01:46:27.409737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.423 [2024-07-23 01:46:27.409756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.423 [2024-07-23 01:46:27.409772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.423 [2024-07-23 01:46:27.409786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.423 [2024-07-23 01:46:27.409801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.423 [2024-07-23 01:46:27.409816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.423 [2024-07-23 01:46:27.409831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.423 [2024-07-23 01:46:27.409845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.423 [2024-07-23 01:46:27.409861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.423 [2024-07-23 01:46:27.409875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.423 [2024-07-23 01:46:27.409890] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb5ae0 is same with the state(5) to be set
00:25:14.423 [2024-07-23 01:46:27.409977] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1cb5ae0 was disconnected and freed. reset controller.
00:25:14.423 [2024-07-23 01:46:27.410481] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1646720 is same with the state(5) to be set
[previous message repeated 57 times, last at 2024-07-23 01:46:27.411297]
00:25:14.423 [2024-07-23 01:46:27.411938] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:25:14.423 [2024-07-23 01:46:27.412013] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bc78e0 (9): Bad file descriptor
00:25:14.424 [2024-07-23 01:46:27.413262] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1646bb0 is same with the state(5) to be set
[previous message repeated 62 times, last at 2024-07-23 01:46:27.414171]
00:25:14.424 [2024-07-23 01:46:27.413527] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b260a0 (9): Bad file descriptor
00:25:14.424 [2024-07-23 01:46:27.413569] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afcba0 (9): Bad file descriptor
00:25:14.424 [2024-07-23 01:46:27.413636] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:25:14.424 [2024-07-23 01:46:27.413660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.424 [2024-07-23 01:46:27.413676] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:25:14.424 [2024-07-23 01:46:27.413690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.424 [2024-07-23 01:46:27.413705] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:25:14.424 [2024-07-23 01:46:27.413719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.424 [2024-07-23 01:46:27.413733] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:25:14.424 [2024-07-23 01:46:27.413748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.424 [2024-07-23 01:46:27.413762] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba9dc0 is same with the state(5) to be set
00:25:14.424 [2024-07-23 01:46:27.413804] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b07530 (9): Bad file descriptor
00:25:14.424 [2024-07-23 01:46:27.413859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:25:14.424 [2024-07-23 01:46:27.413884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.424 [2024-07-23 01:46:27.413900] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:25:14.424 [2024-07-23 01:46:27.413924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.424 [2024-07-23 01:46:27.413938] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:25:14.424 [2024-07-23 01:46:27.413953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.424 [2024-07-23 01:46:27.413968] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:25:14.424 [2024-07-23 01:46:27.413981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.424 [2024-07-23 01:46:27.413996] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b6d0 is same with the state(5) to be set
00:25:14.425 [2024-07-23 01:46:27.414381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:34816 len:128
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.425 [2024-07-23 01:46:27.414405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.425 [2024-07-23 01:46:27.414426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.425 [2024-07-23 01:46:27.414442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.425 [2024-07-23 01:46:27.414458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.425 [2024-07-23 01:46:27.414473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.425 [2024-07-23 01:46:27.414489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.425 [2024-07-23 01:46:27.414503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.425 [2024-07-23 01:46:27.414519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.425 [2024-07-23 01:46:27.414533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.425 [2024-07-23 01:46:27.414556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.425 [2024-07-23 01:46:27.414572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.425 [2024-07-23 
01:46:27.414588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.425 [2024-07-23 01:46:27.414610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.425 [2024-07-23 01:46:27.414637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.425 [2024-07-23 01:46:27.414652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.425 [2024-07-23 01:46:27.414668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.425 [2024-07-23 01:46:27.414683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.425 [2024-07-23 01:46:27.414699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.425 [2024-07-23 01:46:27.414713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.425 [2024-07-23 01:46:27.414730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.425 [2024-07-23 01:46:27.414744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.425 [2024-07-23 01:46:27.414765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.425 [2024-07-23 01:46:27.414780] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.425 [2024-07-23 01:46:27.414797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.425 [2024-07-23 01:46:27.414812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.425 [2024-07-23 01:46:27.414828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.425 [2024-07-23 01:46:27.414842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.425 [2024-07-23 01:46:27.414857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.425 [2024-07-23 01:46:27.414871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.425 [2024-07-23 01:46:27.414886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.425 [2024-07-23 01:46:27.414901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.425 [2024-07-23 01:46:27.414926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.425 [2024-07-23 01:46:27.414939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.425 [2024-07-23 01:46:27.414955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 
nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.425 [2024-07-23 01:46:27.414969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.425 [2024-07-23 01:46:27.414985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.425 [2024-07-23 01:46:27.414999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.425 [2024-07-23 01:46:27.415014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.425 [2024-07-23 01:46:27.415028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.425 [2024-07-23 01:46:27.415044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.425 [2024-07-23 01:46:27.415057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.425 [2024-07-23 01:46:27.415075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.425 [2024-07-23 01:46:27.415089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.425 [2024-07-23 01:46:27.415106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.425 [2024-07-23 01:46:27.415120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:25:14.425 [2024-07-23 01:46:27.415140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.425 [2024-07-23 01:46:27.415158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.425 [2024-07-23 01:46:27.415151] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647040 is same with the state(5) to be set 00:25:14.425 [2024-07-23 01:46:27.415175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.425 [2024-07-23 01:46:27.415182] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647040 is same with the state(5) to be set 00:25:14.425 [2024-07-23 01:46:27.415190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.425 [2024-07-23 01:46:27.415197] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647040 is same with the state(5) to be set 00:25:14.425 [2024-07-23 01:46:27.415206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.425 [2024-07-23 01:46:27.415211] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647040 is same with the state(5) to be set 00:25:14.425 [2024-07-23 01:46:27.415221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.425 [2024-07-23 01:46:27.415224] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647040 is same with the state(5) to be set 00:25:14.425 [2024-07-23 01:46:27.415237] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1647040 is same with the state(5) to be set 00:25:14.425 [2024-07-23 01:46:27.415238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.425 [2024-07-23 01:46:27.415250] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647040 is same with the state(5) to be set 00:25:14.425 [2024-07-23 01:46:27.415253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.425 [2024-07-23 01:46:27.415263] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647040 is same with the state(5) to be set 00:25:14.426 [2024-07-23 01:46:27.415270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.426 [2024-07-23 01:46:27.415277] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647040 is same with the state(5) to be set 00:25:14.426 [2024-07-23 01:46:27.415284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.426 [2024-07-23 01:46:27.415289] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647040 is same with the state(5) to be set 00:25:14.426 [2024-07-23 01:46:27.415301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.426 [2024-07-23 01:46:27.415303] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647040 is same with the state(5) to be set 00:25:14.426 [2024-07-23 01:46:27.415317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.426 [2024-07-23 01:46:27.415317] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647040 is same with the state(5) to be set 00:25:14.426
[2024-07-23 01:46:27.415333] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647040 is same with the state(5) to be set 00:25:14.426 [2024-07-23 01:46:27.415335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.426 [2024-07-23 01:46:27.415345] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647040 is same with the state(5) to be set 00:25:14.426 [2024-07-23 01:46:27.415349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.426 [2024-07-23 01:46:27.415363] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647040 is same with the state(5) to be set 00:25:14.426 [2024-07-23 01:46:27.415368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.426 [2024-07-23 01:46:27.415377] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647040 is same with the state(5) to be set 00:25:14.426 [2024-07-23 01:46:27.415383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.426 [2024-07-23 01:46:27.415390] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647040 is same with the state(5) to be set 00:25:14.426 [2024-07-23 01:46:27.415399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.426 [2024-07-23 01:46:27.415403] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647040 is same with the state(5) to be set 00:25:14.426 [2024-07-23
01:46:27.415414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.426 [2024-07-23 01:46:27.415416] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647040 is same with the state(5) to be set 00:25:14.426 [2024-07-23 01:46:27.415429] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647040 is same with the state(5) to be set 00:25:14.426 [2024-07-23 01:46:27.415431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.426 [2024-07-23 01:46:27.415442] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647040 is same with the state(5) to be set 00:25:14.426 [2024-07-23 01:46:27.415445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.426 [2024-07-23 01:46:27.415455] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647040 is same with the state(5) to be set 00:25:14.426 [2024-07-23 01:46:27.415462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.426 [2024-07-23 01:46:27.415467] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647040 is same with the state(5) to be set 00:25:14.426 [2024-07-23 01:46:27.415476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.426 [2024-07-23 01:46:27.415480] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647040 is same with the state(5) to be set 00:25:14.426 [2024-07-23 01:46:27.415493] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647040 is same with the state(5) to be set 00:25:14.426 [2024-07-23
01:46:27.415493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.426 [2024-07-23 01:46:27.415508] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647040 is same with the state(5) to be set 00:25:14.426 [2024-07-23 01:46:27.415510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.426 [2024-07-23 01:46:27.415523] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647040 is same with the state(5) to be set 00:25:14.426 [2024-07-23 01:46:27.415530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.426 [2024-07-23 01:46:27.415536] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647040 is same with the state(5) to be set 00:25:14.426 [2024-07-23 01:46:27.415545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.426 [2024-07-23 01:46:27.415549] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647040 is same with the state(5) to be set 00:25:14.426 [2024-07-23 01:46:27.415563] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647040 is same with the state(5) to be set 00:25:14.426 [2024-07-23 01:46:27.415567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.426 [2024-07-23 01:46:27.415576] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647040 is same with the state(5) to be set 00:25:14.426 [2024-07-23 01:46:27.415583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.426 [2024-07-23 01:46:27.415588] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647040 is same with the state(5) to be set 00:25:14.426 [2024-07-23 01:46:27.415600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.426 [2024-07-23 01:46:27.415602] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647040 is same with the state(5) to be set 00:25:14.426 [2024-07-23 01:46:27.415623] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647040 is same with the state(5) to be set 00:25:14.426 [2024-07-23 01:46:27.415623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.426 [2024-07-23 01:46:27.415640] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647040 is same with the state(5) to be set 00:25:14.426 [2024-07-23 01:46:27.415645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.426 [2024-07-23 01:46:27.415652] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647040 is same with the state(5) to be set 00:25:14.426 [2024-07-23 01:46:27.415661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.426 [2024-07-23 01:46:27.415665] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647040 is same with the state(5) to be set 00:25:14.426 [2024-07-23 01:46:27.415685] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647040 is same with the state(5) to be set 00:25:14.426 [2024-07-23 01:46:27.415689] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.426 [2024-07-23 01:46:27.415698] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647040 is same with the state(5) to be set 00:25:14.426 [2024-07-23 01:46:27.415704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.426 [2024-07-23 01:46:27.415710] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647040 is same with the state(5) to be set 00:25:14.426 [2024-07-23 01:46:27.415721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.426 [2024-07-23 01:46:27.415723] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647040 is same with the state(5) to be set 00:25:14.426 [2024-07-23 01:46:27.415737] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647040 is same with the state(5) to be set 00:25:14.426 [2024-07-23 01:46:27.415737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.426 [2024-07-23 01:46:27.415751] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647040 is same with the state(5) to be set 00:25:14.426 [2024-07-23 01:46:27.415756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.426 [2024-07-23 01:46:27.415764] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647040 is same with the state(5) to be set 00:25:14.426 [2024-07-23 01:46:27.415774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.426 [2024-07-23 01:46:27.415777] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647040 is same with the state(5) to be set 00:25:14.426 [2024-07-23 01:46:27.415789] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647040 is same with the state(5) to be set 00:25:14.426 [2024-07-23 01:46:27.415791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.426 [2024-07-23 01:46:27.415801] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647040 is same with the state(5) to be set 00:25:14.426 [2024-07-23 01:46:27.415806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.426 [2024-07-23 01:46:27.415813] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647040 is same with the state(5) to be set 00:25:14.426 [2024-07-23 01:46:27.415822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.426 [2024-07-23 01:46:27.415826] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647040 is same with the state(5) to be set 00:25:14.426 [2024-07-23 01:46:27.415837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.426 [2024-07-23 01:46:27.415838] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647040 is same with the state(5) to be set 00:25:14.426 [2024-07-23 01:46:27.415852] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647040 is same with the state(5) to be set 00:25:14.426 [2024-07-23 01:46:27.415855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:37760 len:128
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.427 [2024-07-23 01:46:27.415864] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647040 is same with the state(5) to be set 00:25:14.427 [2024-07-23 01:46:27.415870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.427 [2024-07-23 01:46:27.415876] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647040 is same with the state(5) to be set 00:25:14.427 [2024-07-23 01:46:27.415887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.427 [2024-07-23 01:46:27.415888] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647040 is same with the state(5) to be set 00:25:14.427 [2024-07-23 01:46:27.415903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.427 [2024-07-23 01:46:27.415903] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647040 is same with the state(5) to be set 00:25:14.427 [2024-07-23 01:46:27.415930] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647040 is same with the state(5) to be set 00:25:14.427 [2024-07-23 01:46:27.415932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.427 [2024-07-23 01:46:27.415942] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647040 is same with the state(5) to be set 00:25:14.427 [2024-07-23 01:46:27.415946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.427 [2024-07-23 01:46:27.415955] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The
recv state of tqpair=0x1647040 is same with the state(5) to be set 00:25:14.427 [2024-07-23 01:46:27.415963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.427 [2024-07-23 01:46:27.415971] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647040 is same with the state(5) to be set 00:25:14.427 [2024-07-23 01:46:27.415978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.427 [2024-07-23 01:46:27.415994] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647040 is same with the state(5) to be set 00:25:14.427 [2024-07-23 01:46:27.415997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.427 [2024-07-23 01:46:27.416006] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647040 is same with the state(5) to be set 00:25:14.427 [2024-07-23 01:46:27.416012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.427 [2024-07-23 01:46:27.416019] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647040 is same with the state(5) to be set 00:25:14.427 [2024-07-23 01:46:27.416029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.427 [2024-07-23 01:46:27.416031] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647040 is same with the state(5) to be set 00:25:14.427 [2024-07-23 01:46:27.416046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.427 [2024-07-23 01:46:27.416063]
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.427 [2024-07-23 01:46:27.416077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.427 [2024-07-23 01:46:27.416093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.427 [2024-07-23 01:46:27.416107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.427 [2024-07-23 01:46:27.416123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.427 [2024-07-23 01:46:27.416138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.427 [2024-07-23 01:46:27.416154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.427 [2024-07-23 01:46:27.416168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.427 [2024-07-23 01:46:27.416185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.427 [2024-07-23 01:46:27.416199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.427 [2024-07-23 01:46:27.416216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.427 [2024-07-23 01:46:27.416230] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.427 [2024-07-23 01:46:27.416246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.427 [2024-07-23 01:46:27.416261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.427 [2024-07-23 01:46:27.416280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.427 [2024-07-23 01:46:27.416296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.427 [2024-07-23 01:46:27.416312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.427 [2024-07-23 01:46:27.416326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.427 [2024-07-23 01:46:27.416343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.427 [2024-07-23 01:46:27.416357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.427 [2024-07-23 01:46:27.416373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.427 [2024-07-23 01:46:27.416388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.427 [2024-07-23 01:46:27.416404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:39936 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.427 [2024-07-23 01:46:27.416419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.427 [2024-07-23 01:46:27.416434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.427 [2024-07-23 01:46:27.416448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.427 [2024-07-23 01:46:27.416464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.427 [2024-07-23 01:46:27.416479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.427 [2024-07-23 01:46:27.416493] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c256a0 is same with the state(5) to be set 00:25:14.427 [2024-07-23 01:46:27.416789] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16474d0 is same with the state(5) to be set 00:25:14.427 [2024-07-23 01:46:27.416819] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16474d0 is same with the state(5) to be set 00:25:14.427 [2024-07-23 01:46:27.416834] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16474d0 is same with the state(5) to be set 00:25:14.427 [2024-07-23 01:46:27.416847] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16474d0 is same with the state(5) to be set 00:25:14.427 [2024-07-23 01:46:27.416859] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16474d0 is same with the state(5) to be set 00:25:14.427 [2024-07-23 01:46:27.416871] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x16474d0 is same with the state(5) to be set 00:25:14.427 [2024-07-23 01:46:27.416883] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16474d0 is same with the state(5) to be set 00:25:14.427 [2024-07-23 01:46:27.416896] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16474d0 is same with the state(5) to be set 00:25:14.427 [2024-07-23 01:46:27.416911] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16474d0 is same with the state(5) to be set 00:25:14.427 [2024-07-23 01:46:27.416923] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16474d0 is same with the state(5) to be set 00:25:14.427 [2024-07-23 01:46:27.416935] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16474d0 is same with the state(5) to be set 00:25:14.427 [2024-07-23 01:46:27.416953] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16474d0 is same with the state(5) to be set 00:25:14.427 [2024-07-23 01:46:27.416976] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16474d0 is same with the state(5) to be set 00:25:14.427 [2024-07-23 01:46:27.416989] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16474d0 is same with the state(5) to be set 00:25:14.427 [2024-07-23 01:46:27.417001] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16474d0 is same with the state(5) to be set 00:25:14.427 [2024-07-23 01:46:27.417013] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16474d0 is same with the state(5) to be set 00:25:14.427 [2024-07-23 01:46:27.417025] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16474d0 is same with the state(5) to be set 00:25:14.427 [2024-07-23 01:46:27.417038] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16474d0 
is same with the state(5) to be set 00:25:14.427 [2024-07-23 01:46:27.417050] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16474d0 is same with the state(5) to be set 00:25:14.427 [2024-07-23 01:46:27.417063] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16474d0 is same with the state(5) to be set 00:25:14.427 [2024-07-23 01:46:27.417075] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16474d0 is same with the state(5) to be set 00:25:14.427 [2024-07-23 01:46:27.417087] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16474d0 is same with the state(5) to be set 00:25:14.427 [2024-07-23 01:46:27.417099] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16474d0 is same with the state(5) to be set 00:25:14.427 [2024-07-23 01:46:27.417112] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16474d0 is same with the state(5) to be set 00:25:14.427 [2024-07-23 01:46:27.417130] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16474d0 is same with the state(5) to be set 00:25:14.427 [2024-07-23 01:46:27.417143] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16474d0 is same with the state(5) to be set 00:25:14.427 [2024-07-23 01:46:27.417155] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16474d0 is same with the state(5) to be set 00:25:14.428 [2024-07-23 01:46:27.417167] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16474d0 is same with the state(5) to be set 00:25:14.428 [2024-07-23 01:46:27.417179] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16474d0 is same with the state(5) to be set 00:25:14.428 [2024-07-23 01:46:27.417191] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16474d0 is same with the state(5) to be 
set 00:25:14.428 [2024-07-23 01:46:27.417204] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16474d0 is same with the state(5) to be set 00:25:14.428 [2024-07-23 01:46:27.417216] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16474d0 is same with the state(5) to be set 00:25:14.428 [2024-07-23 01:46:27.417228] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16474d0 is same with the state(5) to be set 00:25:14.428 [2024-07-23 01:46:27.417240] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16474d0 is same with the state(5) to be set 00:25:14.428 [2024-07-23 01:46:27.417253] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16474d0 is same with the state(5) to be set 00:25:14.428 [2024-07-23 01:46:27.417265] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16474d0 is same with the state(5) to be set 00:25:14.428 [2024-07-23 01:46:27.417277] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16474d0 is same with the state(5) to be set 00:25:14.428 [2024-07-23 01:46:27.417289] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16474d0 is same with the state(5) to be set 00:25:14.428 [2024-07-23 01:46:27.417304] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16474d0 is same with the state(5) to be set 00:25:14.428 [2024-07-23 01:46:27.417316] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16474d0 is same with the state(5) to be set 00:25:14.428 [2024-07-23 01:46:27.417328] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16474d0 is same with the state(5) to be set 00:25:14.428 [2024-07-23 01:46:27.417340] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16474d0 is same with the state(5) to be set 00:25:14.428 [2024-07-23 
01:46:27.417352] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16474d0 is same with the state(5) to be set 00:25:14.428 [2024-07-23 01:46:27.417364] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16474d0 is same with the state(5) to be set 00:25:14.428 [2024-07-23 01:46:27.417376] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16474d0 is same with the state(5) to be set 00:25:14.428 [2024-07-23 01:46:27.417388] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16474d0 is same with the state(5) to be set 00:25:14.428 [2024-07-23 01:46:27.417400] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16474d0 is same with the state(5) to be set 00:25:14.428 [2024-07-23 01:46:27.417412] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16474d0 is same with the state(5) to be set 00:25:14.428 [2024-07-23 01:46:27.417423] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16474d0 is same with the state(5) to be set 00:25:14.428 [2024-07-23 01:46:27.417435] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16474d0 is same with the state(5) to be set 00:25:14.428 [2024-07-23 01:46:27.417447] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16474d0 is same with the state(5) to be set 00:25:14.428 [2024-07-23 01:46:27.417458] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16474d0 is same with the state(5) to be set 00:25:14.428 [2024-07-23 01:46:27.417741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.428 [2024-07-23 01:46:27.417765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.428 [2024-07-23 
01:46:27.417786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.428 [2024-07-23 01:46:27.417802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.428 [2024-07-23 01:46:27.417819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.428 [2024-07-23 01:46:27.417834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.428 [2024-07-23 01:46:27.417850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.428 [2024-07-23 01:46:27.417865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.428 [2024-07-23 01:46:27.417881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.428 [2024-07-23 01:46:27.417895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.428 [2024-07-23 01:46:27.417911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.428 [2024-07-23 01:46:27.417930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.428 [2024-07-23 01:46:27.417947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.428 [2024-07-23 01:46:27.417966] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.428 [2024-07-23 01:46:27.417992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.428 [2024-07-23 01:46:27.418006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.428 [2024-07-23 01:46:27.418023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.428 [2024-07-23 01:46:27.418037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.428 [2024-07-23 01:46:27.418053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.428 [2024-07-23 01:46:27.418068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.428 [2024-07-23 01:46:27.418083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.428 [2024-07-23 01:46:27.418098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.428 [2024-07-23 01:46:27.418113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.428 [2024-07-23 01:46:27.418127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.428 [2024-07-23 01:46:27.418143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 
nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.428 [2024-07-23 01:46:27.418157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.428 [2024-07-23 01:46:27.418173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.428 [2024-07-23 01:46:27.418188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.428 [2024-07-23 01:46:27.418205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.428 [2024-07-23 01:46:27.418219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.428 [2024-07-23 01:46:27.418234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.428 [2024-07-23 01:46:27.418249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.428 [2024-07-23 01:46:27.418265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.428 [2024-07-23 01:46:27.418279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.428 [2024-07-23 01:46:27.418295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.428 [2024-07-23 01:46:27.418309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:14.428 [2024-07-23 01:46:27.418324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.428 [2024-07-23 01:46:27.418339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.428 [2024-07-23 01:46:27.418358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.428 [2024-07-23 01:46:27.418372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.428 [2024-07-23 01:46:27.418388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.428 [2024-07-23 01:46:27.418402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.428 [2024-07-23 01:46:27.418418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.428 [2024-07-23 01:46:27.418438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.428 [2024-07-23 01:46:27.418454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.428 [2024-07-23 01:46:27.418468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.428 [2024-07-23 01:46:27.418484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.428 [2024-07-23 01:46:27.418499] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.428 [2024-07-23 01:46:27.418514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.428 [2024-07-23 01:46:27.418529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.428 [2024-07-23 01:46:27.418544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.428 [2024-07-23 01:46:27.418558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.429 [2024-07-23 01:46:27.418574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.429 [2024-07-23 01:46:27.418588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.429 [2024-07-23 01:46:27.418604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.429 [2024-07-23 01:46:27.418626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.429 [2024-07-23 01:46:27.418644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.429 [2024-07-23 01:46:27.418659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.429 [2024-07-23 01:46:27.418680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:15 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.429 [2024-07-23 01:46:27.418694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.429 [2024-07-23 01:46:27.418710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.429 [2024-07-23 01:46:27.418725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.429 [2024-07-23 01:46:27.418740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.429 [2024-07-23 01:46:27.418758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.429 [2024-07-23 01:46:27.418775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.429 [2024-07-23 01:46:27.418789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.429 [2024-07-23 01:46:27.418805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.429 [2024-07-23 01:46:27.418819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.429 [2024-07-23 01:46:27.418834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.429 [2024-07-23 01:46:27.418848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:25:14.429 [2024-07-23 01:46:27.418864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.429 [2024-07-23 01:46:27.418878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.429 [2024-07-23 01:46:27.418893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.429 [2024-07-23 01:46:27.418907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.429 [2024-07-23 01:46:27.418923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.429 [2024-07-23 01:46:27.418950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.429 [2024-07-23 01:46:27.418967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.429 [2024-07-23 01:46:27.418981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.429 [2024-07-23 01:46:27.418997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.429 [2024-07-23 01:46:27.419013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.429 [2024-07-23 01:46:27.419029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.429 [2024-07-23 
01:46:27.419043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.429 [2024-07-23 01:46:27.419059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.429 [2024-07-23 01:46:27.419073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.429 [2024-07-23 01:46:27.419089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.429 [2024-07-23 01:46:27.419103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.429 [2024-07-23 01:46:27.419119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.429 [2024-07-23 01:46:27.419133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.429 [2024-07-23 01:46:27.419152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.429 [2024-07-23 01:46:27.419167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.429 [2024-07-23 01:46:27.419183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.429 [2024-07-23 01:46:27.419197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.429 [2024-07-23 01:46:27.419213] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.429 [2024-07-23 01:46:27.419227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.429 [2024-07-23 01:46:27.419243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.429 [2024-07-23 01:46:27.419257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.429 [2024-07-23 01:46:27.419273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.429 [2024-07-23 01:46:27.419287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.429 [2024-07-23 01:46:27.419302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.429 [2024-07-23 01:46:27.419316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.429 [2024-07-23 01:46:27.419332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.429 [2024-07-23 01:46:27.419346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.429 [2024-07-23 01:46:27.419362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.429 [2024-07-23 01:46:27.419376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.429 [2024-07-23 01:46:27.419392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.429 [2024-07-23 01:46:27.419406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.429 [2024-07-23 01:46:27.419423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.429 [2024-07-23 01:46:27.419444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.429 [2024-07-23 01:46:27.419460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.429 [2024-07-23 01:46:27.419475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.429 [2024-07-23 01:46:27.419491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.429 [2024-07-23 01:46:27.419506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.429 [2024-07-23 01:46:27.419522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.429 [2024-07-23 01:46:27.419539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.429 [2024-07-23 01:46:27.419556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:33920 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:25:14.429 [2024-07-23 01:46:27.419570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.430 [2024-07-23 01:46:27.419587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.430 [2024-07-23 01:46:27.419602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.430 [2024-07-23 01:46:27.419636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.430 [2024-07-23 01:46:27.419655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.430 [2024-07-23 01:46:27.419680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.430 [2024-07-23 01:46:27.419695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.430 [2024-07-23 01:46:27.419711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.430 [2024-07-23 01:46:27.419725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.430 [2024-07-23 01:46:27.419740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.430 [2024-07-23 01:46:27.419755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.430 [2024-07-23 
01:46:27.419770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.430 [2024-07-23 01:46:27.419785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.430 [2024-07-23 01:46:27.419799] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb70c0 is same with the state(5) to be set 00:25:14.430 [2024-07-23 01:46:27.419871] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1cb70c0 was disconnected and freed. reset controller. 00:25:14.430 [2024-07-23 01:46:27.420231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.430 [2024-07-23 01:46:27.420255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.430 [2024-07-23 01:46:27.420275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.430 [2024-07-23 01:46:27.420291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.430 [2024-07-23 01:46:27.420308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.430 [2024-07-23 01:46:27.420322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.430 [2024-07-23 01:46:27.420338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.430 [2024-07-23 01:46:27.420352] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.430 [2024-07-23 01:46:27.420373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.430 [2024-07-23 01:46:27.420394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.430 [2024-07-23 01:46:27.420410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.430 [2024-07-23 01:46:27.420425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.430 [2024-07-23 01:46:27.420441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.430 [2024-07-23 01:46:27.420455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.430 [2024-07-23 01:46:27.420471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.430 [2024-07-23 01:46:27.420485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.430 [2024-07-23 01:46:27.420501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.430 [2024-07-23 01:46:27.420515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.430 [2024-07-23 01:46:27.420531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:19840 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:25:14.430 [2024-07-23 01:46:27.420545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.430 [2024-07-23 01:46:27.420561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.430 [2024-07-23 01:46:27.420575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.430 [2024-07-23 01:46:27.420590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.430 [2024-07-23 01:46:27.420604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.430 [2024-07-23 01:46:27.420630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.430 [2024-07-23 01:46:27.420647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.430 [2024-07-23 01:46:27.420663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.430 [2024-07-23 01:46:27.420686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.430 [2024-07-23 01:46:27.420701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.430 [2024-07-23 01:46:27.420715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.430 [2024-07-23 
01:46:27.420732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.430 [2024-07-23 01:46:27.420746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.430 [2024-07-23 01:46:27.420762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.430 [2024-07-23 01:46:27.420780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.430 [2024-07-23 01:46:27.420796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.430 [2024-07-23 01:46:27.420810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.430 [2024-07-23 01:46:27.420826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.430 [2024-07-23 01:46:27.420840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.430 [2024-07-23 01:46:27.420855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.430 [2024-07-23 01:46:27.420869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.430 [2024-07-23 01:46:27.420886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.430 [2024-07-23 01:46:27.420902] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.430 [2024-07-23 01:46:27.420929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.430 [2024-07-23 01:46:27.420944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.430 [2024-07-23 01:46:27.420959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.430 [2024-07-23 01:46:27.420974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.430 [2024-07-23 01:46:27.420990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.430 [2024-07-23 01:46:27.421004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.430 [2024-07-23 01:46:27.421020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.430 [2024-07-23 01:46:27.421035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.430 [2024-07-23 01:46:27.421051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.430 [2024-07-23 01:46:27.421066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.430 [2024-07-23 01:46:27.421082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 
nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.430 [2024-07-23 01:46:27.421096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.430 [2024-07-23 01:46:27.421113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.430 [2024-07-23 01:46:27.421127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.430 [2024-07-23 01:46:27.421143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.430 [2024-07-23 01:46:27.421158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.430 [2024-07-23 01:46:27.421177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.430 [2024-07-23 01:46:27.421192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.430 [2024-07-23 01:46:27.421209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.430 [2024-07-23 01:46:27.421224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.431 [2024-07-23 01:46:27.421240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.431 [2024-07-23 01:46:27.421255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:25:14.431 [2024-07-23 01:46:27.421271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.431 [2024-07-23 01:46:27.421285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.431 [2024-07-23 01:46:27.421301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.431 [2024-07-23 01:46:27.421315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.431 [2024-07-23 01:46:27.421331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.431 [2024-07-23 01:46:27.421344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.431 [2024-07-23 01:46:27.421360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.431 [2024-07-23 01:46:27.421375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.431 [2024-07-23 01:46:27.421391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.431 [2024-07-23 01:46:27.434547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.431 [2024-07-23 01:46:27.434644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.431 [2024-07-23 01:46:27.434674] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.431 [2024-07-23 01:46:27.434692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.431 [2024-07-23 01:46:27.434707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.431 [2024-07-23 01:46:27.434724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.431 [2024-07-23 01:46:27.434738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.431 [2024-07-23 01:46:27.434755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.431 [2024-07-23 01:46:27.434770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.431 [2024-07-23 01:46:27.434787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.431 [2024-07-23 01:46:27.434817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.431 [2024-07-23 01:46:27.434836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.431 [2024-07-23 01:46:27.434850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.431 [2024-07-23 01:46:27.434866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:49 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.431 [2024-07-23 01:46:27.434881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.431 [2024-07-23 01:46:27.434897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.431 [2024-07-23 01:46:27.434917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.431 [2024-07-23 01:46:27.434933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.431 [2024-07-23 01:46:27.434946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.431 [2024-07-23 01:46:27.434963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.431 [2024-07-23 01:46:27.434977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.431 [2024-07-23 01:46:27.434993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.431 [2024-07-23 01:46:27.435008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.431 [2024-07-23 01:46:27.435024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.431 [2024-07-23 01:46:27.435038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:25:14.431 [2024-07-23 01:46:27.435053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.431 [2024-07-23 01:46:27.435068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.431 [2024-07-23 01:46:27.435084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.431 [2024-07-23 01:46:27.435098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.431 [2024-07-23 01:46:27.435114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.431 [2024-07-23 01:46:27.435128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.431 [2024-07-23 01:46:27.435144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.431 [2024-07-23 01:46:27.435159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.431 [2024-07-23 01:46:27.435174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.431 [2024-07-23 01:46:27.435188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.431 [2024-07-23 01:46:27.435208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.431 [2024-07-23 
01:46:27.435222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.431 [2024-07-23 01:46:27.435238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.431 [2024-07-23 01:46:27.435253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.431 [2024-07-23 01:46:27.435270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.431 [2024-07-23 01:46:27.435285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.431 [2024-07-23 01:46:27.435301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.431 [2024-07-23 01:46:27.435316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.431 [2024-07-23 01:46:27.435332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.431 [2024-07-23 01:46:27.435345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.431 [2024-07-23 01:46:27.435362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.431 [2024-07-23 01:46:27.435376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.431 [2024-07-23 01:46:27.435392] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.431 [2024-07-23 01:46:27.435406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.431 [2024-07-23 01:46:27.435421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.431 [2024-07-23 01:46:27.435435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.431 [2024-07-23 01:46:27.435452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.431 [2024-07-23 01:46:27.435466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.431 [2024-07-23 01:46:27.435482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.431 [2024-07-23 01:46:27.435496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.431 [2024-07-23 01:46:27.435511] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cbc800 is same with the state(5) to be set 00:25:14.431 [2024-07-23 01:46:27.437497] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:14.431 [2024-07-23 01:46:27.437545] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:25:14.431 [2024-07-23 01:46:27.437771] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bc78e0 (9): Bad file descriptor 00:25:14.431 [2024-07-23 
01:46:27.437828] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba9dc0 (9): Bad file descriptor 00:25:14.431 [2024-07-23 01:46:27.437878] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:14.431 [2024-07-23 01:46:27.437912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.431 [2024-07-23 01:46:27.437928] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:14.431 [2024-07-23 01:46:27.437943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.431 [2024-07-23 01:46:27.437957] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:14.432 [2024-07-23 01:46:27.437972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.432 [2024-07-23 01:46:27.437986] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:14.432 [2024-07-23 01:46:27.438000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.432 [2024-07-23 01:46:27.438013] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a56eb0 is same with the state(5) to be set 00:25:14.432 [2024-07-23 01:46:27.438051] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b6d0 (9): Bad file descriptor 00:25:14.432 [2024-07-23 01:46:27.438106] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 
cdw10:00000000 cdw11:00000000 00:25:14.432 [2024-07-23 01:46:27.438131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.432 [2024-07-23 01:46:27.438152] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:14.432 [2024-07-23 01:46:27.438166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.432 [2024-07-23 01:46:27.438180] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:14.432 [2024-07-23 01:46:27.438194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.432 [2024-07-23 01:46:27.438208] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:14.432 [2024-07-23 01:46:27.438222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.432 [2024-07-23 01:46:27.438235] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af51c0 is same with the state(5) to be set 00:25:14.432 [2024-07-23 01:46:27.439633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.432 [2024-07-23 01:46:27.439669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.432 [2024-07-23 01:46:27.439693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.432 [2024-07-23 01:46:27.439709] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.432 [2024-07-23 01:46:27.439727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.432 [2024-07-23 01:46:27.439742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.432 [2024-07-23 01:46:27.439758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.432 [2024-07-23 01:46:27.439778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.432 [2024-07-23 01:46:27.439795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.432 [2024-07-23 01:46:27.439810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.432 [2024-07-23 01:46:27.439827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.432 [2024-07-23 01:46:27.439841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.432 [2024-07-23 01:46:27.439857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.432 [2024-07-23 01:46:27.439872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.432 [2024-07-23 01:46:27.439888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:3 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.432 [2024-07-23 01:46:27.439909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.432 [2024-07-23 01:46:27.439924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.432 [2024-07-23 01:46:27.439938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.432 [2024-07-23 01:46:27.439954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.432 [2024-07-23 01:46:27.439969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.432 [2024-07-23 01:46:27.439985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.432 [2024-07-23 01:46:27.440000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.432 [2024-07-23 01:46:27.440017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.432 [2024-07-23 01:46:27.440031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.432 [2024-07-23 01:46:27.440048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.432 [2024-07-23 01:46:27.440063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:25:14.432 [2024-07-23 01:46:27.440079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.432 [2024-07-23 01:46:27.440094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.432 [2024-07-23 01:46:27.440110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.432 [2024-07-23 01:46:27.440124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.432 [2024-07-23 01:46:27.440141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.432 [2024-07-23 01:46:27.440155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.432 [2024-07-23 01:46:27.440174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.432 [2024-07-23 01:46:27.440189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.432 [2024-07-23 01:46:27.440206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.432 [2024-07-23 01:46:27.440220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.432 [2024-07-23 01:46:27.440236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.432 [2024-07-23 
01:46:27.440250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.432 [2024-07-23 01:46:27.440265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.432 [2024-07-23 01:46:27.440280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.432 [2024-07-23 01:46:27.440296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.432 [2024-07-23 01:46:27.440311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.432 [2024-07-23 01:46:27.440327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.432 [2024-07-23 01:46:27.440341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.432 [2024-07-23 01:46:27.440358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.432 [2024-07-23 01:46:27.440372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.432 [2024-07-23 01:46:27.440388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.432 [2024-07-23 01:46:27.440403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.432 [2024-07-23 01:46:27.440419] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.432 [2024-07-23 01:46:27.440433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.432 [2024-07-23 01:46:27.440449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.432 [2024-07-23 01:46:27.440464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.432 [2024-07-23 01:46:27.440480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.432 [2024-07-23 01:46:27.440495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.432 [2024-07-23 01:46:27.440511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.432 [2024-07-23 01:46:27.440525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.432 [2024-07-23 01:46:27.440542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.432 [2024-07-23 01:46:27.440559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.432 [2024-07-23 01:46:27.440575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.432 [2024-07-23 01:46:27.440590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.432 [2024-07-23 01:46:27.440606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.432 [2024-07-23 01:46:27.440629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.432 [2024-07-23 01:46:27.440647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.433 [2024-07-23 01:46:27.440662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.433 [2024-07-23 01:46:27.440678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.433 [2024-07-23 01:46:27.440692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.433 [2024-07-23 01:46:27.440708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.433 [2024-07-23 01:46:27.440723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.433 [2024-07-23 01:46:27.440739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.433 [2024-07-23 01:46:27.440753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.433 [2024-07-23 01:46:27.440769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:14.433 [2024-07-23 01:46:27.440783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.433 [2024-07-23 01:46:27.440799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.433 [2024-07-23 01:46:27.440813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.433 [2024-07-23 01:46:27.440829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.433 [2024-07-23 01:46:27.440844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.433 [2024-07-23 01:46:27.440861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.433 [2024-07-23 01:46:27.440875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.433 [2024-07-23 01:46:27.440902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.433 [2024-07-23 01:46:27.440917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.433 [2024-07-23 01:46:27.440932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.433 [2024-07-23 01:46:27.440946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.433 [2024-07-23 01:46:27.440967] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.433 [2024-07-23 01:46:27.440981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.433 [2024-07-23 01:46:27.440998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.433 [2024-07-23 01:46:27.441012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.433 [2024-07-23 01:46:27.441028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.433 [2024-07-23 01:46:27.441042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.433 [2024-07-23 01:46:27.441058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.433 [2024-07-23 01:46:27.441072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.433 [2024-07-23 01:46:27.441088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.433 [2024-07-23 01:46:27.441102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.433 [2024-07-23 01:46:27.441119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.433 [2024-07-23 01:46:27.441133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.433 [2024-07-23 01:46:27.441149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.433 [2024-07-23 01:46:27.441162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.433 [2024-07-23 01:46:27.441178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.433 [2024-07-23 01:46:27.441192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.433 [2024-07-23 01:46:27.441208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.433 [2024-07-23 01:46:27.441222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.433 [2024-07-23 01:46:27.441238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.433 [2024-07-23 01:46:27.441253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.433 [2024-07-23 01:46:27.441268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.433 [2024-07-23 01:46:27.441283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.433 [2024-07-23 01:46:27.441300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.433 [2024-07-23 01:46:27.441313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.433 [2024-07-23 01:46:27.441330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.433 [2024-07-23 01:46:27.441348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.433 [2024-07-23 01:46:27.441365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.433 [2024-07-23 01:46:27.441380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.433 [2024-07-23 01:46:27.441396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.433 [2024-07-23 01:46:27.441410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.433 [2024-07-23 01:46:27.441426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.433 [2024-07-23 01:46:27.441440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.433 [2024-07-23 01:46:27.441456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.433 [2024-07-23 01:46:27.441470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.433 [2024-07-23 01:46:27.441486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.433 [2024-07-23 01:46:27.441500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.433 [2024-07-23 01:46:27.441516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.433 [2024-07-23 01:46:27.441530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.433 [2024-07-23 01:46:27.441546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.433 [2024-07-23 01:46:27.441560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.433 [2024-07-23 01:46:27.441576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.433 [2024-07-23 01:46:27.441590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.433 [2024-07-23 01:46:27.441606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.433 [2024-07-23 01:46:27.441628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.433 [2024-07-23 01:46:27.441645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.433 [2024-07-23 01:46:27.441670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.433 [2024-07-23 01:46:27.441769] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1cbb220 was disconnected and freed. reset controller.
00:25:14.433 [2024-07-23 01:46:27.441823] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:25:14.433 [2024-07-23 01:46:27.442032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.433 [2024-07-23 01:46:27.442170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.433 [2024-07-23 01:46:27.442196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af9e40 with addr=10.0.0.2, port=4420
00:25:14.433 [2024-07-23 01:46:27.442216] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af9e40 is same with the state(5) to be set
00:25:14.433 [2024-07-23 01:46:27.442361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.433 [2024-07-23 01:46:27.442506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.433 [2024-07-23 01:46:27.442531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b289b0 with addr=10.0.0.2, port=4420
00:25:14.433 [2024-07-23 01:46:27.442547] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b289b0 is same with the state(5) to be set
00:25:14.433 [2024-07-23 01:46:27.442925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.433 [2024-07-23 01:46:27.442949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.433 [2024-07-23 01:46:27.442971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.434 [2024-07-23 01:46:27.442987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.434 [2024-07-23 01:46:27.443003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.434 [2024-07-23 01:46:27.443017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.434 [2024-07-23 01:46:27.443034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.434 [2024-07-23 01:46:27.443048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.434 [2024-07-23 01:46:27.443065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.434 [2024-07-23 01:46:27.443079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.434 [2024-07-23 01:46:27.443095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.434 [2024-07-23 01:46:27.443109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.434 [2024-07-23 01:46:27.443125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.434 [2024-07-23 01:46:27.443139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.434 [2024-07-23 01:46:27.443155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.434 [2024-07-23 01:46:27.443170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.434 [2024-07-23 01:46:27.443186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.434 [2024-07-23 01:46:27.443200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.434 [2024-07-23 01:46:27.443216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.434 [2024-07-23 01:46:27.443231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.434 [2024-07-23 01:46:27.443247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.434 [2024-07-23 01:46:27.443266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.434 [2024-07-23 01:46:27.443283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.434 [2024-07-23 01:46:27.443298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.434 [2024-07-23 01:46:27.443314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.434 [2024-07-23 01:46:27.443328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.434 [2024-07-23 01:46:27.443344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.434 [2024-07-23 01:46:27.443358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.434 [2024-07-23 01:46:27.443376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.434 [2024-07-23 01:46:27.443391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.434 [2024-07-23 01:46:27.443407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.434 [2024-07-23 01:46:27.443421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.434 [2024-07-23 01:46:27.443437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.434 [2024-07-23 01:46:27.443451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.434 [2024-07-23 01:46:27.443468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.434 [2024-07-23 01:46:27.443481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.434 [2024-07-23 01:46:27.443497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.434 [2024-07-23 01:46:27.443511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.434 [2024-07-23 01:46:27.443527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.434 [2024-07-23 01:46:27.443541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.434 [2024-07-23 01:46:27.443557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.434 [2024-07-23 01:46:27.443571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.434 [2024-07-23 01:46:27.443587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.434 [2024-07-23 01:46:27.443601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.434 [2024-07-23 01:46:27.443626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.434 [2024-07-23 01:46:27.443643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.434 [2024-07-23 01:46:27.443676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.434 [2024-07-23 01:46:27.443692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.434 [2024-07-23 01:46:27.443708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.434 [2024-07-23 01:46:27.443722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.434 [2024-07-23 01:46:27.443738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.434 [2024-07-23 01:46:27.443752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.434 [2024-07-23 01:46:27.443768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.434 [2024-07-23 01:46:27.443782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.434 [2024-07-23 01:46:27.443797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.434 [2024-07-23 01:46:27.443811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.434 [2024-07-23 01:46:27.443827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.434 [2024-07-23 01:46:27.443841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.434 [2024-07-23 01:46:27.443856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.434 [2024-07-23 01:46:27.443870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.434 [2024-07-23 01:46:27.443886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.434 [2024-07-23 01:46:27.443900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.434 [2024-07-23 01:46:27.443915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.434 [2024-07-23 01:46:27.443929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.434 [2024-07-23 01:46:27.443945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.434 [2024-07-23 01:46:27.443959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.434 [2024-07-23 01:46:27.443979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.434 [2024-07-23 01:46:27.443993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.434 [2024-07-23 01:46:27.444010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.435 [2024-07-23 01:46:27.444024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.435 [2024-07-23 01:46:27.444040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.435 [2024-07-23 01:46:27.444058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.435 [2024-07-23 01:46:27.444074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.435 [2024-07-23 01:46:27.444089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.435 [2024-07-23 01:46:27.444104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.435 [2024-07-23 01:46:27.444119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.435 [2024-07-23 01:46:27.444134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.435 [2024-07-23 01:46:27.444148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.435 [2024-07-23 01:46:27.444164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.435 [2024-07-23 01:46:27.444178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.435 [2024-07-23 01:46:27.444194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.435 [2024-07-23 01:46:27.444208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.435 [2024-07-23 01:46:27.444225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.435 [2024-07-23 01:46:27.444239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.435 [2024-07-23 01:46:27.444254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.435 [2024-07-23 01:46:27.444268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.435 [2024-07-23 01:46:27.444284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.435 [2024-07-23 01:46:27.444298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.435 [2024-07-23 01:46:27.444314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.435 [2024-07-23 01:46:27.444328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.435 [2024-07-23 01:46:27.444344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.435 [2024-07-23 01:46:27.444358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.435 [2024-07-23 01:46:27.444374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.435 [2024-07-23 01:46:27.444388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.435 [2024-07-23 01:46:27.444403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.435 [2024-07-23 01:46:27.444417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.435 [2024-07-23 01:46:27.444433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.435 [2024-07-23 01:46:27.444451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.435 [2024-07-23 01:46:27.444467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.435 [2024-07-23 01:46:27.444482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.435 [2024-07-23 01:46:27.444498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.435 [2024-07-23 01:46:27.444512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.435 [2024-07-23 01:46:27.444528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.435 [2024-07-23 01:46:27.444542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.435 [2024-07-23 01:46:27.444558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.435 [2024-07-23 01:46:27.444572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.435 [2024-07-23 01:46:27.444587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.435 [2024-07-23 01:46:27.444601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.435 [2024-07-23 01:46:27.444624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.435 [2024-07-23 01:46:27.444640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.435 [2024-07-23 01:46:27.444663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.435 [2024-07-23 01:46:27.444677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.435 [2024-07-23 01:46:27.444693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.435 [2024-07-23 01:46:27.444708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.435 [2024-07-23 01:46:27.444723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.435 [2024-07-23 01:46:27.444737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.435 [2024-07-23 01:46:27.444753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.435 [2024-07-23 01:46:27.444767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.435 [2024-07-23 01:46:27.444783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.435 [2024-07-23 01:46:27.444797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.435 [2024-07-23 01:46:27.444812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.435 [2024-07-23 01:46:27.444826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.435 [2024-07-23 01:46:27.444848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.435 [2024-07-23 01:46:27.444863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.435 [2024-07-23 01:46:27.444880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.435 [2024-07-23 01:46:27.444906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.435 [2024-07-23 01:46:27.444922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.435 [2024-07-23 01:46:27.444936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.435 [2024-07-23 01:46:27.444952] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c26810 is same with the state(5) to be set
00:25:14.435 [2024-07-23 01:46:27.446180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.435 [2024-07-23 01:46:27.446204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.435 [2024-07-23 01:46:27.446227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.435 [2024-07-23 01:46:27.446243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.435 [2024-07-23 01:46:27.446260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.435 [2024-07-23 01:46:27.446275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.435 [2024-07-23 01:46:27.446291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.435 [2024-07-23 01:46:27.446305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.435 [2024-07-23 01:46:27.446321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.435 [2024-07-23 01:46:27.446336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.435 [2024-07-23 01:46:27.446352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.435 [2024-07-23 01:46:27.446366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.435 [2024-07-23 01:46:27.446382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.435 [2024-07-23 01:46:27.446396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.435 [2024-07-23 01:46:27.446413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.435 [2024-07-23 01:46:27.446427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.436 [2024-07-23 01:46:27.446443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.436 [2024-07-23 01:46:27.446458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.436 [2024-07-23 01:46:27.446478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.436 [2024-07-23 01:46:27.446494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.436 [2024-07-23 01:46:27.446510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.436 [2024-07-23 01:46:27.446525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.436 [2024-07-23 01:46:27.446541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.436 [2024-07-23 01:46:27.446555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.436 [2024-07-23 01:46:27.446571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.436 [2024-07-23 01:46:27.446585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.436 [2024-07-23 01:46:27.446601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.436 [2024-07-23 01:46:27.446621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.436 [2024-07-23 01:46:27.446639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.436 [2024-07-23 01:46:27.446660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.436 [2024-07-23 01:46:27.446676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.436 [2024-07-23 01:46:27.446691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.436 [2024-07-23 01:46:27.446708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.436 [2024-07-23 01:46:27.446722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.436 [2024-07-23 01:46:27.446739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.436 [2024-07-23 01:46:27.446753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.436 [2024-07-23 01:46:27.446770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.436 [2024-07-23 01:46:27.446784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.436 [2024-07-23 01:46:27.446800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.436 [2024-07-23 01:46:27.446815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.436 [2024-07-23 01:46:27.446831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.436 [2024-07-23 01:46:27.446845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.436 [2024-07-23 01:46:27.446861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.436 [2024-07-23 01:46:27.446879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.436 [2024-07-23 01:46:27.446906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.436 [2024-07-23 01:46:27.446920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.436 [2024-07-23 01:46:27.446936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.436 [2024-07-23 01:46:27.446951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.436 [2024-07-23 01:46:27.446967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.436 [2024-07-23 01:46:27.446981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.436 [2024-07-23 01:46:27.446997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.436 [2024-07-23 01:46:27.447011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.436 [2024-07-23 01:46:27.447027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.436 [2024-07-23 01:46:27.447042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.436 [2024-07-23 01:46:27.447059]
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.436 [2024-07-23 01:46:27.447073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.436 [2024-07-23 01:46:27.447089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.436 [2024-07-23 01:46:27.447103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.436 [2024-07-23 01:46:27.447119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.436 [2024-07-23 01:46:27.447133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.436 [2024-07-23 01:46:27.447149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.436 [2024-07-23 01:46:27.447163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.436 [2024-07-23 01:46:27.447179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.436 [2024-07-23 01:46:27.447193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.436 [2024-07-23 01:46:27.447209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.436 [2024-07-23 01:46:27.447223] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.436 [2024-07-23 01:46:27.447238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.436 [2024-07-23 01:46:27.447252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.436 [2024-07-23 01:46:27.447272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.436 [2024-07-23 01:46:27.447287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.436 [2024-07-23 01:46:27.447303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.436 [2024-07-23 01:46:27.447317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.436 [2024-07-23 01:46:27.447333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.436 [2024-07-23 01:46:27.447347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.436 [2024-07-23 01:46:27.447362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.436 [2024-07-23 01:46:27.447377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.436 [2024-07-23 01:46:27.447392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:28672 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.436 [2024-07-23 01:46:27.447406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.436 [2024-07-23 01:46:27.447422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.436 [2024-07-23 01:46:27.447436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.436 [2024-07-23 01:46:27.447451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.436 [2024-07-23 01:46:27.447465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.436 [2024-07-23 01:46:27.447481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.436 [2024-07-23 01:46:27.447495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.436 [2024-07-23 01:46:27.447510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.436 [2024-07-23 01:46:27.447524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.436 [2024-07-23 01:46:27.447541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.436 [2024-07-23 01:46:27.447554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.436 [2024-07-23 
01:46:27.447571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.436 [2024-07-23 01:46:27.447585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.436 [2024-07-23 01:46:27.447601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.436 [2024-07-23 01:46:27.447620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.437 [2024-07-23 01:46:27.447638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.437 [2024-07-23 01:46:27.447666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.437 [2024-07-23 01:46:27.447683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.437 [2024-07-23 01:46:27.447698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.437 [2024-07-23 01:46:27.447714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.437 [2024-07-23 01:46:27.447728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.437 [2024-07-23 01:46:27.447744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.437 [2024-07-23 01:46:27.447758] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.437 [2024-07-23 01:46:27.447773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.437 [2024-07-23 01:46:27.447787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.437 [2024-07-23 01:46:27.447803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.437 [2024-07-23 01:46:27.447817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.437 [2024-07-23 01:46:27.447833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.437 [2024-07-23 01:46:27.447847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.437 [2024-07-23 01:46:27.447862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.437 [2024-07-23 01:46:27.447876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.437 [2024-07-23 01:46:27.447892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.437 [2024-07-23 01:46:27.447907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.437 [2024-07-23 01:46:27.447923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:62 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.437 [2024-07-23 01:46:27.447937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.437 [2024-07-23 01:46:27.447953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.437 [2024-07-23 01:46:27.447967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.437 [2024-07-23 01:46:27.447983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.437 [2024-07-23 01:46:27.447997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.437 [2024-07-23 01:46:27.448013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.437 [2024-07-23 01:46:27.448027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.437 [2024-07-23 01:46:27.448046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.437 [2024-07-23 01:46:27.448062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.437 [2024-07-23 01:46:27.448078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.437 [2024-07-23 01:46:27.448092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:25:14.437 [2024-07-23 01:46:27.448108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.437 [2024-07-23 01:46:27.448122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.437 [2024-07-23 01:46:27.448138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.437 [2024-07-23 01:46:27.448152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.437 [2024-07-23 01:46:27.448168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.437 [2024-07-23 01:46:27.448183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.437 [2024-07-23 01:46:27.448198] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c27db0 is same with the state(5) to be set 00:25:14.437 [2024-07-23 01:46:27.449433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.437 [2024-07-23 01:46:27.449456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.437 [2024-07-23 01:46:27.449478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.437 [2024-07-23 01:46:27.449494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.437 [2024-07-23 01:46:27.449511] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.437 [2024-07-23 01:46:27.449525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.437 [2024-07-23 01:46:27.449541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.437 [2024-07-23 01:46:27.449555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.437 [2024-07-23 01:46:27.449572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.437 [2024-07-23 01:46:27.449586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.437 [2024-07-23 01:46:27.449601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.437 [2024-07-23 01:46:27.449635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.437 [2024-07-23 01:46:27.449666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.437 [2024-07-23 01:46:27.449681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.437 [2024-07-23 01:46:27.449702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.437 [2024-07-23 01:46:27.449717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.437 [2024-07-23 01:46:27.449733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.437 [2024-07-23 01:46:27.449747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.437 [2024-07-23 01:46:27.449763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.437 [2024-07-23 01:46:27.449777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.437 [2024-07-23 01:46:27.449793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.437 [2024-07-23 01:46:27.449807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.437 [2024-07-23 01:46:27.449824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.437 [2024-07-23 01:46:27.449838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.437 [2024-07-23 01:46:27.449854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.437 [2024-07-23 01:46:27.449868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.437 [2024-07-23 01:46:27.449885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:14.437 [2024-07-23 01:46:27.449898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.437 [2024-07-23 01:46:27.449915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.437 [2024-07-23 01:46:27.449929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.437 [2024-07-23 01:46:27.449945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.437 [2024-07-23 01:46:27.449959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.437 [2024-07-23 01:46:27.449979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.437 [2024-07-23 01:46:27.449993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.437 [2024-07-23 01:46:27.450009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.437 [2024-07-23 01:46:27.450023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.437 [2024-07-23 01:46:27.450039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.437 [2024-07-23 01:46:27.450055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.437 [2024-07-23 01:46:27.450071] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.438 [2024-07-23 01:46:27.450089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.438 [2024-07-23 01:46:27.450106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.438 [2024-07-23 01:46:27.450120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.438 [2024-07-23 01:46:27.450136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.438 [2024-07-23 01:46:27.450150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.438 [2024-07-23 01:46:27.450166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.438 [2024-07-23 01:46:27.450180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.438 [2024-07-23 01:46:27.450195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.438 [2024-07-23 01:46:27.450209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.438 [2024-07-23 01:46:27.450225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.438 [2024-07-23 01:46:27.450239] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.438 [2024-07-23 01:46:27.450255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.438 [2024-07-23 01:46:27.450269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.438 [2024-07-23 01:46:27.450285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.438 [2024-07-23 01:46:27.450299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.438 [2024-07-23 01:46:27.450315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.438 [2024-07-23 01:46:27.450329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.438 [2024-07-23 01:46:27.450344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.438 [2024-07-23 01:46:27.450358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.438 [2024-07-23 01:46:27.450374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.438 [2024-07-23 01:46:27.450388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.438 [2024-07-23 01:46:27.450404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:30592 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.438 [2024-07-23 01:46:27.450418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.438 [2024-07-23 01:46:27.450435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.438 [2024-07-23 01:46:27.450449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.438 [2024-07-23 01:46:27.450468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.438 [2024-07-23 01:46:27.450483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.438 [2024-07-23 01:46:27.450499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.438 [2024-07-23 01:46:27.450514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.438 [2024-07-23 01:46:27.450529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.438 [2024-07-23 01:46:27.450545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.438 [2024-07-23 01:46:27.450562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.438 [2024-07-23 01:46:27.450576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.438 [2024-07-23 
01:46:27.450593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.438 [2024-07-23 01:46:27.450607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.438 [2024-07-23 01:46:27.450631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.438 [2024-07-23 01:46:27.450658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.438 [2024-07-23 01:46:27.450674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.438 [2024-07-23 01:46:27.450688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.438 [2024-07-23 01:46:27.450704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.438 [2024-07-23 01:46:27.450719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.438 [2024-07-23 01:46:27.450735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.438 [2024-07-23 01:46:27.450749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.438 [2024-07-23 01:46:27.450765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.438 [2024-07-23 01:46:27.450779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.438 [2024-07-23 01:46:27.450795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.438 [2024-07-23 01:46:27.450809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.438 [2024-07-23 01:46:27.450825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.438 [2024-07-23 01:46:27.450839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.438 [2024-07-23 01:46:27.450855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.438 [2024-07-23 01:46:27.450874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.438 [2024-07-23 01:46:27.450890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.438 [2024-07-23 01:46:27.450911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.438 [2024-07-23 01:46:27.450927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.438 [2024-07-23 01:46:27.450941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.438 [2024-07-23 01:46:27.450958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.438 [2024-07-23 01:46:27.450972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.438 [2024-07-23 01:46:27.450988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.438 [2024-07-23 01:46:27.451003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.438 [2024-07-23 01:46:27.451020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.438 [2024-07-23 01:46:27.451034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.438 [2024-07-23 01:46:27.451050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.438 [2024-07-23 01:46:27.451064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.438 [2024-07-23 01:46:27.451081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.438 [2024-07-23 01:46:27.451095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.438 [2024-07-23 01:46:27.451111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.438 [2024-07-23 01:46:27.451125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.438 [2024-07-23 01:46:27.451141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.438 [2024-07-23 01:46:27.451155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.438 [2024-07-23 01:46:27.451172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.438 [2024-07-23 01:46:27.451186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.438 [2024-07-23 01:46:27.451202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.438 [2024-07-23 01:46:27.451216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.438 [2024-07-23 01:46:27.451233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.438 [2024-07-23 01:46:27.451247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.438 [2024-07-23 01:46:27.451266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.439 [2024-07-23 01:46:27.451282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.439 [2024-07-23 01:46:27.451298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.439 [2024-07-23 01:46:27.451312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.439 [2024-07-23 01:46:27.451329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.439 [2024-07-23 01:46:27.451343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.439 [2024-07-23 01:46:27.451360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.439 [2024-07-23 01:46:27.451374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.439 [2024-07-23 01:46:27.451390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.439 [2024-07-23 01:46:27.451404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.439 [2024-07-23 01:46:27.451420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.439 [2024-07-23 01:46:27.451434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.439 [2024-07-23 01:46:27.451458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.439 [2024-07-23 01:46:27.451473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.439 [2024-07-23 01:46:27.451488] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c28c30 is same with the state(5) to be set
00:25:14.439 [2024-07-23 01:46:27.454514] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:25:14.439 [2024-07-23 01:46:27.454559] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:25:14.439 [2024-07-23 01:46:27.454579] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:25:14.439 [2024-07-23 01:46:27.454834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.439 [2024-07-23 01:46:27.454988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.439 [2024-07-23 01:46:27.455014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba9dc0 with addr=10.0.0.2, port=4420
00:25:14.439 [2024-07-23 01:46:27.455031] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba9dc0 is same with the state(5) to be set
00:25:14.439 [2024-07-23 01:46:27.455057] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1af9e40 (9): Bad file descriptor
00:25:14.439 [2024-07-23 01:46:27.455078] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b289b0 (9): Bad file descriptor
00:25:14.439 [2024-07-23 01:46:27.455095] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state
00:25:14.439 [2024-07-23 01:46:27.455109] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed
00:25:14.439 [2024-07-23 01:46:27.455124] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state.
00:25:14.439 [2024-07-23 01:46:27.455192] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:25:14.439 [2024-07-23 01:46:27.455227] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a56eb0 (9): Bad file descriptor
00:25:14.439 [2024-07-23 01:46:27.455270] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1af51c0 (9): Bad file descriptor
00:25:14.439 [2024-07-23 01:46:27.455296] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:25:14.439 [2024-07-23 01:46:27.455316] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:25:14.439 [2024-07-23 01:46:27.455335] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba9dc0 (9): Bad file descriptor
00:25:14.439 [2024-07-23 01:46:27.455904] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:14.439 [2024-07-23 01:46:27.455936] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:25:14.439 [2024-07-23 01:46:27.456125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.439 [2024-07-23 01:46:27.456272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.439 [2024-07-23 01:46:27.456297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b07530 with addr=10.0.0.2, port=4420
00:25:14.439 [2024-07-23 01:46:27.456313] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b07530 is same with the state(5) to be set
00:25:14.439 [2024-07-23 01:46:27.456442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.439 [2024-07-23 01:46:27.456589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.439 [2024-07-23 01:46:27.456621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b260a0 with addr=10.0.0.2, port=4420
00:25:14.439 [2024-07-23 01:46:27.456639] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b260a0 is same with the state(5) to be set
00:25:14.439 [2024-07-23 01:46:27.456781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.439 [2024-07-23 01:46:27.456923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.439 [2024-07-23 01:46:27.456947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afcba0 with addr=10.0.0.2, port=4420
00:25:14.439 [2024-07-23 01:46:27.456963] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afcba0 is same with the state(5) to be set
00:25:14.439 [2024-07-23 01:46:27.456980] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:14.439 [2024-07-23 01:46:27.456994] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:14.439 [2024-07-23 01:46:27.457007] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:14.439 [2024-07-23 01:46:27.457027] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:25:14.439 [2024-07-23 01:46:27.457042] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:25:14.439 [2024-07-23 01:46:27.457055] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:25:14.439 [2024-07-23 01:46:27.457912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.439 [2024-07-23 01:46:27.457938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.439 [2024-07-23 01:46:27.457965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.439 [2024-07-23 01:46:27.457981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.439 [2024-07-23 01:46:27.458009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.439 [2024-07-23 01:46:27.458024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.439 [2024-07-23 01:46:27.458041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.439 [2024-07-23 01:46:27.458056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.439 [2024-07-23 01:46:27.458072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.439 [2024-07-23 01:46:27.458087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.439 [2024-07-23 01:46:27.458102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.439 [2024-07-23 01:46:27.458116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.439 [2024-07-23 01:46:27.458132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.439 [2024-07-23 01:46:27.458147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.439 [2024-07-23 01:46:27.458162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.439 [2024-07-23 01:46:27.458176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.439 [2024-07-23 01:46:27.458192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.440 [2024-07-23 01:46:27.458206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.440 [2024-07-23 01:46:27.458221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.440 [2024-07-23 01:46:27.458235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.440 [2024-07-23 01:46:27.458251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.440 [2024-07-23 01:46:27.458266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.440 [2024-07-23 01:46:27.458281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.440 [2024-07-23 01:46:27.458295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.440 [2024-07-23 01:46:27.458311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.440 [2024-07-23 01:46:27.458325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.440 [2024-07-23 01:46:27.458341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.440 [2024-07-23 01:46:27.458355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.440 [2024-07-23 01:46:27.458373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.440 [2024-07-23 01:46:27.458393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.440 [2024-07-23 01:46:27.458410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.440 [2024-07-23 01:46:27.458425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.440 [2024-07-23 01:46:27.458440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.440 [2024-07-23 01:46:27.458454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.440 [2024-07-23 01:46:27.458470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.440 [2024-07-23 01:46:27.458485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.440 [2024-07-23 01:46:27.458501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.440 [2024-07-23 01:46:27.458515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.440 [2024-07-23 01:46:27.458531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.440 [2024-07-23 01:46:27.458545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.440 [2024-07-23 01:46:27.458561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.440 [2024-07-23 01:46:27.458575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.440 [2024-07-23 01:46:27.458591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.440 [2024-07-23 01:46:27.458606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.440 [2024-07-23 01:46:27.458642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.440 [2024-07-23 01:46:27.458668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.440 [2024-07-23 01:46:27.458684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.440 [2024-07-23 01:46:27.458698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.440 [2024-07-23 01:46:27.458715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.440 [2024-07-23 01:46:27.458728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.440 [2024-07-23 01:46:27.458744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.440 [2024-07-23 01:46:27.458758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.440 [2024-07-23 01:46:27.458774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.440 [2024-07-23 01:46:27.458788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.440 [2024-07-23 01:46:27.458808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.440 [2024-07-23 01:46:27.458822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.440 [2024-07-23 01:46:27.458839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.440 [2024-07-23 01:46:27.458854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.440 [2024-07-23 01:46:27.458870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.440 [2024-07-23 01:46:27.458884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.440 [2024-07-23 01:46:27.458910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.440 [2024-07-23 01:46:27.458923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.440 [2024-07-23 01:46:27.458939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.440 [2024-07-23 01:46:27.458953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.440 [2024-07-23 01:46:27.458969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.440 [2024-07-23 01:46:27.458983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.440 [2024-07-23 01:46:27.458999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.440 [2024-07-23 01:46:27.459013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.440 [2024-07-23 01:46:27.459029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.440 [2024-07-23 01:46:27.459043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.440 [2024-07-23 01:46:27.459059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.440 [2024-07-23 01:46:27.459072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.440 [2024-07-23 01:46:27.459088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.440 [2024-07-23 01:46:27.459102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.440 [2024-07-23 01:46:27.459118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.440 [2024-07-23 01:46:27.459132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.440 [2024-07-23 01:46:27.459148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.440 [2024-07-23 01:46:27.459162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.440 [2024-07-23 01:46:27.459178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.440 [2024-07-23 01:46:27.459195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.440 [2024-07-23 01:46:27.459212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.440 [2024-07-23 01:46:27.459226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.440 [2024-07-23 01:46:27.459243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.440 [2024-07-23 01:46:27.459258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.440 [2024-07-23 01:46:27.459274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.440 [2024-07-23 01:46:27.459287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.440 [2024-07-23 01:46:27.459304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.440 [2024-07-23 01:46:27.459318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.440 [2024-07-23 01:46:27.459334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.440 [2024-07-23 01:46:27.459348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.440 [2024-07-23 01:46:27.459364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.440 [2024-07-23 01:46:27.459379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.440 [2024-07-23 01:46:27.459394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.441 [2024-07-23 01:46:27.459408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.441 [2024-07-23 01:46:27.459424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.441 [2024-07-23 01:46:27.459439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.441 [2024-07-23 01:46:27.459455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.441 [2024-07-23 01:46:27.459468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.441 [2024-07-23 01:46:27.459484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.441 [2024-07-23 01:46:27.459498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.441 [2024-07-23 01:46:27.459514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.441 [2024-07-23 01:46:27.459528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.441 [2024-07-23 01:46:27.459544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.441 [2024-07-23 01:46:27.459558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.441 [2024-07-23 01:46:27.459577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.441 [2024-07-23 01:46:27.459593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.441 [2024-07-23 01:46:27.459609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.441 [2024-07-23 01:46:27.459631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.441 [2024-07-23 01:46:27.459658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.441 [2024-07-23 01:46:27.459672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.441 [2024-07-23 01:46:27.459688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.441 [2024-07-23 01:46:27.459702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.441 [2024-07-23 01:46:27.459717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.441 [2024-07-23 01:46:27.459731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.441 [2024-07-23 01:46:27.459747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.441 [2024-07-23 01:46:27.459761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.441 [2024-07-23 01:46:27.459777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:14.441 [2024-07-23 01:46:27.459790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:14.441 [2024-07-23 01:46:27.461035] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:25:14.441 [2024-07-23 01:46:27.461337] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:14.441 [2024-07-23 01:46:27.461358] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:14.441 [2024-07-23 01:46:27.461374] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:25:14.441 [2024-07-23 01:46:27.461549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.441 [2024-07-23 01:46:27.461694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.441 [2024-07-23 01:46:27.461721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af51c0 with addr=10.0.0.2, port=4420
00:25:14.441 [2024-07-23 01:46:27.461737] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af51c0 is same with the state(5) to be set
00:25:14.441 [2024-07-23 01:46:27.461761] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b07530 (9): Bad file descriptor
00:25:14.441 [2024-07-23 01:46:27.461780] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b260a0 (9): Bad file descriptor
00:25:14.441 [2024-07-23 01:46:27.461798] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afcba0 (9): Bad file descriptor
00:25:14.441 [2024-07-23 01:46:27.461814] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:25:14.441 [2024-07-23 01:46:27.461828] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:25:14.441 [2024-07-23 01:46:27.461843] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:25:14.441 [2024-07-23 01:46:27.461995] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:14.441 [2024-07-23 01:46:27.462136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.441 [2024-07-23 01:46:27.462272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.441 [2024-07-23 01:46:27.462297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b6d0 with addr=10.0.0.2, port=4420
00:25:14.441 [2024-07-23 01:46:27.462314] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b6d0 is same with the state(5) to be set
00:25:14.441 [2024-07-23 01:46:27.462333] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1af51c0 (9): Bad file descriptor
00:25:14.441 [2024-07-23 01:46:27.462350] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:25:14.441 [2024-07-23 01:46:27.462364] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:25:14.441 [2024-07-23 01:46:27.462377] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:25:14.441 [2024-07-23 01:46:27.462396] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:25:14.441 [2024-07-23 01:46:27.462410] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:25:14.441 [2024-07-23 01:46:27.462423] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:25:14.441 [2024-07-23 01:46:27.462439] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:25:14.441 [2024-07-23 01:46:27.462454] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:25:14.441 [2024-07-23 01:46:27.462467] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:25:14.441 [2024-07-23 01:46:27.462795] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:14.441 [2024-07-23 01:46:27.462816] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:14.441 [2024-07-23 01:46:27.462828] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:14.441 [2024-07-23 01:46:27.462845] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b6d0 (9): Bad file descriptor 00:25:14.441 [2024-07-23 01:46:27.462862] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:25:14.441 [2024-07-23 01:46:27.462875] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:25:14.441 [2024-07-23 01:46:27.462888] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:25:14.441 [2024-07-23 01:46:27.462940] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:14.441 [2024-07-23 01:46:27.462959] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:25:14.441 [2024-07-23 01:46:27.462972] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:25:14.441 [2024-07-23 01:46:27.462986] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:25:14.441 [2024-07-23 01:46:27.463025] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:14.441 [2024-07-23 01:46:27.464637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.441 [2024-07-23 01:46:27.464674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.441 [2024-07-23 01:46:27.464699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.441 [2024-07-23 01:46:27.464720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.441 [2024-07-23 01:46:27.464739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.441 [2024-07-23 01:46:27.464754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.441 [2024-07-23 01:46:27.464770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.441 [2024-07-23 01:46:27.464784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:14.441 [2024-07-23 01:46:27.464800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.441 [2024-07-23 01:46:27.464814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.441 [2024-07-23 01:46:27.464830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.441 [2024-07-23 01:46:27.464844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.441 [2024-07-23 01:46:27.464861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.441 [2024-07-23 01:46:27.464875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.441 [2024-07-23 01:46:27.464891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.441 [2024-07-23 01:46:27.464905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.441 [2024-07-23 01:46:27.464924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.442 [2024-07-23 01:46:27.464939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.442 [2024-07-23 01:46:27.464954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.442 [2024-07-23 01:46:27.464967] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.442 [2024-07-23 01:46:27.464983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.442 [2024-07-23 01:46:27.464998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.442 [2024-07-23 01:46:27.465014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.442 [2024-07-23 01:46:27.465028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.442 [2024-07-23 01:46:27.465044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.442 [2024-07-23 01:46:27.465058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.442 [2024-07-23 01:46:27.465075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.442 [2024-07-23 01:46:27.465089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.442 [2024-07-23 01:46:27.465108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.442 [2024-07-23 01:46:27.465123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.442 [2024-07-23 01:46:27.465139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 
nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.442 [2024-07-23 01:46:27.465154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.442 [2024-07-23 01:46:27.465170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.442 [2024-07-23 01:46:27.465184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.442 [2024-07-23 01:46:27.465201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.442 [2024-07-23 01:46:27.465215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.442 [2024-07-23 01:46:27.465231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.442 [2024-07-23 01:46:27.465245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.442 [2024-07-23 01:46:27.465262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.442 [2024-07-23 01:46:27.465276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.442 [2024-07-23 01:46:27.465292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.442 [2024-07-23 01:46:27.465306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:25:14.442 [2024-07-23 01:46:27.465322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.442 [2024-07-23 01:46:27.465336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.442 [2024-07-23 01:46:27.465352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.442 [2024-07-23 01:46:27.465366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.442 [2024-07-23 01:46:27.465382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.442 [2024-07-23 01:46:27.465396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.442 [2024-07-23 01:46:27.465412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.442 [2024-07-23 01:46:27.465425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.442 [2024-07-23 01:46:27.465441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.442 [2024-07-23 01:46:27.465456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.442 [2024-07-23 01:46:27.465471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.442 [2024-07-23 01:46:27.465489] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.442 [2024-07-23 01:46:27.465506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.442 [2024-07-23 01:46:27.465521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.442 [2024-07-23 01:46:27.465537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.442 [2024-07-23 01:46:27.465551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.442 [2024-07-23 01:46:27.465568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.442 [2024-07-23 01:46:27.465582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.442 [2024-07-23 01:46:27.465599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.442 [2024-07-23 01:46:27.465622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.442 [2024-07-23 01:46:27.465640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.442 [2024-07-23 01:46:27.465663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.442 [2024-07-23 01:46:27.465678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:45 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.442 [2024-07-23 01:46:27.465693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.442 [2024-07-23 01:46:27.465708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.442 [2024-07-23 01:46:27.465722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.442 [2024-07-23 01:46:27.465738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.442 [2024-07-23 01:46:27.465752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.442 [2024-07-23 01:46:27.465767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.442 [2024-07-23 01:46:27.465781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.442 [2024-07-23 01:46:27.465797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.442 [2024-07-23 01:46:27.465813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.442 [2024-07-23 01:46:27.465829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.442 [2024-07-23 01:46:27.465843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:25:14.442 [2024-07-23 01:46:27.465859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.442 [2024-07-23 01:46:27.465873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.442 [2024-07-23 01:46:27.465889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.442 [2024-07-23 01:46:27.465916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.442 [2024-07-23 01:46:27.465933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.442 [2024-07-23 01:46:27.465947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.442 [2024-07-23 01:46:27.465973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.442 [2024-07-23 01:46:27.465988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.442 [2024-07-23 01:46:27.466004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.442 [2024-07-23 01:46:27.466018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.442 [2024-07-23 01:46:27.466034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.442 [2024-07-23 
01:46:27.466048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.442 [2024-07-23 01:46:27.466063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.442 [2024-07-23 01:46:27.466077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.442 [2024-07-23 01:46:27.466094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.442 [2024-07-23 01:46:27.466108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.442 [2024-07-23 01:46:27.466124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.443 [2024-07-23 01:46:27.466139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.443 [2024-07-23 01:46:27.466155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.443 [2024-07-23 01:46:27.466169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.443 [2024-07-23 01:46:27.466186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.443 [2024-07-23 01:46:27.466199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.443 [2024-07-23 01:46:27.466215] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.443 [2024-07-23 01:46:27.466229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.443 [2024-07-23 01:46:27.466245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.443 [2024-07-23 01:46:27.466260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.443 [2024-07-23 01:46:27.466275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.443 [2024-07-23 01:46:27.466292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.443 [2024-07-23 01:46:27.466308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.443 [2024-07-23 01:46:27.466323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.443 [2024-07-23 01:46:27.466339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.443 [2024-07-23 01:46:27.466353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.443 [2024-07-23 01:46:27.466370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.443 [2024-07-23 01:46:27.466384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.443 [2024-07-23 01:46:27.466399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.443 [2024-07-23 01:46:27.466414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.443 [2024-07-23 01:46:27.466430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.443 [2024-07-23 01:46:27.466444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.443 [2024-07-23 01:46:27.466459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.443 [2024-07-23 01:46:27.466473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.443 [2024-07-23 01:46:27.466489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.443 [2024-07-23 01:46:27.466503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.443 [2024-07-23 01:46:27.466519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.443 [2024-07-23 01:46:27.466532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.443 [2024-07-23 01:46:27.466548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:39808 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:25:14.443 [2024-07-23 01:46:27.466562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.443 [2024-07-23 01:46:27.466578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.443 [2024-07-23 01:46:27.466592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.443 [2024-07-23 01:46:27.466608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.443 [2024-07-23 01:46:27.466630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.443 [2024-07-23 01:46:27.466658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.443 [2024-07-23 01:46:27.466672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.443 [2024-07-23 01:46:27.466687] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cb9c80 is same with the state(5) to be set 00:25:14.443 [2024-07-23 01:46:27.468739] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:25:14.443 [2024-07-23 01:46:27.468772] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:25:14.443 [2024-07-23 01:46:27.468790] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:14.443 [2024-07-23 01:46:27.468806] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 
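The aborted-command dump above repeats a single record shape from `nvme_io_qpair_print_command` (opcode, sqid, cid, nsid, lba, len), so it can be tallied mechanically when triaging a run like this. Below is a minimal sketch assuming only the record format visible in this log; the regex and the `counts`/`aborted` helpers are ours, not part of SPDK, and the three sample records are copied verbatim from the dump.

```python
import re

# Three records copied verbatim from the dump above; the real log has dozens.
LOG = """\
[2024-07-23 01:46:27.466578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-07-23 01:46:27.466608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-07-23 01:46:27.466658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
"""

# Opcode, cid, lba, and len are enough to cross-reference an aborted
# command against the completion records that follow it in the log.
RECORD = re.compile(r"\*NOTICE\*: (READ|WRITE) sqid:\d+ cid:(\d+) nsid:\d+ lba:(\d+) len:(\d+)")

counts = {"READ": 0, "WRITE": 0}
aborted = []
for op, cid, lba, length in RECORD.findall(LOG):
    counts[op] += 1
    aborted.append((op, int(cid), int(lba), int(length)))

print(counts)      # {'READ': 1, 'WRITE': 2}
print(aborted[0])  # ('WRITE', 20, 39936, 128)
```

Running the same scan over the full console log gives a quick per-opcode count of the I/O aborted by the SQ deletion.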
00:25:14.443 task offset: 34816 on job bdev=Nvme5n1 fails
00:25:14.443
00:25:14.443 Latency(us)
00:25:14.443 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:14.443 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:14.443 Job: Nvme1n1 ended in about 0.69 seconds with error
00:25:14.443 Verification LBA range: start 0x0 length 0x400
00:25:14.443 Nvme1n1 : 0.69 365.47 22.84 93.19 0.00 138504.50 84274.44 114955.00
00:25:14.443 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:14.443 Job: Nvme2n1 ended in about 0.72 seconds with error
00:25:14.443 Verification LBA range: start 0x0 length 0x400
00:25:14.443 Nvme2n1 : 0.72 290.83 18.18 89.49 0.00 165353.47 83109.36 159228.21
00:25:14.443 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:14.443 Job: Nvme3n1 ended in about 0.72 seconds with error
00:25:14.443 Verification LBA range: start 0x0 length 0x400
00:25:14.443 Nvme3n1 : 0.72 289.53 18.10 89.09 0.00 164337.59 83109.36 149907.53
00:25:14.443 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:14.443 Job: Nvme4n1 ended in about 0.72 seconds with error
00:25:14.443 Verification LBA range: start 0x0 length 0x400
00:25:14.443 Nvme4n1 : 0.72 347.79 21.74 88.68 0.00 141031.93 78060.66 119615.34
00:25:14.443 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:14.443 Job: Nvme5n1 ended in about 0.68 seconds with error
00:25:14.443 Verification LBA range: start 0x0 length 0x400
00:25:14.443 Nvme5n1 : 0.68 368.71 23.04 94.01 0.00 131149.06 35535.08 123498.95
00:25:14.443 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:14.443 Job: Nvme6n1 ended in about 0.71 seconds with error
00:25:14.443 Verification LBA range: start 0x0 length 0x400
00:25:14.443 Nvme6n1 : 0.71 293.56 18.35 90.33 0.00 156686.01 47380.10 170879.05
00:25:14.443 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:14.443 Job: Nvme7n1 ended in about 0.73 seconds with error
00:25:14.443 Verification LBA range: start 0x0 length 0x400
00:25:14.443 Nvme7n1 : 0.73 291.79 18.24 80.82 0.00 159938.27 7039.05 177092.84
00:25:14.443 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:14.443 Job: Nvme8n1 ended in about 0.74 seconds with error
00:25:14.443 Verification LBA range: start 0x0 length 0x400
00:25:14.443 Nvme8n1 : 0.74 340.62 21.29 86.85 0.00 138085.93 69128.34 115731.72
00:25:14.443 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:14.443 Job: Nvme9n1 ended in about 0.72 seconds with error
00:25:14.443 Verification LBA range: start 0x0 length 0x400
00:25:14.443 Nvme9n1 : 0.72 347.02 21.69 88.48 0.00 133792.69 48545.19 114178.28
00:25:14.443 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:14.443 Job: Nvme10n1 ended in about 0.71 seconds with error
00:25:14.443 Verification LBA range: start 0x0 length 0x400
00:25:14.443 Nvme10n1 : 0.71 232.36 14.52 90.68 0.00 177896.81 80390.83 192627.29
00:25:14.443 ===================================================================================================================
00:25:14.443 Total : 3167.68 197.98 891.61 0.00 149209.94 7039.05 192627.29
00:25:14.703 [2024-07-23 01:46:27.499573] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:25:14.703 [2024-07-23 01:46:27.499697] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:25:14.703 [2024-07-23 01:46:27.500252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.703 [2024-07-23 01:46:27.500441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.703 [2024-07-23 01:46:27.500469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bc78e0 with addr=10.0.0.2, port=4420
[2024-07-23 01:46:27.500489] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bc78e0 is same with the state(5) to be set
00:25:14.703 [2024-07-23 01:46:27.500634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.703 [2024-07-23 01:46:27.500783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.703 [2024-07-23 01:46:27.500809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b289b0 with addr=10.0.0.2, port=4420
00:25:14.703 [2024-07-23 01:46:27.500826] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b289b0 is same with the state(5) to be set
00:25:14.703 [2024-07-23 01:46:27.501090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.703 [2024-07-23 01:46:27.501227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.703 [2024-07-23 01:46:27.501252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af9e40 with addr=10.0.0.2, port=4420
00:25:14.703 [2024-07-23 01:46:27.501268] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af9e40 is same with the state(5) to be set
00:25:14.703 [2024-07-23 01:46:27.501406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.703 [2024-07-23 01:46:27.501556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.703 [2024-07-23 01:46:27.501582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ba9dc0 with addr=10.0.0.2, port=4420
00:25:14.703 [2024-07-23 01:46:27.501598] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba9dc0 is same with the state(5) to be set
00:25:14.703 [2024-07-23 01:46:27.501741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.703 [2024-07-23 01:46:27.501878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.703 [2024-07-23 01:46:27.501903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a56eb0 with addr=10.0.0.2, port=4420
00:25:14.703 [2024-07-23 01:46:27.501920] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a56eb0 is same with the state(5) to be set
00:25:14.703 [2024-07-23 01:46:27.501966] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:25:14.703 [2024-07-23 01:46:27.501990] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:25:14.703 [2024-07-23 01:46:27.502011] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:25:14.703 [2024-07-23 01:46:27.502030] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:25:14.703 [2024-07-23 01:46:27.502334] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:25:14.703 [2024-07-23 01:46:27.502360] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:25:14.703 [2024-07-23 01:46:27.502378] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:25:14.703 [2024-07-23 01:46:27.502394] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:25:14.703 [2024-07-23 01:46:27.502467] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bc78e0 (9): Bad file descriptor
00:25:14.703 [2024-07-23 01:46:27.502496] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b289b0 (9): Bad file descriptor
00:25:14.703 [2024-07-23 01:46:27.502521] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1af9e40 (9): Bad file descriptor
00:25:14.703 [2024-07-23 01:46:27.502539] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba9dc0 (9): Bad file descriptor
00:25:14.703 [2024-07-23 01:46:27.502557] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a56eb0 (9): Bad file descriptor
00:25:14.703 [2024-07-23 01:46:27.502641] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:25:14.703 [2024-07-23 01:46:27.502796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.703 [2024-07-23 01:46:27.502935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.703 [2024-07-23 01:46:27.502959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1afcba0 with addr=10.0.0.2, port=4420
00:25:14.703 [2024-07-23 01:46:27.502976] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afcba0 is same with the state(5) to be set
00:25:14.703 [2024-07-23 01:46:27.503120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.703 [2024-07-23 01:46:27.503253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.703 [2024-07-23 01:46:27.503279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b260a0 with addr=10.0.0.2, port=4420
00:25:14.703 [2024-07-23 01:46:27.503294] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b260a0 is same with the state(5) to be set
00:25:14.703 [2024-07-23 01:46:27.503417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.703 [2024-07-23 01:46:27.503549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.703 [2024-07-23 01:46:27.503576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b07530 with addr=10.0.0.2, port=4420
00:25:14.703 [2024-07-23 01:46:27.503604] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b07530 is same with the state(5) to be set
00:25:14.703 [2024-07-23 01:46:27.503743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.703 [2024-07-23 01:46:27.503892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.703 [2024-07-23 01:46:27.503920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1af51c0 with addr=10.0.0.2, port=4420
00:25:14.703 [2024-07-23 01:46:27.503937] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af51c0 is same with the state(5) to be set
00:25:14.703 [2024-07-23 01:46:27.503952] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state
00:25:14.703 [2024-07-23 01:46:27.503966] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed
00:25:14.703 [2024-07-23 01:46:27.503982] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state.
00:25:14.703 [2024-07-23 01:46:27.504001] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:25:14.703 [2024-07-23 01:46:27.504016] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:25:14.703 [2024-07-23 01:46:27.504029] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:25:14.703 [2024-07-23 01:46:27.504045] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:14.703 [2024-07-23 01:46:27.504059] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:14.703 [2024-07-23 01:46:27.504072] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:14.703 [2024-07-23 01:46:27.504089] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:25:14.703 [2024-07-23 01:46:27.504103] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:25:14.703 [2024-07-23 01:46:27.504121] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:25:14.703 [2024-07-23 01:46:27.504138] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state
00:25:14.703 [2024-07-23 01:46:27.504152] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed
00:25:14.703 [2024-07-23 01:46:27.504165] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state.
00:25:14.703 [2024-07-23 01:46:27.504220] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:14.703 [2024-07-23 01:46:27.504241] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:14.703 [2024-07-23 01:46:27.504254] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:14.703 [2024-07-23 01:46:27.504265] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:14.703 [2024-07-23 01:46:27.504276] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
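Every `connect() failed, errno = 111` entry in this burst is ECONNREFUSED: the shutdown test has already torn down the TCP target, so each reconnect the bdev layer attempts is refused at the socket level before any NVMe-oF handshake can begin. The same errno can be reproduced with plain sockets; this is an illustrative Python sketch, not SPDK code, and it grabs and releases a local ephemeral port only to ensure nothing is listening there:

```python
import errno
import socket

# Bind to an ephemeral port, then close the listener so the port number
# is known but nothing is accepting connections on it.
probe = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
probe.bind(("127.0.0.1", 0))
port = probe.getsockname()[1]
probe.close()

# connect() now fails the same way the nvme_tcp initiator entries above do:
# ECONNREFUSED, which is errno 111 on Linux.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
err = client.connect_ex(("127.0.0.1", port))
client.close()

assert err == errno.ECONNREFUSED
```

The repeated pairs of these errors per qpair come from the reset path retrying the admin and I/O connections; each attempt is refused immediately rather than timing out, which is why the burst spans only a few milliseconds of wall time.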
00:25:14.703 [2024-07-23 01:46:27.504406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.703 [2024-07-23 01:46:27.504533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:14.703 [2024-07-23 01:46:27.504558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c3b6d0 with addr=10.0.0.2, port=4420
00:25:14.703 [2024-07-23 01:46:27.504574] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c3b6d0 is same with the state(5) to be set
00:25:14.703 [2024-07-23 01:46:27.504593] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1afcba0 (9): Bad file descriptor
00:25:14.703 [2024-07-23 01:46:27.504621] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b260a0 (9): Bad file descriptor
00:25:14.703 [2024-07-23 01:46:27.504642] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b07530 (9): Bad file descriptor
00:25:14.703 [2024-07-23 01:46:27.504667] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1af51c0 (9): Bad file descriptor
00:25:14.703 [2024-07-23 01:46:27.504726] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c3b6d0 (9): Bad file descriptor
00:25:14.703 [2024-07-23 01:46:27.504751] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:25:14.703 [2024-07-23 01:46:27.504765] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:25:14.703 [2024-07-23 01:46:27.504779] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:25:14.703 [2024-07-23 01:46:27.504796] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:25:14.703 [2024-07-23 01:46:27.504810] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:25:14.703 [2024-07-23 01:46:27.504823] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:25:14.703 [2024-07-23 01:46:27.504838] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:25:14.703 [2024-07-23 01:46:27.504852] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:25:14.703 [2024-07-23 01:46:27.504866] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:25:14.703 [2024-07-23 01:46:27.504881] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state
00:25:14.703 [2024-07-23 01:46:27.504896] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed
00:25:14.703 [2024-07-23 01:46:27.504909] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state.
00:25:14.703 [2024-07-23 01:46:27.504947] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:14.703 [2024-07-23 01:46:27.504971] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:14.703 [2024-07-23 01:46:27.504984] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:14.703 [2024-07-23 01:46:27.504997] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:14.703 [2024-07-23 01:46:27.505009] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:25:14.703 [2024-07-23 01:46:27.505022] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:25:14.703 [2024-07-23 01:46:27.505035] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:25:14.703 [2024-07-23 01:46:27.505073] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:14.960 01:46:27 -- target/shutdown.sh@135 -- # nvmfpid= 00:25:14.960 01:46:27 -- target/shutdown.sh@138 -- # sleep 1 00:25:15.899 01:46:28 -- target/shutdown.sh@141 -- # kill -9 3852989 00:25:15.899 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 141: kill: (3852989) - No such process 00:25:15.899 01:46:28 -- target/shutdown.sh@141 -- # true 00:25:15.899 01:46:28 -- target/shutdown.sh@143 -- # stoptarget 00:25:15.899 01:46:28 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:25:15.899 01:46:28 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:25:15.899 01:46:28 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:15.899 01:46:28 -- target/shutdown.sh@45 -- # nvmftestfini 00:25:15.899 01:46:28 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:15.899 01:46:28 -- nvmf/common.sh@116 -- # sync 00:25:15.899 01:46:28 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:15.899 01:46:28 -- nvmf/common.sh@119 -- # set +e 00:25:15.899 01:46:28 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:15.899 01:46:28 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:15.899 rmmod nvme_tcp 00:25:15.899 rmmod nvme_fabrics 00:25:15.899 rmmod nvme_keyring 00:25:15.899 01:46:28 -- nvmf/common.sh@122 -- # modprobe -v -r 
nvme-fabrics 00:25:15.899 01:46:28 -- nvmf/common.sh@123 -- # set -e 00:25:15.899 01:46:28 -- nvmf/common.sh@124 -- # return 0 00:25:15.899 01:46:28 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:25:15.899 01:46:28 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:15.899 01:46:28 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:15.899 01:46:28 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:15.899 01:46:28 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:15.899 01:46:28 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:15.899 01:46:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:15.899 01:46:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:15.899 01:46:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:18.492 01:46:31 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:18.492 00:25:18.492 real 0m7.977s 00:25:18.492 user 0m20.587s 00:25:18.492 sys 0m1.535s 00:25:18.492 01:46:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:18.492 01:46:31 -- common/autotest_common.sh@10 -- # set +x 00:25:18.492 ************************************ 00:25:18.492 END TEST nvmf_shutdown_tc3 00:25:18.492 ************************************ 00:25:18.492 01:46:31 -- target/shutdown.sh@150 -- # trap - SIGINT SIGTERM EXIT 00:25:18.492 00:25:18.492 real 0m28.259s 00:25:18.492 user 1m21.096s 00:25:18.492 sys 0m6.427s 00:25:18.492 01:46:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:18.492 01:46:31 -- common/autotest_common.sh@10 -- # set +x 00:25:18.492 ************************************ 00:25:18.492 END TEST nvmf_shutdown 00:25:18.492 ************************************ 00:25:18.492 01:46:31 -- nvmf/nvmf.sh@86 -- # timing_exit target 00:25:18.492 01:46:31 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:18.492 01:46:31 -- common/autotest_common.sh@10 -- # set +x 00:25:18.492 01:46:31 -- nvmf/nvmf.sh@88 -- # timing_enter host 00:25:18.492 
01:46:31 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:18.492 01:46:31 -- common/autotest_common.sh@10 -- # set +x 00:25:18.492 01:46:31 -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:25:18.492 01:46:31 -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:25:18.492 01:46:31 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:18.492 01:46:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:18.492 01:46:31 -- common/autotest_common.sh@10 -- # set +x 00:25:18.492 ************************************ 00:25:18.492 START TEST nvmf_multicontroller 00:25:18.492 ************************************ 00:25:18.492 01:46:31 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:25:18.492 * Looking for test storage... 00:25:18.492 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:18.492 01:46:31 -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:18.492 01:46:31 -- nvmf/common.sh@7 -- # uname -s 00:25:18.492 01:46:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:18.492 01:46:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:18.492 01:46:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:18.492 01:46:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:18.492 01:46:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:18.492 01:46:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:18.492 01:46:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:18.492 01:46:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:18.492 01:46:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:18.492 01:46:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:18.492 01:46:31 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:18.492 01:46:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:18.492 01:46:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:18.492 01:46:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:18.492 01:46:31 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:18.492 01:46:31 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:18.492 01:46:31 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:18.492 01:46:31 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:18.492 01:46:31 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:18.492 01:46:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:18.492 01:46:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:18.492 01:46:31 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:18.492 01:46:31 -- paths/export.sh@5 -- # export PATH 00:25:18.492 01:46:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:18.492 01:46:31 -- nvmf/common.sh@46 -- # : 0 00:25:18.492 01:46:31 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:18.492 01:46:31 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:18.492 01:46:31 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:18.492 01:46:31 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:18.492 01:46:31 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:18.492 01:46:31 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:18.492 01:46:31 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:18.492 01:46:31 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:18.492 01:46:31 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:18.492 01:46:31 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:18.492 01:46:31 -- 
host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:25:18.492 01:46:31 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:25:18.492 01:46:31 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:18.492 01:46:31 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:25:18.492 01:46:31 -- host/multicontroller.sh@23 -- # nvmftestinit 00:25:18.492 01:46:31 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:18.492 01:46:31 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:18.492 01:46:31 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:18.492 01:46:31 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:18.492 01:46:31 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:18.492 01:46:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:18.492 01:46:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:18.492 01:46:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:18.492 01:46:31 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:18.492 01:46:31 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:18.492 01:46:31 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:18.492 01:46:31 -- common/autotest_common.sh@10 -- # set +x 00:25:20.406 01:46:33 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:20.406 01:46:33 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:20.406 01:46:33 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:20.406 01:46:33 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:20.406 01:46:33 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:20.406 01:46:33 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:20.406 01:46:33 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:20.406 01:46:33 -- nvmf/common.sh@294 -- # net_devs=() 00:25:20.406 01:46:33 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:20.406 01:46:33 -- nvmf/common.sh@295 -- # e810=() 00:25:20.406 01:46:33 -- nvmf/common.sh@295 -- # local 
-ga e810 00:25:20.406 01:46:33 -- nvmf/common.sh@296 -- # x722=() 00:25:20.406 01:46:33 -- nvmf/common.sh@296 -- # local -ga x722 00:25:20.406 01:46:33 -- nvmf/common.sh@297 -- # mlx=() 00:25:20.406 01:46:33 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:20.406 01:46:33 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:20.406 01:46:33 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:20.406 01:46:33 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:20.406 01:46:33 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:20.406 01:46:33 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:20.406 01:46:33 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:20.406 01:46:33 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:20.406 01:46:33 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:20.406 01:46:33 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:20.406 01:46:33 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:20.406 01:46:33 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:20.406 01:46:33 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:20.406 01:46:33 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:20.406 01:46:33 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:20.406 01:46:33 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:20.406 01:46:33 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:20.406 01:46:33 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:20.406 01:46:33 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:20.406 01:46:33 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:20.406 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:20.406 01:46:33 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:20.406 01:46:33 -- 
nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:20.406 01:46:33 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:20.406 01:46:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:20.406 01:46:33 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:20.406 01:46:33 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:20.406 01:46:33 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:20.406 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:20.406 01:46:33 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:20.407 01:46:33 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:20.407 01:46:33 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:20.407 01:46:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:20.407 01:46:33 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:20.407 01:46:33 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:20.407 01:46:33 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:20.407 01:46:33 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:20.407 01:46:33 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:20.407 01:46:33 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:20.407 01:46:33 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:20.407 01:46:33 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:20.407 01:46:33 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:20.407 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:20.407 01:46:33 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:20.407 01:46:33 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:20.407 01:46:33 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:20.407 01:46:33 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:20.407 01:46:33 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:20.407 01:46:33 -- nvmf/common.sh@388 -- # echo 
'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:20.407 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:20.407 01:46:33 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:20.407 01:46:33 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:20.407 01:46:33 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:20.407 01:46:33 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:20.407 01:46:33 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:20.407 01:46:33 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:20.407 01:46:33 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:20.407 01:46:33 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:20.407 01:46:33 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:20.407 01:46:33 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:20.407 01:46:33 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:20.407 01:46:33 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:20.407 01:46:33 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:20.407 01:46:33 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:20.407 01:46:33 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:20.407 01:46:33 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:20.407 01:46:33 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:20.407 01:46:33 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:20.407 01:46:33 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:20.407 01:46:33 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:20.407 01:46:33 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:20.407 01:46:33 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:20.407 01:46:33 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:20.407 01:46:33 -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:25:20.407 01:46:33 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:20.407 01:46:33 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:20.407 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:20.407 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:25:20.407 00:25:20.407 --- 10.0.0.2 ping statistics --- 00:25:20.407 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:20.407 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:25:20.407 01:46:33 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:20.407 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:20.407 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.097 ms 00:25:20.407 00:25:20.407 --- 10.0.0.1 ping statistics --- 00:25:20.407 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:20.407 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:25:20.407 01:46:33 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:20.407 01:46:33 -- nvmf/common.sh@410 -- # return 0 00:25:20.407 01:46:33 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:20.407 01:46:33 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:20.407 01:46:33 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:20.407 01:46:33 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:20.407 01:46:33 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:20.407 01:46:33 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:20.407 01:46:33 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:20.407 01:46:33 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:25:20.407 01:46:33 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:20.407 01:46:33 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:20.407 01:46:33 -- common/autotest_common.sh@10 -- # set +x 00:25:20.407 01:46:33 -- nvmf/common.sh@469 -- # nvmfpid=3855535 00:25:20.407 
01:46:33 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:20.407 01:46:33 -- nvmf/common.sh@470 -- # waitforlisten 3855535 00:25:20.407 01:46:33 -- common/autotest_common.sh@819 -- # '[' -z 3855535 ']' 00:25:20.407 01:46:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:20.407 01:46:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:20.407 01:46:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:20.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:20.407 01:46:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:20.407 01:46:33 -- common/autotest_common.sh@10 -- # set +x 00:25:20.407 [2024-07-23 01:46:33.283507] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:25:20.407 [2024-07-23 01:46:33.283590] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:20.407 EAL: No free 2048 kB hugepages reported on node 1 00:25:20.407 [2024-07-23 01:46:33.354056] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:20.407 [2024-07-23 01:46:33.443915] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:20.407 [2024-07-23 01:46:33.444077] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:20.407 [2024-07-23 01:46:33.444093] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:20.407 [2024-07-23 01:46:33.444105] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
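The nvmf_tcp_init steps traced above can be collected into a standalone sketch. This is a rough reconstruction from the log, not the harness script itself: the interface names cvl_0_0/cvl_0_1, the namespace name, the 10.0.0.0/24 addresses, and port 4420 are copied from the trace, and the commands require root on a host with those two connected NICs.

```shell
#!/usr/bin/env bash
# Rebuild the target-in-netns topology the harness sets up above.
# Names, addresses, and the port are taken verbatim from the trace.
set -euo pipefail

NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"            # target NIC moves into the netns
ip addr add 10.0.0.1/24 dev cvl_0_1        # initiator side stays in root ns
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                         # root ns -> target netns
ip netns exec "$NS" ping -c 1 10.0.0.1     # target netns -> root ns
```

With this topology in place, the target (`nvmf_tgt`) runs inside the namespace via `ip netns exec`, while the initiator-side RPCs from the root namespace reach it over TCP port 4420, which is what the two ping checks in the log verify.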
00:25:20.407 [2024-07-23 01:46:33.444165] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:20.407 [2024-07-23 01:46:33.444222] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:20.407 [2024-07-23 01:46:33.444225] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:21.344 01:46:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:21.344 01:46:34 -- common/autotest_common.sh@852 -- # return 0 00:25:21.344 01:46:34 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:21.344 01:46:34 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:21.344 01:46:34 -- common/autotest_common.sh@10 -- # set +x 00:25:21.344 01:46:34 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:21.344 01:46:34 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:21.344 01:46:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:21.344 01:46:34 -- common/autotest_common.sh@10 -- # set +x 00:25:21.344 [2024-07-23 01:46:34.327802] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:21.344 01:46:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:21.344 01:46:34 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:21.344 01:46:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:21.344 01:46:34 -- common/autotest_common.sh@10 -- # set +x 00:25:21.344 Malloc0 00:25:21.345 01:46:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:21.345 01:46:34 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:21.345 01:46:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:21.345 01:46:34 -- common/autotest_common.sh@10 -- # set +x 00:25:21.345 01:46:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:21.345 01:46:34 -- host/multicontroller.sh@31 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:21.345 01:46:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:21.345 01:46:34 -- common/autotest_common.sh@10 -- # set +x 00:25:21.345 01:46:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:21.345 01:46:34 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:21.345 01:46:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:21.345 01:46:34 -- common/autotest_common.sh@10 -- # set +x 00:25:21.345 [2024-07-23 01:46:34.398715] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:21.345 01:46:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:21.345 01:46:34 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:21.345 01:46:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:21.345 01:46:34 -- common/autotest_common.sh@10 -- # set +x 00:25:21.345 [2024-07-23 01:46:34.406576] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:21.345 01:46:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:21.345 01:46:34 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:21.345 01:46:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:21.345 01:46:34 -- common/autotest_common.sh@10 -- # set +x 00:25:21.345 Malloc1 00:25:21.345 01:46:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:21.345 01:46:34 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:25:21.345 01:46:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:21.345 01:46:34 -- common/autotest_common.sh@10 -- # set +x 00:25:21.345 01:46:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:21.345 01:46:34 -- 
host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:25:21.345 01:46:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:21.345 01:46:34 -- common/autotest_common.sh@10 -- # set +x 00:25:21.603 01:46:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:21.603 01:46:34 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:21.603 01:46:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:21.603 01:46:34 -- common/autotest_common.sh@10 -- # set +x 00:25:21.603 01:46:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:21.603 01:46:34 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:25:21.603 01:46:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:21.603 01:46:34 -- common/autotest_common.sh@10 -- # set +x 00:25:21.603 01:46:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:21.603 01:46:34 -- host/multicontroller.sh@44 -- # bdevperf_pid=3855689 00:25:21.603 01:46:34 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:21.603 01:46:34 -- host/multicontroller.sh@47 -- # waitforlisten 3855689 /var/tmp/bdevperf.sock 00:25:21.603 01:46:34 -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:25:21.603 01:46:34 -- common/autotest_common.sh@819 -- # '[' -z 3855689 ']' 00:25:21.603 01:46:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:21.603 01:46:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:21.603 01:46:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:25:21.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:21.603 01:46:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:21.603 01:46:34 -- common/autotest_common.sh@10 -- # set +x 00:25:22.540 01:46:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:22.540 01:46:35 -- common/autotest_common.sh@852 -- # return 0 00:25:22.540 01:46:35 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:25:22.540 01:46:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:22.540 01:46:35 -- common/autotest_common.sh@10 -- # set +x 00:25:22.798 NVMe0n1 00:25:22.798 01:46:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:22.798 01:46:35 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:22.798 01:46:35 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:25:22.798 01:46:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:22.798 01:46:35 -- common/autotest_common.sh@10 -- # set +x 00:25:22.798 01:46:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:22.798 1 00:25:22.798 01:46:35 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:25:22.798 01:46:35 -- common/autotest_common.sh@640 -- # local es=0 00:25:22.798 01:46:35 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:25:22.798 01:46:35 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:25:22.798 01:46:35 -- common/autotest_common.sh@632 -- # 
case "$(type -t "$arg")" in 00:25:22.798 01:46:35 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:25:22.798 01:46:35 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:22.798 01:46:35 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:25:22.799 01:46:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:22.799 01:46:35 -- common/autotest_common.sh@10 -- # set +x 00:25:22.799 request: 00:25:22.799 { 00:25:22.799 "name": "NVMe0", 00:25:22.799 "trtype": "tcp", 00:25:22.799 "traddr": "10.0.0.2", 00:25:22.799 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:25:22.799 "hostaddr": "10.0.0.2", 00:25:22.799 "hostsvcid": "60000", 00:25:22.799 "adrfam": "ipv4", 00:25:22.799 "trsvcid": "4420", 00:25:22.799 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:22.799 "method": "bdev_nvme_attach_controller", 00:25:22.799 "req_id": 1 00:25:22.799 } 00:25:22.799 Got JSON-RPC error response 00:25:22.799 response: 00:25:22.799 { 00:25:22.799 "code": -114, 00:25:22.799 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:25:22.799 } 00:25:22.799 01:46:35 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:25:22.799 01:46:35 -- common/autotest_common.sh@643 -- # es=1 00:25:22.799 01:46:35 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:22.799 01:46:35 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:25:22.799 01:46:35 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:22.799 01:46:35 -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:25:22.799 01:46:35 -- common/autotest_common.sh@640 -- # local es=0 00:25:22.799 01:46:35 -- common/autotest_common.sh@642 -- # valid_exec_arg 
rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:25:22.799 01:46:35 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:25:22.799 01:46:35 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:22.799 01:46:35 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:25:22.799 01:46:35 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:22.799 01:46:35 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:25:22.799 01:46:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:22.799 01:46:35 -- common/autotest_common.sh@10 -- # set +x 00:25:22.799 request: 00:25:22.799 { 00:25:22.799 "name": "NVMe0", 00:25:22.799 "trtype": "tcp", 00:25:22.799 "traddr": "10.0.0.2", 00:25:22.799 "hostaddr": "10.0.0.2", 00:25:22.799 "hostsvcid": "60000", 00:25:22.799 "adrfam": "ipv4", 00:25:22.799 "trsvcid": "4420", 00:25:22.799 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:22.799 "method": "bdev_nvme_attach_controller", 00:25:22.799 "req_id": 1 00:25:22.799 } 00:25:22.799 Got JSON-RPC error response 00:25:22.799 response: 00:25:22.799 { 00:25:22.799 "code": -114, 00:25:22.799 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:25:22.799 } 00:25:22.799 01:46:35 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:25:22.799 01:46:35 -- common/autotest_common.sh@643 -- # es=1 00:25:22.799 01:46:35 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:22.799 01:46:35 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:25:22.799 01:46:35 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:22.799 01:46:35 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 
-f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:25:22.799 01:46:35 -- common/autotest_common.sh@640 -- # local es=0 00:25:22.799 01:46:35 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:25:22.799 01:46:35 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:25:22.799 01:46:35 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:22.799 01:46:35 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:25:22.799 01:46:35 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:22.799 01:46:35 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:25:22.799 01:46:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:22.799 01:46:35 -- common/autotest_common.sh@10 -- # set +x 00:25:22.799 request: 00:25:22.799 { 00:25:22.799 "name": "NVMe0", 00:25:22.799 "trtype": "tcp", 00:25:22.799 "traddr": "10.0.0.2", 00:25:22.799 "hostaddr": "10.0.0.2", 00:25:22.799 "hostsvcid": "60000", 00:25:22.799 "adrfam": "ipv4", 00:25:22.799 "trsvcid": "4420", 00:25:22.799 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:22.799 "multipath": "disable", 00:25:22.799 "method": "bdev_nvme_attach_controller", 00:25:22.799 "req_id": 1 00:25:22.799 } 00:25:22.799 Got JSON-RPC error response 00:25:22.799 response: 00:25:22.799 { 00:25:22.799 "code": -114, 00:25:22.799 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:25:22.799 } 00:25:22.799 01:46:35 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:25:22.799 01:46:35 -- common/autotest_common.sh@643 -- # es=1 00:25:22.799 01:46:35 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:22.799 01:46:35 -- 
common/autotest_common.sh@662 -- # [[ -n '' ]] 00:25:22.799 01:46:35 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:22.799 01:46:35 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:25:22.799 01:46:35 -- common/autotest_common.sh@640 -- # local es=0 00:25:22.799 01:46:35 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:25:22.799 01:46:35 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:25:22.799 01:46:35 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:22.799 01:46:35 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:25:22.799 01:46:35 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:22.799 01:46:35 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:25:22.799 01:46:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:22.799 01:46:35 -- common/autotest_common.sh@10 -- # set +x 00:25:22.799 request: 00:25:22.799 { 00:25:22.799 "name": "NVMe0", 00:25:22.799 "trtype": "tcp", 00:25:22.799 "traddr": "10.0.0.2", 00:25:22.799 "hostaddr": "10.0.0.2", 00:25:22.799 "hostsvcid": "60000", 00:25:22.799 "adrfam": "ipv4", 00:25:22.799 "trsvcid": "4420", 00:25:22.799 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:22.799 "multipath": "failover", 00:25:22.799 "method": "bdev_nvme_attach_controller", 00:25:22.799 "req_id": 1 00:25:22.799 } 00:25:22.799 Got JSON-RPC error response 00:25:22.799 response: 00:25:22.799 { 00:25:22.799 "code": -114, 00:25:22.799 "message": "A controller named NVMe0 already exists with the 
specified network path\n" 00:25:22.799 } 00:25:22.799 01:46:35 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:25:22.799 01:46:35 -- common/autotest_common.sh@643 -- # es=1 00:25:22.799 01:46:35 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:22.799 01:46:35 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:25:22.799 01:46:35 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:22.799 01:46:35 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:22.799 01:46:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:22.799 01:46:35 -- common/autotest_common.sh@10 -- # set +x 00:25:22.799 00:25:22.799 01:46:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:22.799 01:46:35 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:22.799 01:46:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:22.799 01:46:35 -- common/autotest_common.sh@10 -- # set +x 00:25:22.799 01:46:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:22.799 01:46:35 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:25:22.799 01:46:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:22.799 01:46:35 -- common/autotest_common.sh@10 -- # set +x 00:25:23.058 00:25:23.058 01:46:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:23.058 01:46:36 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:23.058 01:46:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:23.058 01:46:36 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:25:23.058 01:46:36 -- common/autotest_common.sh@10 -- # set +x 
00:25:23.058 01:46:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:23.058 01:46:36 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:25:23.058 01:46:36 -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:24.433 0 00:25:24.433 01:46:37 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:25:24.433 01:46:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:24.433 01:46:37 -- common/autotest_common.sh@10 -- # set +x 00:25:24.433 01:46:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:24.433 01:46:37 -- host/multicontroller.sh@100 -- # killprocess 3855689 00:25:24.433 01:46:37 -- common/autotest_common.sh@926 -- # '[' -z 3855689 ']' 00:25:24.433 01:46:37 -- common/autotest_common.sh@930 -- # kill -0 3855689 00:25:24.433 01:46:37 -- common/autotest_common.sh@931 -- # uname 00:25:24.433 01:46:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:24.433 01:46:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3855689 00:25:24.433 01:46:37 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:24.433 01:46:37 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:24.433 01:46:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3855689' 00:25:24.433 killing process with pid 3855689 00:25:24.433 01:46:37 -- common/autotest_common.sh@945 -- # kill 3855689 00:25:24.433 01:46:37 -- common/autotest_common.sh@950 -- # wait 3855689 00:25:24.433 01:46:37 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:24.433 01:46:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:24.433 01:46:37 -- common/autotest_common.sh@10 -- # set +x 00:25:24.433 01:46:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:24.433 01:46:37 -- host/multicontroller.sh@103 
-- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:24.433 01:46:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:24.433 01:46:37 -- common/autotest_common.sh@10 -- # set +x 00:25:24.433 01:46:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:24.433 01:46:37 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:25:24.433 01:46:37 -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:24.433 01:46:37 -- common/autotest_common.sh@1597 -- # read -r file 00:25:24.433 01:46:37 -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:25:24.433 01:46:37 -- common/autotest_common.sh@1596 -- # sort -u 00:25:24.433 01:46:37 -- common/autotest_common.sh@1598 -- # cat 00:25:24.433 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:25:24.433 [2024-07-23 01:46:34.509771] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:25:24.433 [2024-07-23 01:46:34.509858] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3855689 ] 00:25:24.433 EAL: No free 2048 kB hugepages reported on node 1 00:25:24.433 [2024-07-23 01:46:34.571168] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:24.433 [2024-07-23 01:46:34.656975] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:24.433 [2024-07-23 01:46:36.060190] bdev.c:4553:bdev_name_add: *ERROR*: Bdev name ad81851d-d335-4bc7-a749-b3ddf0cd99f5 already exists 00:25:24.433 [2024-07-23 01:46:36.060232] bdev.c:7603:bdev_register: *ERROR*: Unable to add uuid:ad81851d-d335-4bc7-a749-b3ddf0cd99f5 alias for bdev NVMe1n1 00:25:24.433 [2024-07-23 01:46:36.060250] bdev_nvme.c:4236:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:25:24.433 Running I/O for 1 seconds... 00:25:24.433 00:25:24.433 Latency(us) 00:25:24.433 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:24.434 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:25:24.434 NVMe0n1 : 1.01 16205.11 63.30 0.00 0.00 7865.75 7136.14 16990.81 00:25:24.434 =================================================================================================================== 00:25:24.434 Total : 16205.11 63.30 0.00 0.00 7865.75 7136.14 16990.81 00:25:24.434 Received shutdown signal, test time was about 1.000000 seconds 00:25:24.434 00:25:24.434 Latency(us) 00:25:24.434 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:24.434 =================================================================================================================== 00:25:24.434 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:24.434 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:25:24.434 01:46:37 -- 
common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:24.434 01:46:37 -- common/autotest_common.sh@1597 -- # read -r file 00:25:24.434 01:46:37 -- host/multicontroller.sh@108 -- # nvmftestfini 00:25:24.434 01:46:37 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:24.434 01:46:37 -- nvmf/common.sh@116 -- # sync 00:25:24.694 01:46:37 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:24.694 01:46:37 -- nvmf/common.sh@119 -- # set +e 00:25:24.694 01:46:37 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:24.694 01:46:37 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:24.694 rmmod nvme_tcp 00:25:24.694 rmmod nvme_fabrics 00:25:24.694 rmmod nvme_keyring 00:25:24.694 01:46:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:24.694 01:46:37 -- nvmf/common.sh@123 -- # set -e 00:25:24.694 01:46:37 -- nvmf/common.sh@124 -- # return 0 00:25:24.694 01:46:37 -- nvmf/common.sh@477 -- # '[' -n 3855535 ']' 00:25:24.694 01:46:37 -- nvmf/common.sh@478 -- # killprocess 3855535 00:25:24.694 01:46:37 -- common/autotest_common.sh@926 -- # '[' -z 3855535 ']' 00:25:24.694 01:46:37 -- common/autotest_common.sh@930 -- # kill -0 3855535 00:25:24.694 01:46:37 -- common/autotest_common.sh@931 -- # uname 00:25:24.694 01:46:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:24.694 01:46:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3855535 00:25:24.694 01:46:37 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:25:24.694 01:46:37 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:25:24.694 01:46:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3855535' 00:25:24.694 killing process with pid 3855535 00:25:24.694 01:46:37 -- common/autotest_common.sh@945 -- # kill 3855535 00:25:24.694 01:46:37 -- common/autotest_common.sh@950 -- # wait 3855535 00:25:24.954 01:46:37 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:24.954 
01:46:37 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:24.954 01:46:37 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:24.954 01:46:37 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:24.954 01:46:37 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:24.954 01:46:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:24.954 01:46:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:24.954 01:46:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:26.860 01:46:39 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:26.860 00:25:26.860 real 0m8.854s 00:25:26.860 user 0m17.443s 00:25:26.860 sys 0m2.306s 00:25:27.118 01:46:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:27.118 01:46:39 -- common/autotest_common.sh@10 -- # set +x 00:25:27.118 ************************************ 00:25:27.118 END TEST nvmf_multicontroller 00:25:27.118 ************************************ 00:25:27.118 01:46:39 -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:25:27.118 01:46:39 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:27.118 01:46:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:27.118 01:46:39 -- common/autotest_common.sh@10 -- # set +x 00:25:27.118 ************************************ 00:25:27.118 START TEST nvmf_aer 00:25:27.118 ************************************ 00:25:27.118 01:46:39 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:25:27.118 * Looking for test storage... 
00:25:27.118 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:27.118 01:46:40 -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:27.118 01:46:40 -- nvmf/common.sh@7 -- # uname -s 00:25:27.118 01:46:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:27.118 01:46:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:27.118 01:46:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:27.118 01:46:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:27.118 01:46:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:27.118 01:46:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:27.118 01:46:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:27.118 01:46:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:27.118 01:46:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:27.118 01:46:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:27.118 01:46:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:27.118 01:46:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:27.118 01:46:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:27.118 01:46:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:27.118 01:46:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:27.118 01:46:40 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:27.118 01:46:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:27.118 01:46:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:27.118 01:46:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:27.118 01:46:40 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:27.118 01:46:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:27.118 01:46:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:27.118 01:46:40 -- paths/export.sh@5 -- # export PATH 00:25:27.118 01:46:40 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:27.119 01:46:40 -- nvmf/common.sh@46 -- # : 0 00:25:27.119 01:46:40 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:27.119 01:46:40 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:27.119 01:46:40 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:27.119 01:46:40 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:27.119 01:46:40 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:27.119 01:46:40 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:27.119 01:46:40 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:27.119 01:46:40 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:27.119 01:46:40 -- host/aer.sh@11 -- # nvmftestinit 00:25:27.119 01:46:40 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:27.119 01:46:40 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:27.119 01:46:40 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:27.119 01:46:40 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:27.119 01:46:40 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:27.119 01:46:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:27.119 01:46:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:27.119 01:46:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:27.119 01:46:40 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:27.119 01:46:40 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:27.119 01:46:40 -- 
nvmf/common.sh@284 -- # xtrace_disable 00:25:27.119 01:46:40 -- common/autotest_common.sh@10 -- # set +x 00:25:29.024 01:46:41 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:29.024 01:46:41 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:29.024 01:46:41 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:29.024 01:46:41 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:29.024 01:46:41 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:29.024 01:46:41 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:29.024 01:46:41 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:29.024 01:46:41 -- nvmf/common.sh@294 -- # net_devs=() 00:25:29.024 01:46:41 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:29.024 01:46:41 -- nvmf/common.sh@295 -- # e810=() 00:25:29.024 01:46:41 -- nvmf/common.sh@295 -- # local -ga e810 00:25:29.024 01:46:41 -- nvmf/common.sh@296 -- # x722=() 00:25:29.024 01:46:41 -- nvmf/common.sh@296 -- # local -ga x722 00:25:29.024 01:46:41 -- nvmf/common.sh@297 -- # mlx=() 00:25:29.024 01:46:41 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:29.024 01:46:41 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:29.024 01:46:41 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:29.024 01:46:41 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:29.024 01:46:41 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:29.024 01:46:41 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:29.024 01:46:41 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:29.024 01:46:41 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:29.024 01:46:41 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:29.024 01:46:41 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:29.024 01:46:41 -- nvmf/common.sh@316 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:29.024 01:46:41 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:29.024 01:46:41 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:29.024 01:46:41 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:29.024 01:46:41 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:29.024 01:46:41 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:29.024 01:46:41 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:29.024 01:46:41 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:29.024 01:46:41 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:29.024 01:46:41 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:29.024 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:29.024 01:46:41 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:29.024 01:46:41 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:29.024 01:46:41 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:29.024 01:46:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:29.024 01:46:41 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:29.024 01:46:41 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:29.024 01:46:41 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:29.024 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:29.024 01:46:41 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:29.024 01:46:41 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:29.024 01:46:41 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:29.024 01:46:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:29.024 01:46:41 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:29.024 01:46:41 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:29.024 01:46:41 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:29.024 01:46:41 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:29.024 01:46:41 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 
00:25:29.024 01:46:41 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:29.024 01:46:41 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:29.024 01:46:41 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:29.024 01:46:41 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:29.024 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:29.024 01:46:41 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:29.024 01:46:41 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:29.024 01:46:41 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:29.024 01:46:41 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:29.024 01:46:41 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:29.024 01:46:41 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:29.024 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:29.024 01:46:41 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:29.024 01:46:41 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:29.024 01:46:41 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:29.024 01:46:41 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:29.024 01:46:41 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:29.024 01:46:41 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:29.024 01:46:41 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:29.024 01:46:41 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:29.024 01:46:41 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:29.024 01:46:41 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:29.024 01:46:41 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:29.024 01:46:41 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:29.024 01:46:41 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:29.024 01:46:41 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
00:25:29.024 01:46:41 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:29.024 01:46:41 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:29.024 01:46:41 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:29.024 01:46:41 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:29.024 01:46:41 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:29.024 01:46:41 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:29.024 01:46:41 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:29.024 01:46:41 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:29.024 01:46:41 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:29.024 01:46:42 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:29.024 01:46:42 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:29.024 01:46:42 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:29.024 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:29.024 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:25:29.024 00:25:29.024 --- 10.0.0.2 ping statistics --- 00:25:29.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:29.024 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:25:29.024 01:46:42 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:29.024 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:29.024 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:25:29.024 00:25:29.024 --- 10.0.0.1 ping statistics --- 00:25:29.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:29.024 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:25:29.024 01:46:42 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:29.024 01:46:42 -- nvmf/common.sh@410 -- # return 0 00:25:29.024 01:46:42 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:29.024 01:46:42 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:29.024 01:46:42 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:29.024 01:46:42 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:29.024 01:46:42 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:29.024 01:46:42 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:29.024 01:46:42 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:29.024 01:46:42 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:25:29.024 01:46:42 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:29.024 01:46:42 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:29.024 01:46:42 -- common/autotest_common.sh@10 -- # set +x 00:25:29.024 01:46:42 -- nvmf/common.sh@469 -- # nvmfpid=3857946 00:25:29.024 01:46:42 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:29.024 01:46:42 -- nvmf/common.sh@470 -- # waitforlisten 3857946 00:25:29.024 01:46:42 -- common/autotest_common.sh@819 -- # '[' -z 3857946 ']' 00:25:29.024 01:46:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:29.024 01:46:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:29.024 01:46:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:25:29.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:29.024 01:46:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:29.024 01:46:42 -- common/autotest_common.sh@10 -- # set +x 00:25:29.024 [2024-07-23 01:46:42.101023] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:25:29.024 [2024-07-23 01:46:42.101117] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:29.284 EAL: No free 2048 kB hugepages reported on node 1 00:25:29.284 [2024-07-23 01:46:42.175736] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:29.284 [2024-07-23 01:46:42.266447] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:29.284 [2024-07-23 01:46:42.266624] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:29.284 [2024-07-23 01:46:42.266645] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:29.284 [2024-07-23 01:46:42.266660] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:29.284 [2024-07-23 01:46:42.266733] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:29.284 [2024-07-23 01:46:42.266790] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:29.284 [2024-07-23 01:46:42.267030] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:29.284 [2024-07-23 01:46:42.267033] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:30.221 01:46:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:30.221 01:46:43 -- common/autotest_common.sh@852 -- # return 0 00:25:30.221 01:46:43 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:30.221 01:46:43 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:30.221 01:46:43 -- common/autotest_common.sh@10 -- # set +x 00:25:30.221 01:46:43 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:30.221 01:46:43 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:30.221 01:46:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:30.221 01:46:43 -- common/autotest_common.sh@10 -- # set +x 00:25:30.221 [2024-07-23 01:46:43.101233] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:30.221 01:46:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:30.221 01:46:43 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:25:30.221 01:46:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:30.221 01:46:43 -- common/autotest_common.sh@10 -- # set +x 00:25:30.221 Malloc0 00:25:30.221 01:46:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:30.221 01:46:43 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:25:30.221 01:46:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:30.221 01:46:43 -- common/autotest_common.sh@10 -- # set +x 00:25:30.221 01:46:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 
]] 00:25:30.221 01:46:43 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:30.221 01:46:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:30.221 01:46:43 -- common/autotest_common.sh@10 -- # set +x 00:25:30.221 01:46:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:30.221 01:46:43 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:30.221 01:46:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:30.221 01:46:43 -- common/autotest_common.sh@10 -- # set +x 00:25:30.221 [2024-07-23 01:46:43.155000] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:30.221 01:46:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:30.221 01:46:43 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:25:30.221 01:46:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:30.221 01:46:43 -- common/autotest_common.sh@10 -- # set +x 00:25:30.221 [2024-07-23 01:46:43.162707] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:25:30.221 [ 00:25:30.221 { 00:25:30.221 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:30.221 "subtype": "Discovery", 00:25:30.221 "listen_addresses": [], 00:25:30.221 "allow_any_host": true, 00:25:30.221 "hosts": [] 00:25:30.221 }, 00:25:30.221 { 00:25:30.221 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:30.221 "subtype": "NVMe", 00:25:30.221 "listen_addresses": [ 00:25:30.221 { 00:25:30.221 "transport": "TCP", 00:25:30.221 "trtype": "TCP", 00:25:30.221 "adrfam": "IPv4", 00:25:30.221 "traddr": "10.0.0.2", 00:25:30.221 "trsvcid": "4420" 00:25:30.221 } 00:25:30.221 ], 00:25:30.221 "allow_any_host": true, 00:25:30.221 "hosts": [], 00:25:30.221 "serial_number": "SPDK00000000000001", 00:25:30.221 "model_number": "SPDK bdev Controller", 
00:25:30.221 "max_namespaces": 2, 00:25:30.221 "min_cntlid": 1, 00:25:30.221 "max_cntlid": 65519, 00:25:30.221 "namespaces": [ 00:25:30.221 { 00:25:30.221 "nsid": 1, 00:25:30.221 "bdev_name": "Malloc0", 00:25:30.221 "name": "Malloc0", 00:25:30.221 "nguid": "EC2E851D39374437B569C38BFBF6D5B0", 00:25:30.221 "uuid": "ec2e851d-3937-4437-b569-c38bfbf6d5b0" 00:25:30.221 } 00:25:30.221 ] 00:25:30.221 } 00:25:30.221 ] 00:25:30.221 01:46:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:30.221 01:46:43 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:25:30.221 01:46:43 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:25:30.221 01:46:43 -- host/aer.sh@33 -- # aerpid=3858095 00:25:30.221 01:46:43 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:25:30.221 01:46:43 -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:25:30.221 01:46:43 -- common/autotest_common.sh@1244 -- # local i=0 00:25:30.221 01:46:43 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:30.221 01:46:43 -- common/autotest_common.sh@1246 -- # '[' 0 -lt 200 ']' 00:25:30.221 01:46:43 -- common/autotest_common.sh@1247 -- # i=1 00:25:30.221 01:46:43 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:25:30.221 EAL: No free 2048 kB hugepages reported on node 1 00:25:30.221 01:46:43 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:30.221 01:46:43 -- common/autotest_common.sh@1246 -- # '[' 1 -lt 200 ']' 00:25:30.221 01:46:43 -- common/autotest_common.sh@1247 -- # i=2 00:25:30.221 01:46:43 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:25:30.479 01:46:43 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:30.479 01:46:43 -- common/autotest_common.sh@1251 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:25:30.479 01:46:43 -- common/autotest_common.sh@1255 -- # return 0 00:25:30.479 01:46:43 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:25:30.479 01:46:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:30.479 01:46:43 -- common/autotest_common.sh@10 -- # set +x 00:25:30.479 Malloc1 00:25:30.479 01:46:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:30.479 01:46:43 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:25:30.479 01:46:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:30.479 01:46:43 -- common/autotest_common.sh@10 -- # set +x 00:25:30.479 01:46:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:30.479 01:46:43 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:25:30.479 01:46:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:30.479 01:46:43 -- common/autotest_common.sh@10 -- # set +x 00:25:30.479 Asynchronous Event Request test 00:25:30.479 Attaching to 10.0.0.2 00:25:30.479 Attached to 10.0.0.2 00:25:30.479 Registering asynchronous event callbacks... 00:25:30.479 Starting namespace attribute notice tests for all controllers... 00:25:30.479 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:25:30.479 aer_cb - Changed Namespace 00:25:30.479 Cleaning up... 
00:25:30.479 [ 00:25:30.479 { 00:25:30.479 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:30.479 "subtype": "Discovery", 00:25:30.479 "listen_addresses": [], 00:25:30.479 "allow_any_host": true, 00:25:30.479 "hosts": [] 00:25:30.479 }, 00:25:30.479 { 00:25:30.479 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:30.479 "subtype": "NVMe", 00:25:30.479 "listen_addresses": [ 00:25:30.479 { 00:25:30.479 "transport": "TCP", 00:25:30.479 "trtype": "TCP", 00:25:30.479 "adrfam": "IPv4", 00:25:30.479 "traddr": "10.0.0.2", 00:25:30.479 "trsvcid": "4420" 00:25:30.479 } 00:25:30.479 ], 00:25:30.479 "allow_any_host": true, 00:25:30.479 "hosts": [], 00:25:30.479 "serial_number": "SPDK00000000000001", 00:25:30.479 "model_number": "SPDK bdev Controller", 00:25:30.479 "max_namespaces": 2, 00:25:30.479 "min_cntlid": 1, 00:25:30.479 "max_cntlid": 65519, 00:25:30.479 "namespaces": [ 00:25:30.479 { 00:25:30.479 "nsid": 1, 00:25:30.479 "bdev_name": "Malloc0", 00:25:30.479 "name": "Malloc0", 00:25:30.479 "nguid": "EC2E851D39374437B569C38BFBF6D5B0", 00:25:30.479 "uuid": "ec2e851d-3937-4437-b569-c38bfbf6d5b0" 00:25:30.479 }, 00:25:30.479 { 00:25:30.479 "nsid": 2, 00:25:30.479 "bdev_name": "Malloc1", 00:25:30.479 "name": "Malloc1", 00:25:30.479 "nguid": "8AE1D8C7FC3749379F22F2C2FDCE6FDA", 00:25:30.479 "uuid": "8ae1d8c7-fc37-4937-9f22-f2c2fdce6fda" 00:25:30.479 } 00:25:30.479 ] 00:25:30.479 } 00:25:30.479 ] 00:25:30.479 01:46:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:30.479 01:46:43 -- host/aer.sh@43 -- # wait 3858095 00:25:30.479 01:46:43 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:25:30.479 01:46:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:30.479 01:46:43 -- common/autotest_common.sh@10 -- # set +x 00:25:30.479 01:46:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:30.479 01:46:43 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:25:30.479 01:46:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:30.479 
01:46:43 -- common/autotest_common.sh@10 -- # set +x 00:25:30.479 01:46:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:30.479 01:46:43 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:30.479 01:46:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:30.479 01:46:43 -- common/autotest_common.sh@10 -- # set +x 00:25:30.479 01:46:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:30.479 01:46:43 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:25:30.479 01:46:43 -- host/aer.sh@51 -- # nvmftestfini 00:25:30.479 01:46:43 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:30.479 01:46:43 -- nvmf/common.sh@116 -- # sync 00:25:30.479 01:46:43 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:30.479 01:46:43 -- nvmf/common.sh@119 -- # set +e 00:25:30.479 01:46:43 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:30.479 01:46:43 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:30.479 rmmod nvme_tcp 00:25:30.479 rmmod nvme_fabrics 00:25:30.479 rmmod nvme_keyring 00:25:30.479 01:46:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:30.479 01:46:43 -- nvmf/common.sh@123 -- # set -e 00:25:30.479 01:46:43 -- nvmf/common.sh@124 -- # return 0 00:25:30.479 01:46:43 -- nvmf/common.sh@477 -- # '[' -n 3857946 ']' 00:25:30.479 01:46:43 -- nvmf/common.sh@478 -- # killprocess 3857946 00:25:30.479 01:46:43 -- common/autotest_common.sh@926 -- # '[' -z 3857946 ']' 00:25:30.479 01:46:43 -- common/autotest_common.sh@930 -- # kill -0 3857946 00:25:30.479 01:46:43 -- common/autotest_common.sh@931 -- # uname 00:25:30.737 01:46:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:30.737 01:46:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3857946 00:25:30.737 01:46:43 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:30.737 01:46:43 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:30.737 01:46:43 -- common/autotest_common.sh@944 -- # echo 
'killing process with pid 3857946' 00:25:30.737 killing process with pid 3857946 00:25:30.737 01:46:43 -- common/autotest_common.sh@945 -- # kill 3857946 00:25:30.737 [2024-07-23 01:46:43.601318] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:25:30.737 01:46:43 -- common/autotest_common.sh@950 -- # wait 3857946 00:25:30.737 01:46:43 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:30.737 01:46:43 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:30.737 01:46:43 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:30.737 01:46:43 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:30.737 01:46:43 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:30.737 01:46:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:30.737 01:46:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:30.737 01:46:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:33.275 01:46:45 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:33.275 00:25:33.275 real 0m5.864s 00:25:33.275 user 0m6.902s 00:25:33.275 sys 0m1.902s 00:25:33.275 01:46:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:33.275 01:46:45 -- common/autotest_common.sh@10 -- # set +x 00:25:33.275 ************************************ 00:25:33.275 END TEST nvmf_aer 00:25:33.275 ************************************ 00:25:33.275 01:46:45 -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:25:33.275 01:46:45 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:33.275 01:46:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:33.275 01:46:45 -- common/autotest_common.sh@10 -- # set +x 00:25:33.275 ************************************ 00:25:33.275 START TEST nvmf_async_init 00:25:33.275 
************************************ 00:25:33.275 01:46:45 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:25:33.275 * Looking for test storage... 00:25:33.275 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:33.275 01:46:45 -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:33.275 01:46:45 -- nvmf/common.sh@7 -- # uname -s 00:25:33.275 01:46:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:33.275 01:46:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:33.275 01:46:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:33.275 01:46:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:33.275 01:46:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:33.275 01:46:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:33.275 01:46:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:33.275 01:46:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:33.275 01:46:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:33.275 01:46:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:33.275 01:46:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:33.275 01:46:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:33.275 01:46:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:33.275 01:46:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:33.275 01:46:45 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:33.275 01:46:45 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:33.275 01:46:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:33.275 01:46:45 -- scripts/common.sh@441 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:33.275 01:46:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:33.275 01:46:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:33.275 01:46:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:33.275 01:46:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:33.275 01:46:45 -- paths/export.sh@5 -- # export PATH 00:25:33.275 01:46:45 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:33.275 01:46:45 -- nvmf/common.sh@46 -- # : 0 00:25:33.275 01:46:45 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:33.275 01:46:45 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:33.275 01:46:45 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:33.275 01:46:45 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:33.275 01:46:45 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:33.275 01:46:45 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:33.275 01:46:45 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:33.275 01:46:45 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:33.275 01:46:45 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:25:33.275 01:46:45 -- host/async_init.sh@14 -- # null_block_size=512 00:25:33.275 01:46:45 -- host/async_init.sh@15 -- # null_bdev=null0 00:25:33.275 01:46:45 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:25:33.275 01:46:45 -- host/async_init.sh@20 -- # uuidgen 00:25:33.275 01:46:45 -- host/async_init.sh@20 -- # tr -d - 00:25:33.275 01:46:45 -- host/async_init.sh@20 -- # nguid=dea370295cce4015ad64ce54d3690386 00:25:33.276 01:46:45 -- host/async_init.sh@22 -- # nvmftestinit 00:25:33.276 01:46:45 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:33.276 01:46:45 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:33.276 01:46:45 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:33.276 01:46:45 -- nvmf/common.sh@398 -- # local -g is_hw=no 
00:25:33.276 01:46:45 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:33.276 01:46:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:33.276 01:46:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:33.276 01:46:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:33.276 01:46:45 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:33.276 01:46:45 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:33.276 01:46:45 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:33.276 01:46:45 -- common/autotest_common.sh@10 -- # set +x 00:25:35.181 01:46:47 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:35.181 01:46:47 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:35.181 01:46:47 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:35.181 01:46:47 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:35.181 01:46:47 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:35.181 01:46:47 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:35.181 01:46:47 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:35.182 01:46:47 -- nvmf/common.sh@294 -- # net_devs=() 00:25:35.182 01:46:47 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:35.182 01:46:47 -- nvmf/common.sh@295 -- # e810=() 00:25:35.182 01:46:47 -- nvmf/common.sh@295 -- # local -ga e810 00:25:35.182 01:46:47 -- nvmf/common.sh@296 -- # x722=() 00:25:35.182 01:46:47 -- nvmf/common.sh@296 -- # local -ga x722 00:25:35.182 01:46:47 -- nvmf/common.sh@297 -- # mlx=() 00:25:35.182 01:46:47 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:35.182 01:46:47 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:35.182 01:46:47 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:35.182 01:46:47 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:35.182 01:46:47 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:35.182 01:46:47 -- nvmf/common.sh@307 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:35.182 01:46:47 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:35.182 01:46:47 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:35.182 01:46:47 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:35.182 01:46:47 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:35.182 01:46:47 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:35.182 01:46:47 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:35.182 01:46:47 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:35.182 01:46:47 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:35.182 01:46:47 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:35.182 01:46:47 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:35.182 01:46:47 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:35.182 01:46:47 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:35.182 01:46:47 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:35.182 01:46:47 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:35.182 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:35.182 01:46:47 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:35.182 01:46:47 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:35.182 01:46:47 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:35.182 01:46:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:35.182 01:46:47 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:35.182 01:46:47 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:35.182 01:46:47 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:35.182 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:35.182 01:46:47 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:35.182 01:46:47 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:35.182 01:46:47 -- 
nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:35.182 01:46:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:35.182 01:46:47 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:35.182 01:46:47 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:35.182 01:46:47 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:35.182 01:46:47 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:35.182 01:46:47 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:35.182 01:46:47 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:35.182 01:46:47 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:35.182 01:46:47 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:35.182 01:46:47 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:35.182 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:35.182 01:46:47 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:35.182 01:46:47 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:35.182 01:46:47 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:35.182 01:46:47 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:35.182 01:46:47 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:35.182 01:46:47 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:35.182 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:35.182 01:46:47 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:35.182 01:46:47 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:35.182 01:46:47 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:35.182 01:46:47 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:35.182 01:46:47 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:35.182 01:46:47 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:35.182 01:46:47 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:35.182 01:46:47 -- nvmf/common.sh@229 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:35.182 01:46:47 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:35.182 01:46:47 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:35.182 01:46:47 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:35.182 01:46:47 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:35.182 01:46:47 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:35.182 01:46:47 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:35.182 01:46:47 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:35.182 01:46:47 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:35.182 01:46:47 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:35.182 01:46:47 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:35.182 01:46:47 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:35.182 01:46:47 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:35.182 01:46:47 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:35.182 01:46:47 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:35.182 01:46:47 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:35.182 01:46:47 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:35.182 01:46:47 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:35.182 01:46:47 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:35.182 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:35.182 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:25:35.182 00:25:35.182 --- 10.0.0.2 ping statistics --- 00:25:35.182 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:35.182 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:25:35.182 01:46:47 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:35.182 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:35.182 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:25:35.182 00:25:35.182 --- 10.0.0.1 ping statistics --- 00:25:35.182 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:35.182 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:25:35.182 01:46:47 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:35.182 01:46:47 -- nvmf/common.sh@410 -- # return 0 00:25:35.182 01:46:47 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:35.182 01:46:47 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:35.182 01:46:47 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:35.182 01:46:47 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:35.182 01:46:47 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:35.182 01:46:47 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:35.182 01:46:47 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:35.182 01:46:47 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:25:35.182 01:46:47 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:35.182 01:46:47 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:35.182 01:46:47 -- common/autotest_common.sh@10 -- # set +x 00:25:35.182 01:46:47 -- nvmf/common.sh@469 -- # nvmfpid=3860156 00:25:35.182 01:46:47 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:25:35.182 01:46:47 -- nvmf/common.sh@470 -- # waitforlisten 3860156 00:25:35.182 01:46:47 -- common/autotest_common.sh@819 
-- # '[' -z 3860156 ']' 00:25:35.182 01:46:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:35.182 01:46:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:35.182 01:46:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:35.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:35.182 01:46:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:35.182 01:46:47 -- common/autotest_common.sh@10 -- # set +x 00:25:35.182 [2024-07-23 01:46:48.030947] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:25:35.182 [2024-07-23 01:46:48.031033] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:35.182 EAL: No free 2048 kB hugepages reported on node 1 00:25:35.182 [2024-07-23 01:46:48.099350] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:35.182 [2024-07-23 01:46:48.186455] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:35.182 [2024-07-23 01:46:48.186653] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:35.182 [2024-07-23 01:46:48.186674] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:35.182 [2024-07-23 01:46:48.186689] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:35.182 [2024-07-23 01:46:48.186722] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:36.117 01:46:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:36.117 01:46:48 -- common/autotest_common.sh@852 -- # return 0 00:25:36.117 01:46:48 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:36.117 01:46:48 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:36.117 01:46:48 -- common/autotest_common.sh@10 -- # set +x 00:25:36.117 01:46:49 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:36.117 01:46:49 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:25:36.117 01:46:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:36.117 01:46:49 -- common/autotest_common.sh@10 -- # set +x 00:25:36.117 [2024-07-23 01:46:49.020973] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:36.117 01:46:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:36.117 01:46:49 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:25:36.117 01:46:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:36.117 01:46:49 -- common/autotest_common.sh@10 -- # set +x 00:25:36.117 null0 00:25:36.117 01:46:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:36.117 01:46:49 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:25:36.117 01:46:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:36.117 01:46:49 -- common/autotest_common.sh@10 -- # set +x 00:25:36.117 01:46:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:36.117 01:46:49 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:25:36.117 01:46:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:36.117 01:46:49 -- common/autotest_common.sh@10 -- # set +x 00:25:36.117 01:46:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:36.117 01:46:49 -- 
host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g dea370295cce4015ad64ce54d3690386 00:25:36.117 01:46:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:36.117 01:46:49 -- common/autotest_common.sh@10 -- # set +x 00:25:36.117 01:46:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:36.117 01:46:49 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:36.117 01:46:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:36.117 01:46:49 -- common/autotest_common.sh@10 -- # set +x 00:25:36.117 [2024-07-23 01:46:49.061227] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:36.117 01:46:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:36.117 01:46:49 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:25:36.117 01:46:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:36.117 01:46:49 -- common/autotest_common.sh@10 -- # set +x 00:25:36.377 nvme0n1 00:25:36.377 01:46:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:36.377 01:46:49 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:36.377 01:46:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:36.377 01:46:49 -- common/autotest_common.sh@10 -- # set +x 00:25:36.377 [ 00:25:36.377 { 00:25:36.377 "name": "nvme0n1", 00:25:36.377 "aliases": [ 00:25:36.377 "dea37029-5cce-4015-ad64-ce54d3690386" 00:25:36.377 ], 00:25:36.377 "product_name": "NVMe disk", 00:25:36.377 "block_size": 512, 00:25:36.377 "num_blocks": 2097152, 00:25:36.377 "uuid": "dea37029-5cce-4015-ad64-ce54d3690386", 00:25:36.377 "assigned_rate_limits": { 00:25:36.377 "rw_ios_per_sec": 0, 00:25:36.377 "rw_mbytes_per_sec": 0, 00:25:36.377 "r_mbytes_per_sec": 0, 00:25:36.377 "w_mbytes_per_sec": 0 00:25:36.377 }, 00:25:36.377 
"claimed": false, 00:25:36.377 "zoned": false, 00:25:36.377 "supported_io_types": { 00:25:36.377 "read": true, 00:25:36.377 "write": true, 00:25:36.377 "unmap": false, 00:25:36.377 "write_zeroes": true, 00:25:36.377 "flush": true, 00:25:36.377 "reset": true, 00:25:36.377 "compare": true, 00:25:36.377 "compare_and_write": true, 00:25:36.377 "abort": true, 00:25:36.377 "nvme_admin": true, 00:25:36.377 "nvme_io": true 00:25:36.377 }, 00:25:36.377 "driver_specific": { 00:25:36.377 "nvme": [ 00:25:36.377 { 00:25:36.377 "trid": { 00:25:36.377 "trtype": "TCP", 00:25:36.377 "adrfam": "IPv4", 00:25:36.377 "traddr": "10.0.0.2", 00:25:36.377 "trsvcid": "4420", 00:25:36.377 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:36.377 }, 00:25:36.377 "ctrlr_data": { 00:25:36.377 "cntlid": 1, 00:25:36.377 "vendor_id": "0x8086", 00:25:36.377 "model_number": "SPDK bdev Controller", 00:25:36.377 "serial_number": "00000000000000000000", 00:25:36.377 "firmware_revision": "24.01.1", 00:25:36.377 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:36.377 "oacs": { 00:25:36.377 "security": 0, 00:25:36.377 "format": 0, 00:25:36.377 "firmware": 0, 00:25:36.377 "ns_manage": 0 00:25:36.377 }, 00:25:36.377 "multi_ctrlr": true, 00:25:36.377 "ana_reporting": false 00:25:36.377 }, 00:25:36.377 "vs": { 00:25:36.377 "nvme_version": "1.3" 00:25:36.377 }, 00:25:36.377 "ns_data": { 00:25:36.377 "id": 1, 00:25:36.377 "can_share": true 00:25:36.377 } 00:25:36.377 } 00:25:36.377 ], 00:25:36.377 "mp_policy": "active_passive" 00:25:36.377 } 00:25:36.377 } 00:25:36.377 ] 00:25:36.377 01:46:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:36.377 01:46:49 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:25:36.377 01:46:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:36.377 01:46:49 -- common/autotest_common.sh@10 -- # set +x 00:25:36.377 [2024-07-23 01:46:49.309953] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 
00:25:36.377 [2024-07-23 01:46:49.310053] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x71d0b0 (9): Bad file descriptor 00:25:36.377 [2024-07-23 01:46:49.441769] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:25:36.377 01:46:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:36.377 01:46:49 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:36.377 01:46:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:36.377 01:46:49 -- common/autotest_common.sh@10 -- # set +x 00:25:36.377 [ 00:25:36.377 { 00:25:36.377 "name": "nvme0n1", 00:25:36.377 "aliases": [ 00:25:36.377 "dea37029-5cce-4015-ad64-ce54d3690386" 00:25:36.377 ], 00:25:36.377 "product_name": "NVMe disk", 00:25:36.377 "block_size": 512, 00:25:36.377 "num_blocks": 2097152, 00:25:36.377 "uuid": "dea37029-5cce-4015-ad64-ce54d3690386", 00:25:36.377 "assigned_rate_limits": { 00:25:36.377 "rw_ios_per_sec": 0, 00:25:36.377 "rw_mbytes_per_sec": 0, 00:25:36.377 "r_mbytes_per_sec": 0, 00:25:36.377 "w_mbytes_per_sec": 0 00:25:36.377 }, 00:25:36.377 "claimed": false, 00:25:36.377 "zoned": false, 00:25:36.377 "supported_io_types": { 00:25:36.377 "read": true, 00:25:36.377 "write": true, 00:25:36.377 "unmap": false, 00:25:36.377 "write_zeroes": true, 00:25:36.377 "flush": true, 00:25:36.377 "reset": true, 00:25:36.377 "compare": true, 00:25:36.377 "compare_and_write": true, 00:25:36.377 "abort": true, 00:25:36.377 "nvme_admin": true, 00:25:36.377 "nvme_io": true 00:25:36.377 }, 00:25:36.377 "driver_specific": { 00:25:36.377 "nvme": [ 00:25:36.377 { 00:25:36.377 "trid": { 00:25:36.377 "trtype": "TCP", 00:25:36.377 "adrfam": "IPv4", 00:25:36.377 "traddr": "10.0.0.2", 00:25:36.377 "trsvcid": "4420", 00:25:36.377 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:36.377 }, 00:25:36.377 "ctrlr_data": { 00:25:36.377 "cntlid": 2, 00:25:36.377 "vendor_id": "0x8086", 00:25:36.377 "model_number": "SPDK bdev 
Controller", 00:25:36.377 "serial_number": "00000000000000000000", 00:25:36.377 "firmware_revision": "24.01.1", 00:25:36.377 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:36.377 "oacs": { 00:25:36.377 "security": 0, 00:25:36.377 "format": 0, 00:25:36.377 "firmware": 0, 00:25:36.377 "ns_manage": 0 00:25:36.377 }, 00:25:36.377 "multi_ctrlr": true, 00:25:36.377 "ana_reporting": false 00:25:36.377 }, 00:25:36.377 "vs": { 00:25:36.377 "nvme_version": "1.3" 00:25:36.377 }, 00:25:36.377 "ns_data": { 00:25:36.377 "id": 1, 00:25:36.377 "can_share": true 00:25:36.377 } 00:25:36.377 } 00:25:36.377 ], 00:25:36.377 "mp_policy": "active_passive" 00:25:36.377 } 00:25:36.377 } 00:25:36.377 ] 00:25:36.377 01:46:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:36.377 01:46:49 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:36.377 01:46:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:36.377 01:46:49 -- common/autotest_common.sh@10 -- # set +x 00:25:36.377 01:46:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:36.377 01:46:49 -- host/async_init.sh@53 -- # mktemp 00:25:36.377 01:46:49 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.7PRFtipnzU 00:25:36.377 01:46:49 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:25:36.637 01:46:49 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.7PRFtipnzU 00:25:36.637 01:46:49 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:25:36.637 01:46:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:36.637 01:46:49 -- common/autotest_common.sh@10 -- # set +x 00:25:36.637 01:46:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:36.637 01:46:49 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:25:36.637 01:46:49 -- common/autotest_common.sh@551 -- # xtrace_disable 
00:25:36.637 01:46:49 -- common/autotest_common.sh@10 -- # set +x 00:25:36.637 [2024-07-23 01:46:49.490585] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:36.637 [2024-07-23 01:46:49.490781] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:36.637 01:46:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:36.637 01:46:49 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.7PRFtipnzU 00:25:36.637 01:46:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:36.637 01:46:49 -- common/autotest_common.sh@10 -- # set +x 00:25:36.637 01:46:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:36.637 01:46:49 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.7PRFtipnzU 00:25:36.637 01:46:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:36.637 01:46:49 -- common/autotest_common.sh@10 -- # set +x 00:25:36.637 [2024-07-23 01:46:49.506607] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:36.637 nvme0n1 00:25:36.637 01:46:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:36.637 01:46:49 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:36.637 01:46:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:36.637 01:46:49 -- common/autotest_common.sh@10 -- # set +x 00:25:36.637 [ 00:25:36.637 { 00:25:36.637 "name": "nvme0n1", 00:25:36.637 "aliases": [ 00:25:36.637 "dea37029-5cce-4015-ad64-ce54d3690386" 00:25:36.637 ], 00:25:36.637 "product_name": "NVMe disk", 00:25:36.637 "block_size": 512, 00:25:36.637 "num_blocks": 2097152, 00:25:36.637 "uuid": "dea37029-5cce-4015-ad64-ce54d3690386", 00:25:36.637 "assigned_rate_limits": { 00:25:36.637 "rw_ios_per_sec": 0, 
00:25:36.637 "rw_mbytes_per_sec": 0, 00:25:36.637 "r_mbytes_per_sec": 0, 00:25:36.637 "w_mbytes_per_sec": 0 00:25:36.637 }, 00:25:36.637 "claimed": false, 00:25:36.637 "zoned": false, 00:25:36.637 "supported_io_types": { 00:25:36.637 "read": true, 00:25:36.637 "write": true, 00:25:36.637 "unmap": false, 00:25:36.637 "write_zeroes": true, 00:25:36.637 "flush": true, 00:25:36.637 "reset": true, 00:25:36.637 "compare": true, 00:25:36.637 "compare_and_write": true, 00:25:36.637 "abort": true, 00:25:36.638 "nvme_admin": true, 00:25:36.638 "nvme_io": true 00:25:36.638 }, 00:25:36.638 "driver_specific": { 00:25:36.638 "nvme": [ 00:25:36.638 { 00:25:36.638 "trid": { 00:25:36.638 "trtype": "TCP", 00:25:36.638 "adrfam": "IPv4", 00:25:36.638 "traddr": "10.0.0.2", 00:25:36.638 "trsvcid": "4421", 00:25:36.638 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:36.638 }, 00:25:36.638 "ctrlr_data": { 00:25:36.638 "cntlid": 3, 00:25:36.638 "vendor_id": "0x8086", 00:25:36.638 "model_number": "SPDK bdev Controller", 00:25:36.638 "serial_number": "00000000000000000000", 00:25:36.638 "firmware_revision": "24.01.1", 00:25:36.638 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:36.638 "oacs": { 00:25:36.638 "security": 0, 00:25:36.638 "format": 0, 00:25:36.638 "firmware": 0, 00:25:36.638 "ns_manage": 0 00:25:36.638 }, 00:25:36.638 "multi_ctrlr": true, 00:25:36.638 "ana_reporting": false 00:25:36.638 }, 00:25:36.638 "vs": { 00:25:36.638 "nvme_version": "1.3" 00:25:36.638 }, 00:25:36.638 "ns_data": { 00:25:36.638 "id": 1, 00:25:36.638 "can_share": true 00:25:36.638 } 00:25:36.638 } 00:25:36.638 ], 00:25:36.638 "mp_policy": "active_passive" 00:25:36.638 } 00:25:36.638 } 00:25:36.638 ] 00:25:36.638 01:46:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:36.638 01:46:49 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:36.638 01:46:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:36.638 01:46:49 -- common/autotest_common.sh@10 -- # set +x 00:25:36.638 
01:46:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:36.638 01:46:49 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.7PRFtipnzU 00:25:36.638 01:46:49 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:25:36.638 01:46:49 -- host/async_init.sh@78 -- # nvmftestfini 00:25:36.638 01:46:49 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:36.638 01:46:49 -- nvmf/common.sh@116 -- # sync 00:25:36.638 01:46:49 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:36.638 01:46:49 -- nvmf/common.sh@119 -- # set +e 00:25:36.638 01:46:49 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:36.638 01:46:49 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:36.638 rmmod nvme_tcp 00:25:36.638 rmmod nvme_fabrics 00:25:36.638 rmmod nvme_keyring 00:25:36.638 01:46:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:36.638 01:46:49 -- nvmf/common.sh@123 -- # set -e 00:25:36.638 01:46:49 -- nvmf/common.sh@124 -- # return 0 00:25:36.638 01:46:49 -- nvmf/common.sh@477 -- # '[' -n 3860156 ']' 00:25:36.638 01:46:49 -- nvmf/common.sh@478 -- # killprocess 3860156 00:25:36.638 01:46:49 -- common/autotest_common.sh@926 -- # '[' -z 3860156 ']' 00:25:36.638 01:46:49 -- common/autotest_common.sh@930 -- # kill -0 3860156 00:25:36.638 01:46:49 -- common/autotest_common.sh@931 -- # uname 00:25:36.638 01:46:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:36.638 01:46:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3860156 00:25:36.638 01:46:49 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:36.638 01:46:49 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:36.638 01:46:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3860156' 00:25:36.638 killing process with pid 3860156 00:25:36.638 01:46:49 -- common/autotest_common.sh@945 -- # kill 3860156 00:25:36.638 01:46:49 -- common/autotest_common.sh@950 -- # wait 3860156 00:25:36.898 01:46:49 -- nvmf/common.sh@480 -- # '[' '' == iso 
']' 00:25:36.898 01:46:49 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:36.898 01:46:49 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:36.898 01:46:49 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:36.898 01:46:49 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:36.898 01:46:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:36.898 01:46:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:36.898 01:46:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:39.437 01:46:51 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:39.437 00:25:39.437 real 0m6.073s 00:25:39.437 user 0m2.886s 00:25:39.437 sys 0m1.804s 00:25:39.437 01:46:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:39.437 01:46:51 -- common/autotest_common.sh@10 -- # set +x 00:25:39.437 ************************************ 00:25:39.437 END TEST nvmf_async_init 00:25:39.438 ************************************ 00:25:39.438 01:46:51 -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:25:39.438 01:46:51 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:39.438 01:46:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:39.438 01:46:51 -- common/autotest_common.sh@10 -- # set +x 00:25:39.438 ************************************ 00:25:39.438 START TEST dma 00:25:39.438 ************************************ 00:25:39.438 01:46:51 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:25:39.438 * Looking for test storage... 
00:25:39.438 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:39.438 01:46:52 -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:39.438 01:46:52 -- nvmf/common.sh@7 -- # uname -s 00:25:39.438 01:46:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:39.438 01:46:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:39.438 01:46:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:39.438 01:46:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:39.438 01:46:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:39.438 01:46:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:39.438 01:46:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:39.438 01:46:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:39.438 01:46:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:39.438 01:46:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:39.438 01:46:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:39.438 01:46:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:39.438 01:46:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:39.438 01:46:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:39.438 01:46:52 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:39.438 01:46:52 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:39.438 01:46:52 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:39.438 01:46:52 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:39.438 01:46:52 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:39.438 01:46:52 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:39.438 01:46:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:39.438 01:46:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:39.438 01:46:52 -- paths/export.sh@5 -- # export PATH 00:25:39.438 01:46:52 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:39.438 01:46:52 -- nvmf/common.sh@46 -- # : 0 00:25:39.438 01:46:52 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:39.438 01:46:52 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:39.438 01:46:52 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:39.438 01:46:52 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:39.438 01:46:52 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:39.438 01:46:52 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:39.438 01:46:52 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:39.438 01:46:52 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:39.438 01:46:52 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:25:39.438 01:46:52 -- host/dma.sh@13 -- # exit 0 00:25:39.438 00:25:39.438 real 0m0.069s 00:25:39.438 user 0m0.028s 00:25:39.438 sys 0m0.047s 00:25:39.438 01:46:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:39.438 01:46:52 -- common/autotest_common.sh@10 -- # set +x 00:25:39.438 ************************************ 00:25:39.438 END TEST dma 00:25:39.438 ************************************ 00:25:39.438 01:46:52 -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:25:39.438 01:46:52 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:39.438 01:46:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:39.438 01:46:52 -- common/autotest_common.sh@10 
-- # set +x 00:25:39.438 ************************************ 00:25:39.438 START TEST nvmf_identify 00:25:39.438 ************************************ 00:25:39.438 01:46:52 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:25:39.438 * Looking for test storage... 00:25:39.438 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:39.438 01:46:52 -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:39.438 01:46:52 -- nvmf/common.sh@7 -- # uname -s 00:25:39.438 01:46:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:39.438 01:46:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:39.438 01:46:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:39.438 01:46:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:39.438 01:46:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:39.438 01:46:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:39.438 01:46:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:39.438 01:46:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:39.438 01:46:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:39.438 01:46:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:39.438 01:46:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:39.438 01:46:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:39.438 01:46:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:39.438 01:46:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:39.438 01:46:52 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:39.438 01:46:52 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:39.438 01:46:52 -- scripts/common.sh@433 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:25:39.438 01:46:52 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:39.438 01:46:52 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:39.438 01:46:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:39.438 01:46:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:39.438 01:46:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:39.438 01:46:52 -- paths/export.sh@5 -- # export PATH 00:25:39.438 
01:46:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:39.438 01:46:52 -- nvmf/common.sh@46 -- # : 0 00:25:39.438 01:46:52 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:39.438 01:46:52 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:39.438 01:46:52 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:39.438 01:46:52 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:39.438 01:46:52 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:39.438 01:46:52 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:39.438 01:46:52 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:39.438 01:46:52 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:39.438 01:46:52 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:39.438 01:46:52 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:39.438 01:46:52 -- host/identify.sh@14 -- # nvmftestinit 00:25:39.438 01:46:52 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:39.438 01:46:52 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:39.438 01:46:52 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:39.438 01:46:52 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:39.438 01:46:52 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:39.439 01:46:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:39.439 01:46:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:39.439 01:46:52 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:25:39.439 01:46:52 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:39.439 01:46:52 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:39.439 01:46:52 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:39.439 01:46:52 -- common/autotest_common.sh@10 -- # set +x 00:25:41.344 01:46:54 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:41.344 01:46:54 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:41.344 01:46:54 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:41.344 01:46:54 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:41.344 01:46:54 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:41.344 01:46:54 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:41.344 01:46:54 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:41.344 01:46:54 -- nvmf/common.sh@294 -- # net_devs=() 00:25:41.344 01:46:54 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:41.344 01:46:54 -- nvmf/common.sh@295 -- # e810=() 00:25:41.344 01:46:54 -- nvmf/common.sh@295 -- # local -ga e810 00:25:41.344 01:46:54 -- nvmf/common.sh@296 -- # x722=() 00:25:41.344 01:46:54 -- nvmf/common.sh@296 -- # local -ga x722 00:25:41.344 01:46:54 -- nvmf/common.sh@297 -- # mlx=() 00:25:41.344 01:46:54 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:41.344 01:46:54 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:41.344 01:46:54 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:41.344 01:46:54 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:41.344 01:46:54 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:41.344 01:46:54 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:41.344 01:46:54 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:41.344 01:46:54 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:41.344 01:46:54 -- nvmf/common.sh@313 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:41.344 01:46:54 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:41.344 01:46:54 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:41.344 01:46:54 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:41.344 01:46:54 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:41.344 01:46:54 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:41.344 01:46:54 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:41.344 01:46:54 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:41.344 01:46:54 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:41.344 01:46:54 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:41.344 01:46:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:41.344 01:46:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:41.344 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:41.344 01:46:54 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:41.344 01:46:54 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:41.344 01:46:54 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:41.344 01:46:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:41.344 01:46:54 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:41.344 01:46:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:41.344 01:46:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:41.344 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:41.344 01:46:54 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:41.344 01:46:54 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:41.344 01:46:54 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:41.344 01:46:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:41.344 01:46:54 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:41.344 01:46:54 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:41.344 01:46:54 -- 
nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:41.344 01:46:54 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:41.344 01:46:54 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:41.344 01:46:54 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:41.344 01:46:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:41.344 01:46:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:41.344 01:46:54 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:41.344 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:41.344 01:46:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:41.344 01:46:54 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:41.344 01:46:54 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:41.344 01:46:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:41.344 01:46:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:41.344 01:46:54 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:41.344 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:41.344 01:46:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:41.344 01:46:54 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:41.344 01:46:54 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:41.344 01:46:54 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:41.344 01:46:54 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:41.344 01:46:54 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:41.344 01:46:54 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:41.344 01:46:54 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:41.344 01:46:54 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:41.344 01:46:54 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:41.344 01:46:54 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:41.344 01:46:54 -- nvmf/common.sh@236 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:41.344 01:46:54 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:41.344 01:46:54 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:41.344 01:46:54 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:41.344 01:46:54 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:41.344 01:46:54 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:41.344 01:46:54 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:41.344 01:46:54 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:41.344 01:46:54 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:41.344 01:46:54 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:41.344 01:46:54 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:41.344 01:46:54 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:41.344 01:46:54 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:41.344 01:46:54 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:41.344 01:46:54 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:41.344 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:41.344 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.278 ms 00:25:41.344 00:25:41.344 --- 10.0.0.2 ping statistics --- 00:25:41.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:41.344 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:25:41.344 01:46:54 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:41.344 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:41.344 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:25:41.344 00:25:41.344 --- 10.0.0.1 ping statistics --- 00:25:41.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:41.344 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:25:41.344 01:46:54 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:41.344 01:46:54 -- nvmf/common.sh@410 -- # return 0 00:25:41.344 01:46:54 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:41.344 01:46:54 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:41.344 01:46:54 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:41.344 01:46:54 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:41.344 01:46:54 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:41.344 01:46:54 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:41.344 01:46:54 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:41.344 01:46:54 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:25:41.344 01:46:54 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:41.344 01:46:54 -- common/autotest_common.sh@10 -- # set +x 00:25:41.344 01:46:54 -- host/identify.sh@19 -- # nvmfpid=3862298 00:25:41.344 01:46:54 -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:41.344 01:46:54 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:41.344 01:46:54 -- host/identify.sh@23 -- # waitforlisten 3862298 00:25:41.344 01:46:54 -- common/autotest_common.sh@819 -- # '[' -z 3862298 ']' 00:25:41.344 01:46:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:41.344 01:46:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:41.344 01:46:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:25:41.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:41.344 01:46:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:41.344 01:46:54 -- common/autotest_common.sh@10 -- # set +x 00:25:41.344 [2024-07-23 01:46:54.237015] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:25:41.344 [2024-07-23 01:46:54.237096] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:41.344 EAL: No free 2048 kB hugepages reported on node 1 00:25:41.344 [2024-07-23 01:46:54.305068] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:41.344 [2024-07-23 01:46:54.396249] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:41.345 [2024-07-23 01:46:54.396398] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:41.345 [2024-07-23 01:46:54.396414] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:41.345 [2024-07-23 01:46:54.396427] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:41.345 [2024-07-23 01:46:54.396497] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:41.345 [2024-07-23 01:46:54.397634] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:41.345 [2024-07-23 01:46:54.397704] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:41.345 [2024-07-23 01:46:54.397708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:42.279 01:46:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:42.279 01:46:55 -- common/autotest_common.sh@852 -- # return 0 00:25:42.279 01:46:55 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:42.279 01:46:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:42.279 01:46:55 -- common/autotest_common.sh@10 -- # set +x 00:25:42.279 [2024-07-23 01:46:55.231283] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:42.279 01:46:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:42.279 01:46:55 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:25:42.279 01:46:55 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:42.279 01:46:55 -- common/autotest_common.sh@10 -- # set +x 00:25:42.279 01:46:55 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:42.279 01:46:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:42.279 01:46:55 -- common/autotest_common.sh@10 -- # set +x 00:25:42.279 Malloc0 00:25:42.279 01:46:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:42.279 01:46:55 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:42.279 01:46:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:42.279 01:46:55 -- common/autotest_common.sh@10 -- # set +x 00:25:42.279 01:46:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:42.279 01:46:55 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 
--nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:25:42.279 01:46:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:42.279 01:46:55 -- common/autotest_common.sh@10 -- # set +x 00:25:42.279 01:46:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:42.279 01:46:55 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:42.279 01:46:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:42.279 01:46:55 -- common/autotest_common.sh@10 -- # set +x 00:25:42.279 [2024-07-23 01:46:55.312945] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:42.279 01:46:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:42.279 01:46:55 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:42.279 01:46:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:42.279 01:46:55 -- common/autotest_common.sh@10 -- # set +x 00:25:42.279 01:46:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:42.280 01:46:55 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:25:42.280 01:46:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:42.280 01:46:55 -- common/autotest_common.sh@10 -- # set +x 00:25:42.280 [2024-07-23 01:46:55.328690] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:25:42.280 [ 00:25:42.280 { 00:25:42.280 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:42.280 "subtype": "Discovery", 00:25:42.280 "listen_addresses": [ 00:25:42.280 { 00:25:42.280 "transport": "TCP", 00:25:42.280 "trtype": "TCP", 00:25:42.280 "adrfam": "IPv4", 00:25:42.280 "traddr": "10.0.0.2", 00:25:42.280 "trsvcid": "4420" 00:25:42.280 } 00:25:42.280 ], 00:25:42.280 "allow_any_host": true, 00:25:42.280 "hosts": [] 00:25:42.280 }, 00:25:42.280 
{ 00:25:42.280 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:42.280 "subtype": "NVMe", 00:25:42.280 "listen_addresses": [ 00:25:42.280 { 00:25:42.280 "transport": "TCP", 00:25:42.280 "trtype": "TCP", 00:25:42.280 "adrfam": "IPv4", 00:25:42.280 "traddr": "10.0.0.2", 00:25:42.280 "trsvcid": "4420" 00:25:42.280 } 00:25:42.280 ], 00:25:42.280 "allow_any_host": true, 00:25:42.280 "hosts": [], 00:25:42.280 "serial_number": "SPDK00000000000001", 00:25:42.280 "model_number": "SPDK bdev Controller", 00:25:42.280 "max_namespaces": 32, 00:25:42.280 "min_cntlid": 1, 00:25:42.280 "max_cntlid": 65519, 00:25:42.280 "namespaces": [ 00:25:42.280 { 00:25:42.280 "nsid": 1, 00:25:42.280 "bdev_name": "Malloc0", 00:25:42.280 "name": "Malloc0", 00:25:42.280 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:25:42.280 "eui64": "ABCDEF0123456789", 00:25:42.280 "uuid": "eb097cfd-d3b1-4823-b71f-e39969d2825c" 00:25:42.280 } 00:25:42.280 ] 00:25:42.280 } 00:25:42.280 ] 00:25:42.280 01:46:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:42.280 01:46:55 -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:25:42.280 [2024-07-23 01:46:55.353100] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:25:42.280 [2024-07-23 01:46:55.353143] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3862464 ] 00:25:42.280 EAL: No free 2048 kB hugepages reported on node 1 00:25:42.544 [2024-07-23 01:46:55.389935] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:25:42.544 [2024-07-23 01:46:55.389997] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:25:42.544 [2024-07-23 01:46:55.390007] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:25:42.544 [2024-07-23 01:46:55.390024] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:25:42.544 [2024-07-23 01:46:55.390037] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:25:42.544 [2024-07-23 01:46:55.390365] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:25:42.544 [2024-07-23 01:46:55.390424] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x20695a0 0 00:25:42.544 [2024-07-23 01:46:55.396634] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:25:42.544 [2024-07-23 01:46:55.396665] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:25:42.544 [2024-07-23 01:46:55.396674] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:25:42.544 [2024-07-23 01:46:55.396680] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:25:42.544 [2024-07-23 01:46:55.396727] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:42.544 [2024-07-23 01:46:55.396739] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:25:42.544 [2024-07-23 01:46:55.396746] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20695a0) 00:25:42.544 [2024-07-23 01:46:55.396764] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:25:42.544 [2024-07-23 01:46:55.396789] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d43e0, cid 0, qid 0 00:25:42.544 [2024-07-23 01:46:55.405628] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:42.544 [2024-07-23 01:46:55.405646] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:42.544 [2024-07-23 01:46:55.405654] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:42.544 [2024-07-23 01:46:55.405662] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20d43e0) on tqpair=0x20695a0 00:25:42.544 [2024-07-23 01:46:55.405679] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:25:42.544 [2024-07-23 01:46:55.405706] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:25:42.544 [2024-07-23 01:46:55.405715] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:25:42.544 [2024-07-23 01:46:55.405734] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:42.544 [2024-07-23 01:46:55.405743] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:42.544 [2024-07-23 01:46:55.405749] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20695a0) 00:25:42.544 [2024-07-23 01:46:55.405760] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.544 [2024-07-23 01:46:55.405784] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x20d43e0, cid 0, qid 0 00:25:42.544 [2024-07-23 01:46:55.405989] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:42.544 [2024-07-23 01:46:55.406001] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:42.544 [2024-07-23 01:46:55.406008] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:42.544 [2024-07-23 01:46:55.406015] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20d43e0) on tqpair=0x20695a0 00:25:42.544 [2024-07-23 01:46:55.406031] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:25:42.544 [2024-07-23 01:46:55.406045] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:25:42.544 [2024-07-23 01:46:55.406058] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:42.544 [2024-07-23 01:46:55.406065] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:42.544 [2024-07-23 01:46:55.406072] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20695a0) 00:25:42.544 [2024-07-23 01:46:55.406098] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.544 [2024-07-23 01:46:55.406119] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d43e0, cid 0, qid 0 00:25:42.544 [2024-07-23 01:46:55.406312] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:42.544 [2024-07-23 01:46:55.406328] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:42.544 [2024-07-23 01:46:55.406335] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:42.544 [2024-07-23 01:46:55.406342] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20d43e0) on tqpair=0x20695a0 00:25:42.544 [2024-07-23 
01:46:55.406352] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:25:42.544 [2024-07-23 01:46:55.406366] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:25:42.544 [2024-07-23 01:46:55.406379] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:42.544 [2024-07-23 01:46:55.406387] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:42.544 [2024-07-23 01:46:55.406393] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20695a0) 00:25:42.544 [2024-07-23 01:46:55.406404] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.544 [2024-07-23 01:46:55.406424] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d43e0, cid 0, qid 0 00:25:42.544 [2024-07-23 01:46:55.406574] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:42.544 [2024-07-23 01:46:55.406588] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:42.544 [2024-07-23 01:46:55.406595] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:42.544 [2024-07-23 01:46:55.406602] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20d43e0) on tqpair=0x20695a0 00:25:42.544 [2024-07-23 01:46:55.406620] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:25:42.544 [2024-07-23 01:46:55.406639] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:42.544 [2024-07-23 01:46:55.406649] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:42.544 [2024-07-23 01:46:55.406655] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on 
tqpair(0x20695a0) 00:25:42.544 [2024-07-23 01:46:55.406666] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.544 [2024-07-23 01:46:55.406687] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d43e0, cid 0, qid 0 00:25:42.544 [2024-07-23 01:46:55.406836] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:42.544 [2024-07-23 01:46:55.406851] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:42.544 [2024-07-23 01:46:55.406858] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:42.544 [2024-07-23 01:46:55.406865] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20d43e0) on tqpair=0x20695a0 00:25:42.544 [2024-07-23 01:46:55.406875] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:25:42.544 [2024-07-23 01:46:55.406888] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:25:42.544 [2024-07-23 01:46:55.406902] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:25:42.544 [2024-07-23 01:46:55.407012] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:25:42.544 [2024-07-23 01:46:55.407021] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:25:42.544 [2024-07-23 01:46:55.407035] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:42.544 [2024-07-23 01:46:55.407043] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:42.544 [2024-07-23 01:46:55.407049] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20695a0) 00:25:42.544 [2024-07-23 01:46:55.407059] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.544 [2024-07-23 01:46:55.407080] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d43e0, cid 0, qid 0 00:25:42.544 [2024-07-23 01:46:55.407257] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:42.544 [2024-07-23 01:46:55.407270] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:42.544 [2024-07-23 01:46:55.407277] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:42.544 [2024-07-23 01:46:55.407283] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20d43e0) on tqpair=0x20695a0 00:25:42.544 [2024-07-23 01:46:55.407293] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:25:42.544 [2024-07-23 01:46:55.407310] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:42.544 [2024-07-23 01:46:55.407319] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:42.544 [2024-07-23 01:46:55.407325] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20695a0) 00:25:42.544 [2024-07-23 01:46:55.407336] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.544 [2024-07-23 01:46:55.407356] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d43e0, cid 0, qid 0 00:25:42.544 [2024-07-23 01:46:55.407491] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:42.544 [2024-07-23 01:46:55.407503] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:42.544 [2024-07-23 01:46:55.407510] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:42.544 [2024-07-23 01:46:55.407517] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20d43e0) on tqpair=0x20695a0 00:25:42.544 [2024-07-23 01:46:55.407526] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:25:42.544 [2024-07-23 01:46:55.407534] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:25:42.544 [2024-07-23 01:46:55.407547] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:25:42.545 [2024-07-23 01:46:55.407567] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:25:42.545 [2024-07-23 01:46:55.407582] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:42.545 [2024-07-23 01:46:55.407589] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:42.545 [2024-07-23 01:46:55.407596] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20695a0) 00:25:42.545 [2024-07-23 01:46:55.407606] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.545 [2024-07-23 01:46:55.407641] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d43e0, cid 0, qid 0 00:25:42.545 [2024-07-23 01:46:55.407851] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:42.545 [2024-07-23 01:46:55.407863] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:42.545 [2024-07-23 01:46:55.407870] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:42.545 
[2024-07-23 01:46:55.407877] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x20695a0): datao=0, datal=4096, cccid=0 00:25:42.545 [2024-07-23 01:46:55.407885] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20d43e0) on tqpair(0x20695a0): expected_datao=0, payload_size=4096 00:25:42.545 [2024-07-23 01:46:55.407897] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:42.545 [2024-07-23 01:46:55.407905] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:42.545 [2024-07-23 01:46:55.407955] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:42.545 [2024-07-23 01:46:55.407966] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:42.545 [2024-07-23 01:46:55.407973] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:42.545 [2024-07-23 01:46:55.407979] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20d43e0) on tqpair=0x20695a0 00:25:42.545 [2024-07-23 01:46:55.407993] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:25:42.545 [2024-07-23 01:46:55.408002] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:25:42.545 [2024-07-23 01:46:55.408009] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:25:42.545 [2024-07-23 01:46:55.408018] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:25:42.545 [2024-07-23 01:46:55.408026] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:25:42.545 [2024-07-23 01:46:55.408034] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:25:42.545 [2024-07-23 
01:46:55.408053] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:25:42.545 [2024-07-23 01:46:55.408066] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:42.545 [2024-07-23 01:46:55.408074] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:42.545 [2024-07-23 01:46:55.408080] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20695a0) 00:25:42.545 [2024-07-23 01:46:55.408091] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:42.545 [2024-07-23 01:46:55.408112] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d43e0, cid 0, qid 0 00:25:42.545 [2024-07-23 01:46:55.408307] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:42.545 [2024-07-23 01:46:55.408319] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:42.545 [2024-07-23 01:46:55.408326] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:42.545 [2024-07-23 01:46:55.408333] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20d43e0) on tqpair=0x20695a0 00:25:42.545 [2024-07-23 01:46:55.408346] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:42.545 [2024-07-23 01:46:55.408354] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:42.545 [2024-07-23 01:46:55.408360] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x20695a0) 00:25:42.545 [2024-07-23 01:46:55.408370] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:42.545 [2024-07-23 01:46:55.408380] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:42.545 [2024-07-23 01:46:55.408387] nvme_tcp.c: 
893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:42.545 [2024-07-23 01:46:55.408397] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x20695a0) 00:25:42.545 [2024-07-23 01:46:55.408406] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:42.545 [2024-07-23 01:46:55.408416] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:42.545 [2024-07-23 01:46:55.408422] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:42.545 [2024-07-23 01:46:55.408429] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x20695a0) 00:25:42.545 [2024-07-23 01:46:55.408453] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:42.545 [2024-07-23 01:46:55.408463] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:42.545 [2024-07-23 01:46:55.408470] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:42.545 [2024-07-23 01:46:55.408476] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20695a0) 00:25:42.545 [2024-07-23 01:46:55.408484] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:42.545 [2024-07-23 01:46:55.408492] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:25:42.545 [2024-07-23 01:46:55.408511] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:25:42.545 [2024-07-23 01:46:55.408523] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:42.545 [2024-07-23 01:46:55.408530] nvme_tcp.c: 
893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:42.545 [2024-07-23 01:46:55.408536] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x20695a0) 00:25:42.545 [2024-07-23 01:46:55.408546] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.545 [2024-07-23 01:46:55.408568] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d43e0, cid 0, qid 0 00:25:42.545 [2024-07-23 01:46:55.408594] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d4540, cid 1, qid 0 00:25:42.545 [2024-07-23 01:46:55.408603] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d46a0, cid 2, qid 0 00:25:42.545 [2024-07-23 01:46:55.408611] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d4800, cid 3, qid 0 00:25:42.545 [2024-07-23 01:46:55.408629] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d4960, cid 4, qid 0 00:25:42.545 [2024-07-23 01:46:55.408829] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:42.545 [2024-07-23 01:46:55.408844] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:42.545 [2024-07-23 01:46:55.408851] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:42.545 [2024-07-23 01:46:55.408858] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20d4960) on tqpair=0x20695a0 00:25:42.545 [2024-07-23 01:46:55.408868] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:25:42.545 [2024-07-23 01:46:55.408877] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:25:42.545 [2024-07-23 01:46:55.408894] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:42.545 [2024-07-23 
01:46:55.408903] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:42.545 [2024-07-23 01:46:55.408910] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x20695a0) 00:25:42.545 [2024-07-23 01:46:55.408920] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.545 [2024-07-23 01:46:55.408954] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d4960, cid 4, qid 0 00:25:42.545 [2024-07-23 01:46:55.409176] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:42.545 [2024-07-23 01:46:55.409194] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:42.545 [2024-07-23 01:46:55.409201] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:42.545 [2024-07-23 01:46:55.409208] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x20695a0): datao=0, datal=4096, cccid=4 00:25:42.545 [2024-07-23 01:46:55.409215] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20d4960) on tqpair(0x20695a0): expected_datao=0, payload_size=4096 00:25:42.545 [2024-07-23 01:46:55.409238] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:42.545 [2024-07-23 01:46:55.409248] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:42.545 [2024-07-23 01:46:55.453637] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:42.545 [2024-07-23 01:46:55.453657] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:42.545 [2024-07-23 01:46:55.453665] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:42.545 [2024-07-23 01:46:55.453672] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20d4960) on tqpair=0x20695a0 00:25:42.545 [2024-07-23 01:46:55.453694] nvme_ctrlr.c:4024:nvme_ctrlr_process_init: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:25:42.545 [2024-07-23 01:46:55.453746] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:42.545 [2024-07-23 01:46:55.453756] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:42.545 [2024-07-23 01:46:55.453764] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x20695a0) 00:25:42.545 [2024-07-23 01:46:55.453776] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.545 [2024-07-23 01:46:55.453788] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:42.545 [2024-07-23 01:46:55.453795] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:42.545 [2024-07-23 01:46:55.453801] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x20695a0) 00:25:42.545 [2024-07-23 01:46:55.453812] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:42.545 [2024-07-23 01:46:55.453839] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d4960, cid 4, qid 0 00:25:42.545 [2024-07-23 01:46:55.453852] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d4ac0, cid 5, qid 0 00:25:42.545 [2024-07-23 01:46:55.454062] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:42.545 [2024-07-23 01:46:55.454075] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:42.546 [2024-07-23 01:46:55.454082] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:42.546 [2024-07-23 01:46:55.454089] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x20695a0): datao=0, datal=1024, cccid=4 00:25:42.546 [2024-07-23 01:46:55.454097] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0x20d4960) on tqpair(0x20695a0): expected_datao=0, payload_size=1024 00:25:42.546 [2024-07-23 01:46:55.454107] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:42.546 [2024-07-23 01:46:55.454115] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:42.546 [2024-07-23 01:46:55.454124] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:42.546 [2024-07-23 01:46:55.454148] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:42.546 [2024-07-23 01:46:55.454154] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:42.546 [2024-07-23 01:46:55.454161] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20d4ac0) on tqpair=0x20695a0 00:25:42.546 [2024-07-23 01:46:55.494756] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:42.546 [2024-07-23 01:46:55.494774] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:42.546 [2024-07-23 01:46:55.494781] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:42.546 [2024-07-23 01:46:55.494788] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20d4960) on tqpair=0x20695a0 00:25:42.546 [2024-07-23 01:46:55.494811] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:42.546 [2024-07-23 01:46:55.494822] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:42.546 [2024-07-23 01:46:55.494828] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x20695a0) 00:25:42.546 [2024-07-23 01:46:55.494839] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.546 [2024-07-23 01:46:55.494868] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d4960, cid 4, qid 0 00:25:42.546 [2024-07-23 01:46:55.495016] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: 
pdu type = 7 00:25:42.546 [2024-07-23 01:46:55.495028] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:42.546 [2024-07-23 01:46:55.495035] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:42.546 [2024-07-23 01:46:55.495041] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x20695a0): datao=0, datal=3072, cccid=4 00:25:42.546 [2024-07-23 01:46:55.495049] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20d4960) on tqpair(0x20695a0): expected_datao=0, payload_size=3072 00:25:42.546 [2024-07-23 01:46:55.495077] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:42.546 [2024-07-23 01:46:55.495086] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:42.546 [2024-07-23 01:46:55.495175] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:42.546 [2024-07-23 01:46:55.495186] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:42.546 [2024-07-23 01:46:55.495193] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:42.546 [2024-07-23 01:46:55.495200] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20d4960) on tqpair=0x20695a0 00:25:42.546 [2024-07-23 01:46:55.495216] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:42.546 [2024-07-23 01:46:55.495224] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:42.546 [2024-07-23 01:46:55.495231] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x20695a0) 00:25:42.546 [2024-07-23 01:46:55.495241] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.546 [2024-07-23 01:46:55.495268] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d4960, cid 4, qid 0 00:25:42.546 [2024-07-23 01:46:55.495418] 
nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:42.546 [2024-07-23 01:46:55.495430] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:42.546 [2024-07-23 01:46:55.495437] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:42.546 [2024-07-23 01:46:55.495443] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x20695a0): datao=0, datal=8, cccid=4 00:25:42.546 [2024-07-23 01:46:55.495451] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20d4960) on tqpair(0x20695a0): expected_datao=0, payload_size=8 00:25:42.546 [2024-07-23 01:46:55.495461] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:42.546 [2024-07-23 01:46:55.495468] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:42.546 [2024-07-23 01:46:55.535767] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:42.546 [2024-07-23 01:46:55.535786] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:42.546 [2024-07-23 01:46:55.535794] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:42.546 [2024-07-23 01:46:55.535800] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20d4960) on tqpair=0x20695a0 00:25:42.546 ===================================================== 00:25:42.546 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:25:42.546 ===================================================== 00:25:42.546 Controller Capabilities/Features 00:25:42.546 ================================ 00:25:42.546 Vendor ID: 0000 00:25:42.546 Subsystem Vendor ID: 0000 00:25:42.546 Serial Number: .................... 00:25:42.546 Model Number: ........................................ 
00:25:42.546 Firmware Version: 24.01.1 00:25:42.546 Recommended Arb Burst: 0 00:25:42.546 IEEE OUI Identifier: 00 00 00 00:25:42.546 Multi-path I/O 00:25:42.546 May have multiple subsystem ports: No 00:25:42.546 May have multiple controllers: No 00:25:42.546 Associated with SR-IOV VF: No 00:25:42.546 Max Data Transfer Size: 131072 00:25:42.546 Max Number of Namespaces: 0 00:25:42.546 Max Number of I/O Queues: 1024 00:25:42.546 NVMe Specification Version (VS): 1.3 00:25:42.546 NVMe Specification Version (Identify): 1.3 00:25:42.546 Maximum Queue Entries: 128 00:25:42.546 Contiguous Queues Required: Yes 00:25:42.546 Arbitration Mechanisms Supported 00:25:42.546 Weighted Round Robin: Not Supported 00:25:42.546 Vendor Specific: Not Supported 00:25:42.546 Reset Timeout: 15000 ms 00:25:42.546 Doorbell Stride: 4 bytes 00:25:42.546 NVM Subsystem Reset: Not Supported 00:25:42.546 Command Sets Supported 00:25:42.546 NVM Command Set: Supported 00:25:42.546 Boot Partition: Not Supported 00:25:42.546 Memory Page Size Minimum: 4096 bytes 00:25:42.546 Memory Page Size Maximum: 4096 bytes 00:25:42.546 Persistent Memory Region: Not Supported 00:25:42.546 Optional Asynchronous Events Supported 00:25:42.546 Namespace Attribute Notices: Not Supported 00:25:42.546 Firmware Activation Notices: Not Supported 00:25:42.546 ANA Change Notices: Not Supported 00:25:42.546 PLE Aggregate Log Change Notices: Not Supported 00:25:42.546 LBA Status Info Alert Notices: Not Supported 00:25:42.546 EGE Aggregate Log Change Notices: Not Supported 00:25:42.546 Normal NVM Subsystem Shutdown event: Not Supported 00:25:42.546 Zone Descriptor Change Notices: Not Supported 00:25:42.546 Discovery Log Change Notices: Supported 00:25:42.546 Controller Attributes 00:25:42.546 128-bit Host Identifier: Not Supported 00:25:42.546 Non-Operational Permissive Mode: Not Supported 00:25:42.546 NVM Sets: Not Supported 00:25:42.546 Read Recovery Levels: Not Supported 00:25:42.546 Endurance Groups: Not Supported 
00:25:42.546 Predictable Latency Mode: Not Supported 00:25:42.546 Traffic Based Keep ALive: Not Supported 00:25:42.546 Namespace Granularity: Not Supported 00:25:42.546 SQ Associations: Not Supported 00:25:42.546 UUID List: Not Supported 00:25:42.546 Multi-Domain Subsystem: Not Supported 00:25:42.546 Fixed Capacity Management: Not Supported 00:25:42.546 Variable Capacity Management: Not Supported 00:25:42.546 Delete Endurance Group: Not Supported 00:25:42.546 Delete NVM Set: Not Supported 00:25:42.546 Extended LBA Formats Supported: Not Supported 00:25:42.546 Flexible Data Placement Supported: Not Supported 00:25:42.546 00:25:42.546 Controller Memory Buffer Support 00:25:42.546 ================================ 00:25:42.546 Supported: No 00:25:42.546 00:25:42.546 Persistent Memory Region Support 00:25:42.546 ================================ 00:25:42.546 Supported: No 00:25:42.546 00:25:42.546 Admin Command Set Attributes 00:25:42.546 ============================ 00:25:42.546 Security Send/Receive: Not Supported 00:25:42.546 Format NVM: Not Supported 00:25:42.546 Firmware Activate/Download: Not Supported 00:25:42.546 Namespace Management: Not Supported 00:25:42.546 Device Self-Test: Not Supported 00:25:42.546 Directives: Not Supported 00:25:42.546 NVMe-MI: Not Supported 00:25:42.546 Virtualization Management: Not Supported 00:25:42.546 Doorbell Buffer Config: Not Supported 00:25:42.546 Get LBA Status Capability: Not Supported 00:25:42.546 Command & Feature Lockdown Capability: Not Supported 00:25:42.546 Abort Command Limit: 1 00:25:42.546 Async Event Request Limit: 4 00:25:42.546 Number of Firmware Slots: N/A 00:25:42.546 Firmware Slot 1 Read-Only: N/A 00:25:42.546 Firmware Activation Without Reset: N/A 00:25:42.546 Multiple Update Detection Support: N/A 00:25:42.546 Firmware Update Granularity: No Information Provided 00:25:42.546 Per-Namespace SMART Log: No 00:25:42.546 Asymmetric Namespace Access Log Page: Not Supported 00:25:42.546 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:25:42.546 Command Effects Log Page: Not Supported 00:25:42.546 Get Log Page Extended Data: Supported 00:25:42.546 Telemetry Log Pages: Not Supported 00:25:42.546 Persistent Event Log Pages: Not Supported 00:25:42.546 Supported Log Pages Log Page: May Support 00:25:42.546 Commands Supported & Effects Log Page: Not Supported 00:25:42.547 Feature Identifiers & Effects Log Page:May Support 00:25:42.547 NVMe-MI Commands & Effects Log Page: May Support 00:25:42.547 Data Area 4 for Telemetry Log: Not Supported 00:25:42.547 Error Log Page Entries Supported: 128 00:25:42.547 Keep Alive: Not Supported 00:25:42.547 00:25:42.547 NVM Command Set Attributes 00:25:42.547 ========================== 00:25:42.547 Submission Queue Entry Size 00:25:42.547 Max: 1 00:25:42.547 Min: 1 00:25:42.547 Completion Queue Entry Size 00:25:42.547 Max: 1 00:25:42.547 Min: 1 00:25:42.547 Number of Namespaces: 0 00:25:42.547 Compare Command: Not Supported 00:25:42.547 Write Uncorrectable Command: Not Supported 00:25:42.547 Dataset Management Command: Not Supported 00:25:42.547 Write Zeroes Command: Not Supported 00:25:42.547 Set Features Save Field: Not Supported 00:25:42.547 Reservations: Not Supported 00:25:42.547 Timestamp: Not Supported 00:25:42.547 Copy: Not Supported 00:25:42.547 Volatile Write Cache: Not Present 00:25:42.547 Atomic Write Unit (Normal): 1 00:25:42.547 Atomic Write Unit (PFail): 1 00:25:42.547 Atomic Compare & Write Unit: 1 00:25:42.547 Fused Compare & Write: Supported 00:25:42.547 Scatter-Gather List 00:25:42.547 SGL Command Set: Supported 00:25:42.547 SGL Keyed: Supported 00:25:42.547 SGL Bit Bucket Descriptor: Not Supported 00:25:42.547 SGL Metadata Pointer: Not Supported 00:25:42.547 Oversized SGL: Not Supported 00:25:42.547 SGL Metadata Address: Not Supported 00:25:42.547 SGL Offset: Supported 00:25:42.547 Transport SGL Data Block: Not Supported 00:25:42.547 Replay Protected Memory Block: Not Supported 00:25:42.547 00:25:42.547 
Firmware Slot Information 00:25:42.547 ========================= 00:25:42.547 Active slot: 0 00:25:42.547 00:25:42.547 00:25:42.547 Error Log 00:25:42.547 ========= 00:25:42.547 00:25:42.547 Active Namespaces 00:25:42.547 ================= 00:25:42.547 Discovery Log Page 00:25:42.547 ================== 00:25:42.547 Generation Counter: 2 00:25:42.547 Number of Records: 2 00:25:42.547 Record Format: 0 00:25:42.547 00:25:42.547 Discovery Log Entry 0 00:25:42.547 ---------------------- 00:25:42.547 Transport Type: 3 (TCP) 00:25:42.547 Address Family: 1 (IPv4) 00:25:42.547 Subsystem Type: 3 (Current Discovery Subsystem) 00:25:42.547 Entry Flags: 00:25:42.547 Duplicate Returned Information: 1 00:25:42.547 Explicit Persistent Connection Support for Discovery: 1 00:25:42.547 Transport Requirements: 00:25:42.547 Secure Channel: Not Required 00:25:42.547 Port ID: 0 (0x0000) 00:25:42.547 Controller ID: 65535 (0xffff) 00:25:42.547 Admin Max SQ Size: 128 00:25:42.547 Transport Service Identifier: 4420 00:25:42.547 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:25:42.547 Transport Address: 10.0.0.2 00:25:42.547 Discovery Log Entry 1 00:25:42.547 ---------------------- 00:25:42.547 Transport Type: 3 (TCP) 00:25:42.547 Address Family: 1 (IPv4) 00:25:42.547 Subsystem Type: 2 (NVM Subsystem) 00:25:42.547 Entry Flags: 00:25:42.547 Duplicate Returned Information: 0 00:25:42.547 Explicit Persistent Connection Support for Discovery: 0 00:25:42.547 Transport Requirements: 00:25:42.547 Secure Channel: Not Required 00:25:42.547 Port ID: 0 (0x0000) 00:25:42.547 Controller ID: 65535 (0xffff) 00:25:42.547 Admin Max SQ Size: 128 00:25:42.547 Transport Service Identifier: 4420 00:25:42.547 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:25:42.547 Transport Address: 10.0.0.2 [2024-07-23 01:46:55.535912] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:25:42.547 [2024-07-23 01:46:55.535936] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.547 [2024-07-23 01:46:55.535949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.547 [2024-07-23 01:46:55.535959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.547 [2024-07-23 01:46:55.535972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.547 [2024-07-23 01:46:55.535986] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:42.547 [2024-07-23 01:46:55.535994] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:42.547 [2024-07-23 01:46:55.536001] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20695a0) 00:25:42.547 [2024-07-23 01:46:55.536012] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.547 [2024-07-23 01:46:55.536036] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d4800, cid 3, qid 0 00:25:42.547 [2024-07-23 01:46:55.536264] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:42.547 [2024-07-23 01:46:55.536277] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:42.547 [2024-07-23 01:46:55.536285] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:42.547 [2024-07-23 01:46:55.536291] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20d4800) on tqpair=0x20695a0 00:25:42.547 [2024-07-23 01:46:55.536304] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:42.547 [2024-07-23 01:46:55.536312] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:42.547 [2024-07-23 
01:46:55.536318] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20695a0) 00:25:42.547 [2024-07-23 01:46:55.536329] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.547 [2024-07-23 01:46:55.536354] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d4800, cid 3, qid 0 00:25:42.547 [2024-07-23 01:46:55.536512] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:42.547 [2024-07-23 01:46:55.536524] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:42.547 [2024-07-23 01:46:55.536531] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:42.547 [2024-07-23 01:46:55.536537] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20d4800) on tqpair=0x20695a0 00:25:42.547 [2024-07-23 01:46:55.536547] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:25:42.547 [2024-07-23 01:46:55.536556] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:25:42.547 [2024-07-23 01:46:55.536571] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:42.547 [2024-07-23 01:46:55.536580] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:42.547 [2024-07-23 01:46:55.536586] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20695a0) 00:25:42.547 [2024-07-23 01:46:55.536597] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.547 [2024-07-23 01:46:55.540639] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d4800, cid 3, qid 0 00:25:42.547 [2024-07-23 01:46:55.540668] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:42.547 [2024-07-23 
01:46:55.540679] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:42.547 [2024-07-23 01:46:55.540699] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:42.547 [2024-07-23 01:46:55.540706] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20d4800) on tqpair=0x20695a0 00:25:42.547 [2024-07-23 01:46:55.540726] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:42.547 [2024-07-23 01:46:55.540736] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:42.547 [2024-07-23 01:46:55.540742] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x20695a0) 00:25:42.547 [2024-07-23 01:46:55.540753] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.547 [2024-07-23 01:46:55.540775] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20d4800, cid 3, qid 0 00:25:42.547 [2024-07-23 01:46:55.540928] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:42.547 [2024-07-23 01:46:55.540943] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:42.547 [2024-07-23 01:46:55.540950] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:42.547 [2024-07-23 01:46:55.540957] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20d4800) on tqpair=0x20695a0 00:25:42.547 [2024-07-23 01:46:55.540972] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 4 milliseconds 00:25:42.547 00:25:42.547 01:46:55 -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:25:42.547 [2024-07-23 01:46:55.573501] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:25:42.547 [2024-07-23 01:46:55.573547] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3862471 ] 00:25:42.547 EAL: No free 2048 kB hugepages reported on node 1 00:25:42.547 [2024-07-23 01:46:55.609395] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:25:42.547 [2024-07-23 01:46:55.609445] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:25:42.547 [2024-07-23 01:46:55.609455] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:25:42.547 [2024-07-23 01:46:55.609469] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:25:42.548 [2024-07-23 01:46:55.609481] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:25:42.548 [2024-07-23 01:46:55.609720] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:25:42.548 [2024-07-23 01:46:55.609763] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1c415a0 0 00:25:42.548 [2024-07-23 01:46:55.616629] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:25:42.548 [2024-07-23 01:46:55.616648] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:25:42.548 [2024-07-23 01:46:55.616656] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:25:42.548 [2024-07-23 01:46:55.616662] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:25:42.548 [2024-07-23 01:46:55.616698] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:42.548 [2024-07-23 01:46:55.616710] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:42.548 [2024-07-23 
01:46:55.616717] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c415a0) 00:25:42.548 [2024-07-23 01:46:55.616731] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:25:42.548 [2024-07-23 01:46:55.616756] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cac3e0, cid 0, qid 0 00:25:42.548 [2024-07-23 01:46:55.624631] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:42.548 [2024-07-23 01:46:55.624648] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:42.548 [2024-07-23 01:46:55.624656] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:42.548 [2024-07-23 01:46:55.624662] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cac3e0) on tqpair=0x1c415a0 00:25:42.548 [2024-07-23 01:46:55.624680] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:25:42.548 [2024-07-23 01:46:55.624691] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:25:42.548 [2024-07-23 01:46:55.624704] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:25:42.548 [2024-07-23 01:46:55.624720] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:42.548 [2024-07-23 01:46:55.624729] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:42.548 [2024-07-23 01:46:55.624735] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c415a0) 00:25:42.548 [2024-07-23 01:46:55.624746] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.548 [2024-07-23 01:46:55.624768] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cac3e0, cid 0, qid 0 00:25:42.548 
[2024-07-23 01:46:55.624960] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:42.548 [2024-07-23 01:46:55.624973] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:42.548 [2024-07-23 01:46:55.624980] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:42.548 [2024-07-23 01:46:55.624986] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cac3e0) on tqpair=0x1c415a0 00:25:42.548 [2024-07-23 01:46:55.624996] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:25:42.548 [2024-07-23 01:46:55.625008] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:25:42.548 [2024-07-23 01:46:55.625021] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:42.548 [2024-07-23 01:46:55.625028] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:42.548 [2024-07-23 01:46:55.625034] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c415a0) 00:25:42.548 [2024-07-23 01:46:55.625044] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.548 [2024-07-23 01:46:55.625065] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cac3e0, cid 0, qid 0 00:25:42.548 [2024-07-23 01:46:55.625218] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:42.548 [2024-07-23 01:46:55.625230] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:42.548 [2024-07-23 01:46:55.625236] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:42.548 [2024-07-23 01:46:55.625243] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cac3e0) on tqpair=0x1c415a0 00:25:42.548 [2024-07-23 01:46:55.625252] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:25:42.548 [2024-07-23 01:46:55.625265] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:25:42.548 [2024-07-23 01:46:55.625277] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:42.548 [2024-07-23 01:46:55.625284] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:42.548 [2024-07-23 01:46:55.625291] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c415a0) 00:25:42.548 [2024-07-23 01:46:55.625301] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.548 [2024-07-23 01:46:55.625321] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cac3e0, cid 0, qid 0 00:25:42.548 [2024-07-23 01:46:55.625463] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:42.548 [2024-07-23 01:46:55.625477] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:42.548 [2024-07-23 01:46:55.625484] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:42.548 [2024-07-23 01:46:55.625490] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cac3e0) on tqpair=0x1c415a0 00:25:42.548 [2024-07-23 01:46:55.625500] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:25:42.548 [2024-07-23 01:46:55.625516] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:42.548 [2024-07-23 01:46:55.625525] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:42.548 [2024-07-23 01:46:55.625535] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c415a0) 00:25:42.548 [2024-07-23 01:46:55.625546] nvme_qpair.c: 218:nvme_admin_qpair_print_command: 
*NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.548 [2024-07-23 01:46:55.625566] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cac3e0, cid 0, qid 0 00:25:42.548 [2024-07-23 01:46:55.625746] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:42.548 [2024-07-23 01:46:55.625760] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:42.548 [2024-07-23 01:46:55.625767] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:42.548 [2024-07-23 01:46:55.625774] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cac3e0) on tqpair=0x1c415a0 00:25:42.548 [2024-07-23 01:46:55.625783] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:25:42.548 [2024-07-23 01:46:55.625792] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:25:42.548 [2024-07-23 01:46:55.625805] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:25:42.548 [2024-07-23 01:46:55.625916] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:25:42.548 [2024-07-23 01:46:55.625924] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:25:42.548 [2024-07-23 01:46:55.625935] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:42.548 [2024-07-23 01:46:55.625943] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:42.548 [2024-07-23 01:46:55.625949] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c415a0) 00:25:42.548 [2024-07-23 01:46:55.625959] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.548 [2024-07-23 01:46:55.625979] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cac3e0, cid 0, qid 0 00:25:42.548 [2024-07-23 01:46:55.626145] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:42.548 [2024-07-23 01:46:55.626157] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:42.548 [2024-07-23 01:46:55.626163] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:42.548 [2024-07-23 01:46:55.626170] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cac3e0) on tqpair=0x1c415a0 00:25:42.548 [2024-07-23 01:46:55.626179] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:25:42.548 [2024-07-23 01:46:55.626195] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:42.549 [2024-07-23 01:46:55.626204] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:42.549 [2024-07-23 01:46:55.626210] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c415a0) 00:25:42.549 [2024-07-23 01:46:55.626220] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.549 [2024-07-23 01:46:55.626240] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cac3e0, cid 0, qid 0 00:25:42.549 [2024-07-23 01:46:55.626391] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:42.549 [2024-07-23 01:46:55.626403] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:42.549 [2024-07-23 01:46:55.626409] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:42.549 [2024-07-23 01:46:55.626416] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cac3e0) on 
tqpair=0x1c415a0 00:25:42.549 [2024-07-23 01:46:55.626424] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:25:42.549 [2024-07-23 01:46:55.626432] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:25:42.549 [2024-07-23 01:46:55.626449] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:25:42.549 [2024-07-23 01:46:55.626466] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:25:42.549 [2024-07-23 01:46:55.626479] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:42.549 [2024-07-23 01:46:55.626487] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:42.549 [2024-07-23 01:46:55.626493] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c415a0) 00:25:42.549 [2024-07-23 01:46:55.626503] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.549 [2024-07-23 01:46:55.626538] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cac3e0, cid 0, qid 0 00:25:42.549 [2024-07-23 01:46:55.626753] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:42.549 [2024-07-23 01:46:55.626769] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:42.549 [2024-07-23 01:46:55.626776] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:42.549 [2024-07-23 01:46:55.626783] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c415a0): datao=0, datal=4096, cccid=0 00:25:42.549 [2024-07-23 01:46:55.626790] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: tcp_req(0x1cac3e0) on tqpair(0x1c415a0): expected_datao=0, payload_size=4096 00:25:42.549 [2024-07-23 01:46:55.626832] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:42.549 [2024-07-23 01:46:55.626841] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:42.549 [2024-07-23 01:46:55.626970] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:42.549 [2024-07-23 01:46:55.626985] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:42.549 [2024-07-23 01:46:55.626991] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:42.549 [2024-07-23 01:46:55.626998] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cac3e0) on tqpair=0x1c415a0 00:25:42.549 [2024-07-23 01:46:55.627009] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:25:42.549 [2024-07-23 01:46:55.627017] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:25:42.549 [2024-07-23 01:46:55.627024] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:25:42.549 [2024-07-23 01:46:55.627030] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:25:42.549 [2024-07-23 01:46:55.627038] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:25:42.549 [2024-07-23 01:46:55.627045] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:25:42.549 [2024-07-23 01:46:55.627063] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:25:42.549 [2024-07-23 01:46:55.627076] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:42.549 [2024-07-23 
01:46:55.627083] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:42.549 [2024-07-23 01:46:55.627089] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c415a0) 00:25:42.549 [2024-07-23 01:46:55.627099] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:42.549 [2024-07-23 01:46:55.627120] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cac3e0, cid 0, qid 0 00:25:42.549 [2024-07-23 01:46:55.627275] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:42.549 [2024-07-23 01:46:55.627290] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:42.549 [2024-07-23 01:46:55.627296] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:42.549 [2024-07-23 01:46:55.627306] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cac3e0) on tqpair=0x1c415a0 00:25:42.549 [2024-07-23 01:46:55.627318] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:42.549 [2024-07-23 01:46:55.627326] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:42.549 [2024-07-23 01:46:55.627332] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c415a0) 00:25:42.549 [2024-07-23 01:46:55.627342] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:42.549 [2024-07-23 01:46:55.627352] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:42.549 [2024-07-23 01:46:55.627358] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:42.549 [2024-07-23 01:46:55.627364] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1c415a0) 00:25:42.549 [2024-07-23 01:46:55.627373] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC 
EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:42.549 [2024-07-23 01:46:55.627382] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:42.549 [2024-07-23 01:46:55.627388] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:42.549 [2024-07-23 01:46:55.627395] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1c415a0) 00:25:42.549 [2024-07-23 01:46:55.627403] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:42.549 [2024-07-23 01:46:55.627412] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:42.549 [2024-07-23 01:46:55.627419] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:42.549 [2024-07-23 01:46:55.627439] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c415a0) 00:25:42.549 [2024-07-23 01:46:55.627448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:42.549 [2024-07-23 01:46:55.627456] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:25:42.549 [2024-07-23 01:46:55.627474] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:25:42.549 [2024-07-23 01:46:55.627486] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:42.549 [2024-07-23 01:46:55.627493] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:42.549 [2024-07-23 01:46:55.627499] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c415a0) 00:25:42.549 [2024-07-23 01:46:55.627508] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.549 [2024-07-23 01:46:55.627529] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cac3e0, cid 0, qid 0 00:25:42.549 [2024-07-23 01:46:55.627555] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cac540, cid 1, qid 0 00:25:42.549 [2024-07-23 01:46:55.627563] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cac6a0, cid 2, qid 0 00:25:42.549 [2024-07-23 01:46:55.627571] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cac800, cid 3, qid 0 00:25:42.549 [2024-07-23 01:46:55.627579] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cac960, cid 4, qid 0 00:25:42.549 [2024-07-23 01:46:55.627787] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:42.549 [2024-07-23 01:46:55.627802] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:42.549 [2024-07-23 01:46:55.627809] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:42.549 [2024-07-23 01:46:55.627815] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cac960) on tqpair=0x1c415a0 00:25:42.549 [2024-07-23 01:46:55.627824] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:25:42.549 [2024-07-23 01:46:55.627833] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:25:42.549 [2024-07-23 01:46:55.627851] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:25:42.549 [2024-07-23 01:46:55.627867] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:25:42.549 [2024-07-23 01:46:55.627879] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:25:42.549 [2024-07-23 01:46:55.627887] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:42.549 [2024-07-23 01:46:55.627893] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c415a0) 00:25:42.549 [2024-07-23 01:46:55.627904] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:42.549 [2024-07-23 01:46:55.627939] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cac960, cid 4, qid 0 00:25:42.549 [2024-07-23 01:46:55.628109] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:42.549 [2024-07-23 01:46:55.628122] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:42.549 [2024-07-23 01:46:55.628128] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:42.549 [2024-07-23 01:46:55.628135] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cac960) on tqpair=0x1c415a0 00:25:42.549 [2024-07-23 01:46:55.628198] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:25:42.549 [2024-07-23 01:46:55.628215] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:25:42.549 [2024-07-23 01:46:55.628229] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:42.549 [2024-07-23 01:46:55.628236] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:42.549 [2024-07-23 01:46:55.628243] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c415a0) 00:25:42.549 [2024-07-23 01:46:55.628253] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.550 [2024-07-23 01:46:55.628273] nvme_tcp.c: 
872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cac960, cid 4, qid 0 00:25:42.550 [2024-07-23 01:46:55.628443] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:42.550 [2024-07-23 01:46:55.628455] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:42.550 [2024-07-23 01:46:55.628461] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:42.550 [2024-07-23 01:46:55.628467] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c415a0): datao=0, datal=4096, cccid=4 00:25:42.550 [2024-07-23 01:46:55.628475] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1cac960) on tqpair(0x1c415a0): expected_datao=0, payload_size=4096 00:25:42.550 [2024-07-23 01:46:55.628515] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:42.550 [2024-07-23 01:46:55.628524] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:42.811 [2024-07-23 01:46:55.668750] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:42.811 [2024-07-23 01:46:55.668770] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:42.811 [2024-07-23 01:46:55.668778] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:42.811 [2024-07-23 01:46:55.668784] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cac960) on tqpair=0x1c415a0 00:25:42.811 [2024-07-23 01:46:55.668805] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:25:42.811 [2024-07-23 01:46:55.668822] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:25:42.811 [2024-07-23 01:46:55.668840] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:25:42.811 [2024-07-23 01:46:55.668861] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: 
enter 00:25:42.811 [2024-07-23 01:46:55.668870] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:42.811 [2024-07-23 01:46:55.668876] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c415a0) 00:25:42.811 [2024-07-23 01:46:55.668887] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.811 [2024-07-23 01:46:55.668926] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cac960, cid 4, qid 0 00:25:42.811 [2024-07-23 01:46:55.669084] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:42.812 [2024-07-23 01:46:55.669099] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:42.812 [2024-07-23 01:46:55.669106] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:42.812 [2024-07-23 01:46:55.669112] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c415a0): datao=0, datal=4096, cccid=4 00:25:42.812 [2024-07-23 01:46:55.669120] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1cac960) on tqpair(0x1c415a0): expected_datao=0, payload_size=4096 00:25:42.812 [2024-07-23 01:46:55.669154] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:42.812 [2024-07-23 01:46:55.669163] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:42.812 [2024-07-23 01:46:55.709758] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:42.812 [2024-07-23 01:46:55.709777] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:42.812 [2024-07-23 01:46:55.709785] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:42.812 [2024-07-23 01:46:55.709792] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cac960) on tqpair=0x1c415a0 00:25:42.812 [2024-07-23 01:46:55.709814] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:25:42.812 [2024-07-23 01:46:55.709833] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:25:42.812 [2024-07-23 01:46:55.709847] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:42.812 [2024-07-23 01:46:55.709855] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:42.812 [2024-07-23 01:46:55.709862] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c415a0) 00:25:42.812 [2024-07-23 01:46:55.709873] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.812 [2024-07-23 01:46:55.709896] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cac960, cid 4, qid 0 00:25:42.812 [2024-07-23 01:46:55.710075] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:42.812 [2024-07-23 01:46:55.710088] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:42.812 [2024-07-23 01:46:55.710094] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:42.812 [2024-07-23 01:46:55.710101] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c415a0): datao=0, datal=4096, cccid=4 00:25:42.812 [2024-07-23 01:46:55.710108] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1cac960) on tqpair(0x1c415a0): expected_datao=0, payload_size=4096 00:25:42.812 [2024-07-23 01:46:55.710145] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:42.812 [2024-07-23 01:46:55.710154] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:42.812 [2024-07-23 01:46:55.754643] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:42.812 [2024-07-23 
01:46:55.754670] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:42.812 [2024-07-23 01:46:55.754678] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:42.812 [2024-07-23 01:46:55.754684] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cac960) on tqpair=0x1c415a0 00:25:42.812 [2024-07-23 01:46:55.754715] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:25:42.812 [2024-07-23 01:46:55.754736] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:25:42.812 [2024-07-23 01:46:55.754752] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:25:42.812 [2024-07-23 01:46:55.754763] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:25:42.812 [2024-07-23 01:46:55.754772] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:25:42.812 [2024-07-23 01:46:55.754780] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:25:42.812 [2024-07-23 01:46:55.754788] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:25:42.812 [2024-07-23 01:46:55.754796] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:25:42.812 [2024-07-23 01:46:55.754815] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:42.812 [2024-07-23 01:46:55.754823] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:42.812 [2024-07-23 01:46:55.754830] 
nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c415a0) 00:25:42.812 [2024-07-23 01:46:55.754841] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.812 [2024-07-23 01:46:55.754852] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:42.812 [2024-07-23 01:46:55.754859] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:42.812 [2024-07-23 01:46:55.754865] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c415a0) 00:25:42.812 [2024-07-23 01:46:55.754875] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:42.812 [2024-07-23 01:46:55.754901] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cac960, cid 4, qid 0 00:25:42.812 [2024-07-23 01:46:55.754913] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cacac0, cid 5, qid 0 00:25:42.812 [2024-07-23 01:46:55.755074] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:42.812 [2024-07-23 01:46:55.755089] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:42.812 [2024-07-23 01:46:55.755096] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:42.812 [2024-07-23 01:46:55.755103] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cac960) on tqpair=0x1c415a0 00:25:42.812 [2024-07-23 01:46:55.755114] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:42.812 [2024-07-23 01:46:55.755125] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:42.812 [2024-07-23 01:46:55.755131] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:42.812 [2024-07-23 01:46:55.755137] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cacac0) on tqpair=0x1c415a0 
00:25:42.812 [2024-07-23 01:46:55.755155] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:42.812 [2024-07-23 01:46:55.755165] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:42.812 [2024-07-23 01:46:55.755173] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c415a0) 00:25:42.812 [2024-07-23 01:46:55.755199] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.812 [2024-07-23 01:46:55.755219] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cacac0, cid 5, qid 0 00:25:42.812 [2024-07-23 01:46:55.755389] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:42.812 [2024-07-23 01:46:55.755402] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:42.812 [2024-07-23 01:46:55.755409] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:42.812 [2024-07-23 01:46:55.755419] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cacac0) on tqpair=0x1c415a0 00:25:42.812 [2024-07-23 01:46:55.755436] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:42.812 [2024-07-23 01:46:55.755445] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:42.812 [2024-07-23 01:46:55.755452] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c415a0) 00:25:42.812 [2024-07-23 01:46:55.755461] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.812 [2024-07-23 01:46:55.755481] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cacac0, cid 5, qid 0 00:25:42.812 [2024-07-23 01:46:55.755649] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:42.812 [2024-07-23 01:46:55.755664] 
nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:42.812 [2024-07-23 01:46:55.755671] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:42.812 [2024-07-23 01:46:55.755677] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cacac0) on tqpair=0x1c415a0 00:25:42.812 [2024-07-23 01:46:55.755695] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:42.812 [2024-07-23 01:46:55.755705] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:42.812 [2024-07-23 01:46:55.755713] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c415a0) 00:25:42.812 [2024-07-23 01:46:55.755724] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.812 [2024-07-23 01:46:55.755746] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cacac0, cid 5, qid 0 00:25:42.812 [2024-07-23 01:46:55.755945] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:42.812 [2024-07-23 01:46:55.755958] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:42.812 [2024-07-23 01:46:55.755964] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:42.812 [2024-07-23 01:46:55.755971] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cacac0) on tqpair=0x1c415a0 00:25:42.812 [2024-07-23 01:46:55.755990] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:42.812 [2024-07-23 01:46:55.756000] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:42.812 [2024-07-23 01:46:55.756006] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c415a0) 00:25:42.812 [2024-07-23 01:46:55.756016] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:25:42.812 [2024-07-23 01:46:55.756028] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:42.812 [2024-07-23 01:46:55.756035] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:42.812 [2024-07-23 01:46:55.756041] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c415a0) 00:25:42.812 [2024-07-23 01:46:55.756050] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.812 [2024-07-23 01:46:55.756060] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:42.812 [2024-07-23 01:46:55.756067] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:42.812 [2024-07-23 01:46:55.756073] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1c415a0) 00:25:42.812 [2024-07-23 01:46:55.756082] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.812 [2024-07-23 01:46:55.756093] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:42.812 [2024-07-23 01:46:55.756100] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:42.812 [2024-07-23 01:46:55.756106] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1c415a0) 00:25:42.812 [2024-07-23 01:46:55.756130] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.812 [2024-07-23 01:46:55.756156] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cacac0, cid 5, qid 0 00:25:42.812 [2024-07-23 01:46:55.756167] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cac960, cid 4, qid 0 00:25:42.812 
[2024-07-23 01:46:55.756175] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cacc20, cid 6, qid 0 00:25:42.812 [2024-07-23 01:46:55.756198] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cacd80, cid 7, qid 0 00:25:42.812 [2024-07-23 01:46:55.756429] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:42.812 [2024-07-23 01:46:55.756442] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:42.812 [2024-07-23 01:46:55.756448] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:42.812 [2024-07-23 01:46:55.756455] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c415a0): datao=0, datal=8192, cccid=5 00:25:42.812 [2024-07-23 01:46:55.756463] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1cacac0) on tqpair(0x1c415a0): expected_datao=0, payload_size=8192 00:25:42.812 [2024-07-23 01:46:55.756525] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:42.812 [2024-07-23 01:46:55.756535] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:42.812 [2024-07-23 01:46:55.756545] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:42.812 [2024-07-23 01:46:55.756554] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:42.812 [2024-07-23 01:46:55.756561] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:42.812 [2024-07-23 01:46:55.756566] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c415a0): datao=0, datal=512, cccid=4 00:25:42.812 [2024-07-23 01:46:55.756574] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1cac960) on tqpair(0x1c415a0): expected_datao=0, payload_size=512 00:25:42.812 [2024-07-23 01:46:55.756583] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:42.812 [2024-07-23 01:46:55.756591] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:42.812 
[2024-07-23 01:46:55.756623] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:42.812 [2024-07-23 01:46:55.756633] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:42.812 [2024-07-23 01:46:55.756640] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:42.812 [2024-07-23 01:46:55.756646] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c415a0): datao=0, datal=512, cccid=6 00:25:42.812 [2024-07-23 01:46:55.756654] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1cacc20) on tqpair(0x1c415a0): expected_datao=0, payload_size=512 00:25:42.812 [2024-07-23 01:46:55.756664] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:42.812 [2024-07-23 01:46:55.756670] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:42.812 [2024-07-23 01:46:55.756678] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:42.812 [2024-07-23 01:46:55.756687] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:42.812 [2024-07-23 01:46:55.756694] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:42.812 [2024-07-23 01:46:55.756700] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c415a0): datao=0, datal=4096, cccid=7 00:25:42.812 [2024-07-23 01:46:55.756707] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1cacd80) on tqpair(0x1c415a0): expected_datao=0, payload_size=4096 00:25:42.812 [2024-07-23 01:46:55.756718] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:42.812 [2024-07-23 01:46:55.756725] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:42.812 [2024-07-23 01:46:55.756736] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:42.812 [2024-07-23 01:46:55.756745] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:42.812 [2024-07-23 01:46:55.756752] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:42.812 [2024-07-23 01:46:55.756759] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cacac0) on tqpair=0x1c415a0 00:25:42.812 [2024-07-23 01:46:55.756782] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:42.812 [2024-07-23 01:46:55.756793] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:42.812 [2024-07-23 01:46:55.756801] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:42.812 [2024-07-23 01:46:55.756808] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cac960) on tqpair=0x1c415a0 00:25:42.812 [2024-07-23 01:46:55.756822] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:42.812 [2024-07-23 01:46:55.756833] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:42.812 [2024-07-23 01:46:55.756839] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:42.812 [2024-07-23 01:46:55.756846] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cacc20) on tqpair=0x1c415a0 00:25:42.812 [2024-07-23 01:46:55.756858] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:42.812 [2024-07-23 01:46:55.756869] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:42.812 [2024-07-23 01:46:55.756876] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:42.812 [2024-07-23 01:46:55.756882] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cacd80) on tqpair=0x1c415a0 00:25:42.812 ===================================================== 00:25:42.812 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:42.812 ===================================================== 00:25:42.812 Controller Capabilities/Features 00:25:42.812 ================================ 00:25:42.812 Vendor ID: 8086 00:25:42.812 Subsystem Vendor ID: 8086 00:25:42.812 
Serial Number: SPDK00000000000001 00:25:42.812 Model Number: SPDK bdev Controller 00:25:42.812 Firmware Version: 24.01.1 00:25:42.812 Recommended Arb Burst: 6 00:25:42.812 IEEE OUI Identifier: e4 d2 5c 00:25:42.812 Multi-path I/O 00:25:42.812 May have multiple subsystem ports: Yes 00:25:42.812 May have multiple controllers: Yes 00:25:42.812 Associated with SR-IOV VF: No 00:25:42.812 Max Data Transfer Size: 131072 00:25:42.812 Max Number of Namespaces: 32 00:25:42.812 Max Number of I/O Queues: 127 00:25:42.812 NVMe Specification Version (VS): 1.3 00:25:42.812 NVMe Specification Version (Identify): 1.3 00:25:42.812 Maximum Queue Entries: 128 00:25:42.813 Contiguous Queues Required: Yes 00:25:42.813 Arbitration Mechanisms Supported 00:25:42.813 Weighted Round Robin: Not Supported 00:25:42.813 Vendor Specific: Not Supported 00:25:42.813 Reset Timeout: 15000 ms 00:25:42.813 Doorbell Stride: 4 bytes 00:25:42.813 NVM Subsystem Reset: Not Supported 00:25:42.813 Command Sets Supported 00:25:42.813 NVM Command Set: Supported 00:25:42.813 Boot Partition: Not Supported 00:25:42.813 Memory Page Size Minimum: 4096 bytes 00:25:42.813 Memory Page Size Maximum: 4096 bytes 00:25:42.813 Persistent Memory Region: Not Supported 00:25:42.813 Optional Asynchronous Events Supported 00:25:42.813 Namespace Attribute Notices: Supported 00:25:42.813 Firmware Activation Notices: Not Supported 00:25:42.813 ANA Change Notices: Not Supported 00:25:42.813 PLE Aggregate Log Change Notices: Not Supported 00:25:42.813 LBA Status Info Alert Notices: Not Supported 00:25:42.813 EGE Aggregate Log Change Notices: Not Supported 00:25:42.813 Normal NVM Subsystem Shutdown event: Not Supported 00:25:42.813 Zone Descriptor Change Notices: Not Supported 00:25:42.813 Discovery Log Change Notices: Not Supported 00:25:42.813 Controller Attributes 00:25:42.813 128-bit Host Identifier: Supported 00:25:42.813 Non-Operational Permissive Mode: Not Supported 00:25:42.813 NVM Sets: Not Supported 00:25:42.813 Read 
Recovery Levels: Not Supported 00:25:42.813 Endurance Groups: Not Supported 00:25:42.813 Predictable Latency Mode: Not Supported 00:25:42.813 Traffic Based Keep ALive: Not Supported 00:25:42.813 Namespace Granularity: Not Supported 00:25:42.813 SQ Associations: Not Supported 00:25:42.813 UUID List: Not Supported 00:25:42.813 Multi-Domain Subsystem: Not Supported 00:25:42.813 Fixed Capacity Management: Not Supported 00:25:42.813 Variable Capacity Management: Not Supported 00:25:42.813 Delete Endurance Group: Not Supported 00:25:42.813 Delete NVM Set: Not Supported 00:25:42.813 Extended LBA Formats Supported: Not Supported 00:25:42.813 Flexible Data Placement Supported: Not Supported 00:25:42.813 00:25:42.813 Controller Memory Buffer Support 00:25:42.813 ================================ 00:25:42.813 Supported: No 00:25:42.813 00:25:42.813 Persistent Memory Region Support 00:25:42.813 ================================ 00:25:42.813 Supported: No 00:25:42.813 00:25:42.813 Admin Command Set Attributes 00:25:42.813 ============================ 00:25:42.813 Security Send/Receive: Not Supported 00:25:42.813 Format NVM: Not Supported 00:25:42.813 Firmware Activate/Download: Not Supported 00:25:42.813 Namespace Management: Not Supported 00:25:42.813 Device Self-Test: Not Supported 00:25:42.813 Directives: Not Supported 00:25:42.813 NVMe-MI: Not Supported 00:25:42.813 Virtualization Management: Not Supported 00:25:42.813 Doorbell Buffer Config: Not Supported 00:25:42.813 Get LBA Status Capability: Not Supported 00:25:42.813 Command & Feature Lockdown Capability: Not Supported 00:25:42.813 Abort Command Limit: 4 00:25:42.813 Async Event Request Limit: 4 00:25:42.813 Number of Firmware Slots: N/A 00:25:42.813 Firmware Slot 1 Read-Only: N/A 00:25:42.813 Firmware Activation Without Reset: N/A 00:25:42.813 Multiple Update Detection Support: N/A 00:25:42.813 Firmware Update Granularity: No Information Provided 00:25:42.813 Per-Namespace SMART Log: No 00:25:42.813 Asymmetric Namespace 
Access Log Page: Not Supported 00:25:42.813 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:25:42.813 Command Effects Log Page: Supported 00:25:42.813 Get Log Page Extended Data: Supported 00:25:42.813 Telemetry Log Pages: Not Supported 00:25:42.813 Persistent Event Log Pages: Not Supported 00:25:42.813 Supported Log Pages Log Page: May Support 00:25:42.813 Commands Supported & Effects Log Page: Not Supported 00:25:42.813 Feature Identifiers & Effects Log Page:May Support 00:25:42.813 NVMe-MI Commands & Effects Log Page: May Support 00:25:42.813 Data Area 4 for Telemetry Log: Not Supported 00:25:42.813 Error Log Page Entries Supported: 128 00:25:42.813 Keep Alive: Supported 00:25:42.813 Keep Alive Granularity: 10000 ms 00:25:42.813 00:25:42.813 NVM Command Set Attributes 00:25:42.813 ========================== 00:25:42.813 Submission Queue Entry Size 00:25:42.813 Max: 64 00:25:42.813 Min: 64 00:25:42.813 Completion Queue Entry Size 00:25:42.813 Max: 16 00:25:42.813 Min: 16 00:25:42.813 Number of Namespaces: 32 00:25:42.813 Compare Command: Supported 00:25:42.813 Write Uncorrectable Command: Not Supported 00:25:42.813 Dataset Management Command: Supported 00:25:42.813 Write Zeroes Command: Supported 00:25:42.813 Set Features Save Field: Not Supported 00:25:42.813 Reservations: Supported 00:25:42.813 Timestamp: Not Supported 00:25:42.813 Copy: Supported 00:25:42.813 Volatile Write Cache: Present 00:25:42.813 Atomic Write Unit (Normal): 1 00:25:42.813 Atomic Write Unit (PFail): 1 00:25:42.813 Atomic Compare & Write Unit: 1 00:25:42.813 Fused Compare & Write: Supported 00:25:42.813 Scatter-Gather List 00:25:42.813 SGL Command Set: Supported 00:25:42.813 SGL Keyed: Supported 00:25:42.813 SGL Bit Bucket Descriptor: Not Supported 00:25:42.813 SGL Metadata Pointer: Not Supported 00:25:42.813 Oversized SGL: Not Supported 00:25:42.813 SGL Metadata Address: Not Supported 00:25:42.813 SGL Offset: Supported 00:25:42.813 Transport SGL Data Block: Not Supported 00:25:42.813 Replay 
Protected Memory Block: Not Supported 00:25:42.813 00:25:42.813 Firmware Slot Information 00:25:42.813 ========================= 00:25:42.813 Active slot: 1 00:25:42.813 Slot 1 Firmware Revision: 24.01.1 00:25:42.813 00:25:42.813 00:25:42.813 Commands Supported and Effects 00:25:42.813 ============================== 00:25:42.813 Admin Commands 00:25:42.813 -------------- 00:25:42.813 Get Log Page (02h): Supported 00:25:42.813 Identify (06h): Supported 00:25:42.813 Abort (08h): Supported 00:25:42.813 Set Features (09h): Supported 00:25:42.813 Get Features (0Ah): Supported 00:25:42.813 Asynchronous Event Request (0Ch): Supported 00:25:42.813 Keep Alive (18h): Supported 00:25:42.813 I/O Commands 00:25:42.813 ------------ 00:25:42.813 Flush (00h): Supported LBA-Change 00:25:42.813 Write (01h): Supported LBA-Change 00:25:42.813 Read (02h): Supported 00:25:42.813 Compare (05h): Supported 00:25:42.813 Write Zeroes (08h): Supported LBA-Change 00:25:42.813 Dataset Management (09h): Supported LBA-Change 00:25:42.813 Copy (19h): Supported LBA-Change 00:25:42.813 Unknown (79h): Supported LBA-Change 00:25:42.813 Unknown (7Ah): Supported 00:25:42.813 00:25:42.813 Error Log 00:25:42.813 ========= 00:25:42.813 00:25:42.813 Arbitration 00:25:42.813 =========== 00:25:42.813 Arbitration Burst: 1 00:25:42.813 00:25:42.813 Power Management 00:25:42.813 ================ 00:25:42.813 Number of Power States: 1 00:25:42.813 Current Power State: Power State #0 00:25:42.813 Power State #0: 00:25:42.813 Max Power: 0.00 W 00:25:42.813 Non-Operational State: Operational 00:25:42.813 Entry Latency: Not Reported 00:25:42.813 Exit Latency: Not Reported 00:25:42.813 Relative Read Throughput: 0 00:25:42.813 Relative Read Latency: 0 00:25:42.813 Relative Write Throughput: 0 00:25:42.813 Relative Write Latency: 0 00:25:42.813 Idle Power: Not Reported 00:25:42.813 Active Power: Not Reported 00:25:42.813 Non-Operational Permissive Mode: Not Supported 00:25:42.813 00:25:42.813 Health Information 
00:25:42.813 ================== 00:25:42.813 Critical Warnings: 00:25:42.813 Available Spare Space: OK 00:25:42.813 Temperature: OK 00:25:42.813 Device Reliability: OK 00:25:42.813 Read Only: No 00:25:42.813 Volatile Memory Backup: OK 00:25:42.813 Current Temperature: 0 Kelvin (-273 Celsius) 00:25:42.813 Temperature Threshold: [2024-07-23 01:46:55.757012] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:42.813 [2024-07-23 01:46:55.757023] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:42.813 [2024-07-23 01:46:55.757030] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1c415a0) 00:25:42.813 [2024-07-23 01:46:55.757040] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.813 [2024-07-23 01:46:55.757061] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cacd80, cid 7, qid 0 00:25:42.813 [2024-07-23 01:46:55.757238] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:42.813 [2024-07-23 01:46:55.757250] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:42.813 [2024-07-23 01:46:55.757257] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:42.813 [2024-07-23 01:46:55.757263] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cacd80) on tqpair=0x1c415a0 00:25:42.813 [2024-07-23 01:46:55.757301] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:25:42.813 [2024-07-23 01:46:55.757322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.813 [2024-07-23 01:46:55.757334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.813 [2024-07-23 01:46:55.757344] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.813 [2024-07-23 01:46:55.757354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:42.813 [2024-07-23 01:46:55.757380] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:42.813 [2024-07-23 01:46:55.757389] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:42.813 [2024-07-23 01:46:55.757395] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c415a0) 00:25:42.813 [2024-07-23 01:46:55.757405] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.813 [2024-07-23 01:46:55.757425] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cac800, cid 3, qid 0 00:25:42.813 [2024-07-23 01:46:55.757607] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:42.813 [2024-07-23 01:46:55.757629] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:42.813 [2024-07-23 01:46:55.757637] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:42.813 [2024-07-23 01:46:55.757644] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cac800) on tqpair=0x1c415a0 00:25:42.813 [2024-07-23 01:46:55.757657] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:42.813 [2024-07-23 01:46:55.757668] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:42.813 [2024-07-23 01:46:55.757675] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c415a0) 00:25:42.813 [2024-07-23 01:46:55.757686] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.813 [2024-07-23 01:46:55.757713] nvme_tcp.c: 
872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cac800, cid 3, qid 0 00:25:42.813 [2024-07-23 01:46:55.757879] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:42.813 [2024-07-23 01:46:55.757894] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:42.813 [2024-07-23 01:46:55.757901] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:42.813 [2024-07-23 01:46:55.757908] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cac800) on tqpair=0x1c415a0 00:25:42.813 [2024-07-23 01:46:55.757917] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:25:42.813 [2024-07-23 01:46:55.757925] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:25:42.813 [2024-07-23 01:46:55.757957] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:42.813 [2024-07-23 01:46:55.757966] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:42.813 [2024-07-23 01:46:55.757973] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c415a0) 00:25:42.813 [2024-07-23 01:46:55.757983] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.813 [2024-07-23 01:46:55.758017] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cac800, cid 3, qid 0 00:25:42.813 [2024-07-23 01:46:55.758181] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:42.813 [2024-07-23 01:46:55.758196] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:42.813 [2024-07-23 01:46:55.758202] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:42.813 [2024-07-23 01:46:55.758209] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cac800) on tqpair=0x1c415a0 00:25:42.813 [2024-07-23 
01:46:55.758226] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:42.813 [2024-07-23 01:46:55.758236] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:42.813 [2024-07-23 01:46:55.758242] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c415a0) 00:25:42.813 [2024-07-23 01:46:55.758252] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.813 [2024-07-23 01:46:55.758272] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cac800, cid 3, qid 0 00:25:42.813 [2024-07-23 01:46:55.758411] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:42.813 [2024-07-23 01:46:55.758426] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:42.813 [2024-07-23 01:46:55.758433] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:42.813 [2024-07-23 01:46:55.758440] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cac800) on tqpair=0x1c415a0 00:25:42.813 [2024-07-23 01:46:55.758457] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:42.813 [2024-07-23 01:46:55.758467] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:42.813 [2024-07-23 01:46:55.758473] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c415a0) 00:25:42.813 [2024-07-23 01:46:55.758483] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.813 [2024-07-23 01:46:55.758503] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cac800, cid 3, qid 0 00:25:42.813 [2024-07-23 01:46:55.762626] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:42.813 [2024-07-23 01:46:55.762645] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:42.813 [2024-07-23 
01:46:55.762653] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:42.813 [2024-07-23 01:46:55.762664] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cac800) on tqpair=0x1c415a0 00:25:42.813 [2024-07-23 01:46:55.762684] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:42.813 [2024-07-23 01:46:55.762694] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:42.813 [2024-07-23 01:46:55.762700] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c415a0) 00:25:42.813 [2024-07-23 01:46:55.762711] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:42.813 [2024-07-23 01:46:55.762733] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cac800, cid 3, qid 0 00:25:42.813 [2024-07-23 01:46:55.762890] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:42.813 [2024-07-23 01:46:55.762902] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:42.813 [2024-07-23 01:46:55.762909] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:42.813 [2024-07-23 01:46:55.762916] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1cac800) on tqpair=0x1c415a0 00:25:42.813 [2024-07-23 01:46:55.762945] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 4 milliseconds 00:25:42.813 0 Kelvin (-273 Celsius) 00:25:42.813 Available Spare: 0% 00:25:42.813 Available Spare Threshold: 0% 00:25:42.813 Life Percentage Used: 0% 00:25:42.813 Data Units Read: 0 00:25:42.813 Data Units Written: 0 00:25:42.813 Host Read Commands: 0 00:25:42.813 Host Write Commands: 0 00:25:42.813 Controller Busy Time: 0 minutes 00:25:42.813 Power Cycles: 0 00:25:42.813 Power On Hours: 0 hours 00:25:42.813 Unsafe Shutdowns: 0 00:25:42.813 Unrecoverable Media Errors: 0 00:25:42.813 
Lifetime Error Log Entries: 0 00:25:42.813 Warning Temperature Time: 0 minutes 00:25:42.813 Critical Temperature Time: 0 minutes 00:25:42.813 00:25:42.813 Number of Queues 00:25:42.813 ================ 00:25:42.813 Number of I/O Submission Queues: 127 00:25:42.813 Number of I/O Completion Queues: 127 00:25:42.813 00:25:42.813 Active Namespaces 00:25:42.813 ================= 00:25:42.813 Namespace ID:1 00:25:42.813 Error Recovery Timeout: Unlimited 00:25:42.813 Command Set Identifier: NVM (00h) 00:25:42.813 Deallocate: Supported 00:25:42.813 Deallocated/Unwritten Error: Not Supported 00:25:42.813 Deallocated Read Value: Unknown 00:25:42.813 Deallocate in Write Zeroes: Not Supported 00:25:42.813 Deallocated Guard Field: 0xFFFF 00:25:42.813 Flush: Supported 00:25:42.813 Reservation: Supported 00:25:42.814 Namespace Sharing Capabilities: Multiple Controllers 00:25:42.814 Size (in LBAs): 131072 (0GiB) 00:25:42.814 Capacity (in LBAs): 131072 (0GiB) 00:25:42.814 Utilization (in LBAs): 131072 (0GiB) 00:25:42.814 NGUID: ABCDEF0123456789ABCDEF0123456789 00:25:42.814 EUI64: ABCDEF0123456789 00:25:42.814 UUID: eb097cfd-d3b1-4823-b71f-e39969d2825c 00:25:42.814 Thin Provisioning: Not Supported 00:25:42.814 Per-NS Atomic Units: Yes 00:25:42.814 Atomic Boundary Size (Normal): 0 00:25:42.814 Atomic Boundary Size (PFail): 0 00:25:42.814 Atomic Boundary Offset: 0 00:25:42.814 Maximum Single Source Range Length: 65535 00:25:42.814 Maximum Copy Length: 65535 00:25:42.814 Maximum Source Range Count: 1 00:25:42.814 NGUID/EUI64 Never Reused: No 00:25:42.814 Namespace Write Protected: No 00:25:42.814 Number of LBA Formats: 1 00:25:42.814 Current LBA Format: LBA Format #00 00:25:42.814 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:42.814 00:25:42.814 01:46:55 -- host/identify.sh@51 -- # sync 00:25:42.814 01:46:55 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:42.814 01:46:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:42.814 
01:46:55 -- common/autotest_common.sh@10 -- # set +x 00:25:42.814 01:46:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:42.814 01:46:55 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:25:42.814 01:46:55 -- host/identify.sh@56 -- # nvmftestfini 00:25:42.814 01:46:55 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:42.814 01:46:55 -- nvmf/common.sh@116 -- # sync 00:25:42.814 01:46:55 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:42.814 01:46:55 -- nvmf/common.sh@119 -- # set +e 00:25:42.814 01:46:55 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:42.814 01:46:55 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:42.814 rmmod nvme_tcp 00:25:42.814 rmmod nvme_fabrics 00:25:42.814 rmmod nvme_keyring 00:25:42.814 01:46:55 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:42.814 01:46:55 -- nvmf/common.sh@123 -- # set -e 00:25:42.814 01:46:55 -- nvmf/common.sh@124 -- # return 0 00:25:42.814 01:46:55 -- nvmf/common.sh@477 -- # '[' -n 3862298 ']' 00:25:42.814 01:46:55 -- nvmf/common.sh@478 -- # killprocess 3862298 00:25:42.814 01:46:55 -- common/autotest_common.sh@926 -- # '[' -z 3862298 ']' 00:25:42.814 01:46:55 -- common/autotest_common.sh@930 -- # kill -0 3862298 00:25:42.814 01:46:55 -- common/autotest_common.sh@931 -- # uname 00:25:42.814 01:46:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:42.814 01:46:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3862298 00:25:42.814 01:46:55 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:42.814 01:46:55 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:42.814 01:46:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3862298' 00:25:42.814 killing process with pid 3862298 00:25:42.814 01:46:55 -- common/autotest_common.sh@945 -- # kill 3862298 00:25:42.814 [2024-07-23 01:46:55.864463] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is 
deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:25:42.814 01:46:55 -- common/autotest_common.sh@950 -- # wait 3862298 00:25:43.074 01:46:56 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:43.074 01:46:56 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:43.074 01:46:56 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:43.074 01:46:56 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:43.074 01:46:56 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:43.074 01:46:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:43.074 01:46:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:43.074 01:46:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:45.664 01:46:58 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:45.664 00:25:45.664 real 0m6.093s 00:25:45.664 user 0m7.690s 00:25:45.664 sys 0m1.887s 00:25:45.664 01:46:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:45.664 01:46:58 -- common/autotest_common.sh@10 -- # set +x 00:25:45.664 ************************************ 00:25:45.664 END TEST nvmf_identify 00:25:45.664 ************************************ 00:25:45.664 01:46:58 -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:25:45.664 01:46:58 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:45.664 01:46:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:45.664 01:46:58 -- common/autotest_common.sh@10 -- # set +x 00:25:45.664 ************************************ 00:25:45.664 START TEST nvmf_perf 00:25:45.664 ************************************ 00:25:45.664 01:46:58 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:25:45.664 * Looking for test storage... 
00:25:45.664 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:45.664 01:46:58 -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:45.664 01:46:58 -- nvmf/common.sh@7 -- # uname -s 00:25:45.664 01:46:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:45.664 01:46:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:45.664 01:46:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:45.664 01:46:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:45.664 01:46:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:45.664 01:46:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:45.664 01:46:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:45.664 01:46:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:45.664 01:46:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:45.664 01:46:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:45.664 01:46:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:45.664 01:46:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:45.664 01:46:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:45.664 01:46:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:45.664 01:46:58 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:45.664 01:46:58 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:45.664 01:46:58 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:45.664 01:46:58 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:45.664 01:46:58 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:45.664 01:46:58 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:45.664 01:46:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:45.664 01:46:58 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:45.664 01:46:58 -- paths/export.sh@5 -- # export PATH 00:25:45.664 01:46:58 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:45.664 01:46:58 -- nvmf/common.sh@46 -- # : 0 00:25:45.664 01:46:58 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:45.664 01:46:58 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:45.664 01:46:58 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:45.664 01:46:58 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:45.664 01:46:58 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:45.664 01:46:58 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:45.664 01:46:58 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:45.664 01:46:58 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:45.664 01:46:58 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:45.664 01:46:58 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:45.664 01:46:58 -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:45.664 01:46:58 -- host/perf.sh@17 -- # nvmftestinit 00:25:45.664 01:46:58 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:45.664 01:46:58 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:45.664 01:46:58 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:45.664 01:46:58 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:45.664 01:46:58 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:45.664 01:46:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:45.664 01:46:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
00:25:45.664 01:46:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:45.665 01:46:58 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:45.665 01:46:58 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:45.665 01:46:58 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:45.665 01:46:58 -- common/autotest_common.sh@10 -- # set +x 00:25:47.043 01:47:00 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:47.043 01:47:00 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:47.043 01:47:00 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:47.043 01:47:00 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:47.043 01:47:00 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:47.043 01:47:00 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:47.043 01:47:00 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:47.043 01:47:00 -- nvmf/common.sh@294 -- # net_devs=() 00:25:47.043 01:47:00 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:47.043 01:47:00 -- nvmf/common.sh@295 -- # e810=() 00:25:47.043 01:47:00 -- nvmf/common.sh@295 -- # local -ga e810 00:25:47.043 01:47:00 -- nvmf/common.sh@296 -- # x722=() 00:25:47.043 01:47:00 -- nvmf/common.sh@296 -- # local -ga x722 00:25:47.043 01:47:00 -- nvmf/common.sh@297 -- # mlx=() 00:25:47.043 01:47:00 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:47.043 01:47:00 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:47.043 01:47:00 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:47.043 01:47:00 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:47.043 01:47:00 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:47.043 01:47:00 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:47.043 01:47:00 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:47.043 01:47:00 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:47.043 01:47:00 -- 
nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:47.043 01:47:00 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:47.043 01:47:00 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:47.043 01:47:00 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:47.043 01:47:00 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:47.043 01:47:00 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:47.043 01:47:00 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:47.043 01:47:00 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:47.043 01:47:00 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:47.043 01:47:00 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:47.043 01:47:00 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:47.043 01:47:00 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:47.043 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:47.043 01:47:00 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:47.043 01:47:00 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:47.043 01:47:00 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:47.043 01:47:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:47.043 01:47:00 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:47.043 01:47:00 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:47.043 01:47:00 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:47.043 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:47.043 01:47:00 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:47.043 01:47:00 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:47.043 01:47:00 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:47.043 01:47:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:47.043 01:47:00 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:47.043 01:47:00 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:47.043 
01:47:00 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:47.043 01:47:00 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:47.043 01:47:00 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:47.043 01:47:00 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:47.043 01:47:00 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:47.043 01:47:00 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:47.043 01:47:00 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:47.043 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:47.043 01:47:00 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:47.043 01:47:00 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:47.043 01:47:00 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:47.043 01:47:00 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:47.043 01:47:00 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:47.043 01:47:00 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:47.043 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:47.043 01:47:00 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:47.043 01:47:00 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:47.043 01:47:00 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:47.043 01:47:00 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:47.043 01:47:00 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:47.043 01:47:00 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:47.043 01:47:00 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:47.043 01:47:00 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:47.043 01:47:00 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:47.043 01:47:00 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:47.043 01:47:00 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:47.043 01:47:00 -- 
nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:47.043 01:47:00 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:47.043 01:47:00 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:47.043 01:47:00 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:47.043 01:47:00 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:47.043 01:47:00 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:47.043 01:47:00 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:47.043 01:47:00 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:47.043 01:47:00 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:47.043 01:47:00 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:47.043 01:47:00 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:47.043 01:47:00 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:47.301 01:47:00 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:47.301 01:47:00 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:47.301 01:47:00 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:47.301 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:47.301 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.240 ms 00:25:47.301 00:25:47.301 --- 10.0.0.2 ping statistics --- 00:25:47.301 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:47.301 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:25:47.301 01:47:00 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:47.301 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:47.301 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms 00:25:47.301 00:25:47.301 --- 10.0.0.1 ping statistics --- 00:25:47.301 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:47.301 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:25:47.301 01:47:00 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:47.301 01:47:00 -- nvmf/common.sh@410 -- # return 0 00:25:47.301 01:47:00 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:47.301 01:47:00 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:47.301 01:47:00 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:47.301 01:47:00 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:47.301 01:47:00 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:47.301 01:47:00 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:47.301 01:47:00 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:47.301 01:47:00 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:25:47.301 01:47:00 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:47.301 01:47:00 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:47.301 01:47:00 -- common/autotest_common.sh@10 -- # set +x 00:25:47.301 01:47:00 -- nvmf/common.sh@469 -- # nvmfpid=3864411 00:25:47.302 01:47:00 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:47.302 01:47:00 -- nvmf/common.sh@470 -- # waitforlisten 3864411 00:25:47.302 01:47:00 -- common/autotest_common.sh@819 -- # '[' -z 3864411 ']' 00:25:47.302 01:47:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:47.302 01:47:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:47.302 01:47:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:25:47.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:47.302 01:47:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:47.302 01:47:00 -- common/autotest_common.sh@10 -- # set +x 00:25:47.302 [2024-07-23 01:47:00.259868] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:25:47.302 [2024-07-23 01:47:00.259951] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:47.302 EAL: No free 2048 kB hugepages reported on node 1 00:25:47.302 [2024-07-23 01:47:00.323321] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:47.560 [2024-07-23 01:47:00.414784] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:47.560 [2024-07-23 01:47:00.414932] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:47.560 [2024-07-23 01:47:00.414950] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:47.560 [2024-07-23 01:47:00.414963] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:47.560 [2024-07-23 01:47:00.415029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:47.561 [2024-07-23 01:47:00.415187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:47.561 [2024-07-23 01:47:00.415271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:47.561 [2024-07-23 01:47:00.415274] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:48.130 01:47:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:48.130 01:47:01 -- common/autotest_common.sh@852 -- # return 0 00:25:48.130 01:47:01 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:48.130 01:47:01 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:48.130 01:47:01 -- common/autotest_common.sh@10 -- # set +x 00:25:48.388 01:47:01 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:48.388 01:47:01 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:25:48.388 01:47:01 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:25:51.668 01:47:04 -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:25:51.668 01:47:04 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:25:51.668 01:47:04 -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:25:51.668 01:47:04 -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:25:51.926 01:47:04 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:25:51.926 01:47:04 -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:25:51.926 01:47:04 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:25:51.926 01:47:04 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:25:51.926 01:47:04 -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_transport -t tcp -o 00:25:52.183 [2024-07-23 01:47:05.068671] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:52.183 01:47:05 -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:52.440 01:47:05 -- host/perf.sh@45 -- # for bdev in $bdevs 00:25:52.440 01:47:05 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:52.697 01:47:05 -- host/perf.sh@45 -- # for bdev in $bdevs 00:25:52.697 01:47:05 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:25:52.955 01:47:05 -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:52.955 [2024-07-23 01:47:06.012255] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:52.955 01:47:06 -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:53.212 01:47:06 -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:25:53.212 01:47:06 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:25:53.212 01:47:06 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:25:53.212 01:47:06 -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:25:54.589 Initializing NVMe Controllers 00:25:54.589 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:25:54.589 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:25:54.589 Initialization complete. Launching workers. 
00:25:54.589 ======================================================== 00:25:54.589 Latency(us) 00:25:54.589 Device Information : IOPS MiB/s Average min max 00:25:54.589 PCIE (0000:88:00.0) NSID 1 from core 0: 86408.53 337.53 369.69 28.44 6280.25 00:25:54.589 ======================================================== 00:25:54.589 Total : 86408.53 337.53 369.69 28.44 6280.25 00:25:54.589 00:25:54.589 01:47:07 -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:54.589 EAL: No free 2048 kB hugepages reported on node 1 00:25:55.964 Initializing NVMe Controllers 00:25:55.964 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:55.964 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:55.964 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:55.964 Initialization complete. Launching workers. 
00:25:55.964 ======================================================== 00:25:55.964 Latency(us) 00:25:55.964 Device Information : IOPS MiB/s Average min max 00:25:55.964 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 77.72 0.30 12893.19 243.29 45051.97 00:25:55.964 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 41.85 0.16 24083.06 5975.06 47910.13 00:25:55.964 ======================================================== 00:25:55.964 Total : 119.57 0.47 16809.64 243.29 47910.13 00:25:55.964 00:25:55.964 01:47:08 -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:55.964 EAL: No free 2048 kB hugepages reported on node 1 00:25:57.346 Initializing NVMe Controllers 00:25:57.346 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:57.346 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:57.346 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:57.346 Initialization complete. Launching workers. 
00:25:57.346 ======================================================== 00:25:57.346 Latency(us) 00:25:57.346 Device Information : IOPS MiB/s Average min max 00:25:57.346 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8455.99 33.03 3798.15 497.71 8284.12 00:25:57.346 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3981.00 15.55 8074.16 6364.43 15757.70 00:25:57.346 ======================================================== 00:25:57.346 Total : 12436.99 48.58 5166.87 497.71 15757.70 00:25:57.346 00:25:57.346 01:47:10 -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:25:57.346 01:47:10 -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:25:57.346 01:47:10 -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:57.346 EAL: No free 2048 kB hugepages reported on node 1 00:25:59.881 Initializing NVMe Controllers 00:25:59.881 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:59.881 Controller IO queue size 128, less than required. 00:25:59.881 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:59.881 Controller IO queue size 128, less than required. 00:25:59.881 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:59.881 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:59.881 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:59.881 Initialization complete. Launching workers. 
00:25:59.881 ======================================================== 00:25:59.881 Latency(us) 00:25:59.881 Device Information : IOPS MiB/s Average min max 00:25:59.881 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 950.95 237.74 137553.44 66733.38 220891.68 00:25:59.881 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 565.08 141.27 232635.88 128063.40 360932.37 00:25:59.881 ======================================================== 00:25:59.881 Total : 1516.03 379.01 172994.11 66733.38 360932.37 00:25:59.881 00:25:59.881 01:47:12 -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:25:59.881 EAL: No free 2048 kB hugepages reported on node 1 00:26:00.141 No valid NVMe controllers or AIO or URING devices found 00:26:00.141 Initializing NVMe Controllers 00:26:00.141 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:00.141 Controller IO queue size 128, less than required. 00:26:00.141 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:00.141 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:26:00.141 Controller IO queue size 128, less than required. 00:26:00.141 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:00.141 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:26:00.141 WARNING: Some requested NVMe devices were skipped 00:26:00.141 01:47:13 -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:26:00.141 EAL: No free 2048 kB hugepages reported on node 1 00:26:02.682 Initializing NVMe Controllers 00:26:02.682 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:02.682 Controller IO queue size 128, less than required. 00:26:02.682 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:02.682 Controller IO queue size 128, less than required. 00:26:02.682 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:02.682 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:02.682 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:02.682 Initialization complete. Launching workers. 
00:26:02.682 00:26:02.682 ==================== 00:26:02.682 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:26:02.682 TCP transport: 00:26:02.682 polls: 26062 00:26:02.682 idle_polls: 8275 00:26:02.682 sock_completions: 17787 00:26:02.682 nvme_completions: 3630 00:26:02.682 submitted_requests: 5624 00:26:02.682 queued_requests: 1 00:26:02.682 00:26:02.682 ==================== 00:26:02.682 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:26:02.682 TCP transport: 00:26:02.682 polls: 26343 00:26:02.682 idle_polls: 9443 00:26:02.682 sock_completions: 16900 00:26:02.682 nvme_completions: 3953 00:26:02.682 submitted_requests: 6141 00:26:02.682 queued_requests: 1 00:26:02.682 ======================================================== 00:26:02.682 Latency(us) 00:26:02.682 Device Information : IOPS MiB/s Average min max 00:26:02.682 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 969.61 242.40 135444.05 81456.70 196741.27 00:26:02.682 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1050.50 262.62 123823.87 54955.80 181118.72 00:26:02.682 ======================================================== 00:26:02.682 Total : 2020.11 505.03 129401.33 54955.80 196741.27 00:26:02.682 00:26:02.682 01:47:15 -- host/perf.sh@66 -- # sync 00:26:02.682 01:47:15 -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:02.941 01:47:15 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:26:02.941 01:47:15 -- host/perf.sh@71 -- # '[' -n 0000:88:00.0 ']' 00:26:02.941 01:47:15 -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:26:06.268 01:47:19 -- host/perf.sh@72 -- # ls_guid=33ff3bbe-0cbc-445a-a925-d2a6a5f45026 00:26:06.268 01:47:19 -- host/perf.sh@73 -- # get_lvs_free_mb 33ff3bbe-0cbc-445a-a925-d2a6a5f45026 
00:26:06.268 01:47:19 -- common/autotest_common.sh@1343 -- # local lvs_uuid=33ff3bbe-0cbc-445a-a925-d2a6a5f45026 00:26:06.268 01:47:19 -- common/autotest_common.sh@1344 -- # local lvs_info 00:26:06.268 01:47:19 -- common/autotest_common.sh@1345 -- # local fc 00:26:06.268 01:47:19 -- common/autotest_common.sh@1346 -- # local cs 00:26:06.268 01:47:19 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:06.526 01:47:19 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:26:06.526 { 00:26:06.526 "uuid": "33ff3bbe-0cbc-445a-a925-d2a6a5f45026", 00:26:06.526 "name": "lvs_0", 00:26:06.526 "base_bdev": "Nvme0n1", 00:26:06.526 "total_data_clusters": 238234, 00:26:06.526 "free_clusters": 238234, 00:26:06.526 "block_size": 512, 00:26:06.526 "cluster_size": 4194304 00:26:06.526 } 00:26:06.526 ]' 00:26:06.526 01:47:19 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="33ff3bbe-0cbc-445a-a925-d2a6a5f45026") .free_clusters' 00:26:06.784 01:47:19 -- common/autotest_common.sh@1348 -- # fc=238234 00:26:06.784 01:47:19 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="33ff3bbe-0cbc-445a-a925-d2a6a5f45026") .cluster_size' 00:26:06.784 01:47:19 -- common/autotest_common.sh@1349 -- # cs=4194304 00:26:06.784 01:47:19 -- common/autotest_common.sh@1352 -- # free_mb=952936 00:26:06.784 01:47:19 -- common/autotest_common.sh@1353 -- # echo 952936 00:26:06.784 952936 00:26:06.784 01:47:19 -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:26:06.784 01:47:19 -- host/perf.sh@78 -- # free_mb=20480 00:26:06.784 01:47:19 -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 33ff3bbe-0cbc-445a-a925-d2a6a5f45026 lbd_0 20480 00:26:07.042 01:47:20 -- host/perf.sh@80 -- # lb_guid=102162fb-97ad-4a92-9bbc-25c23b7d9133 00:26:07.042 01:47:20 -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore 102162fb-97ad-4a92-9bbc-25c23b7d9133 lvs_n_0 00:26:07.978 01:47:20 -- host/perf.sh@83 -- # ls_nested_guid=543ff337-2d9c-4483-87b1-bbcdfc7e18e2 00:26:07.978 01:47:20 -- host/perf.sh@84 -- # get_lvs_free_mb 543ff337-2d9c-4483-87b1-bbcdfc7e18e2 00:26:07.978 01:47:20 -- common/autotest_common.sh@1343 -- # local lvs_uuid=543ff337-2d9c-4483-87b1-bbcdfc7e18e2 00:26:07.978 01:47:20 -- common/autotest_common.sh@1344 -- # local lvs_info 00:26:07.978 01:47:20 -- common/autotest_common.sh@1345 -- # local fc 00:26:07.978 01:47:20 -- common/autotest_common.sh@1346 -- # local cs 00:26:07.978 01:47:20 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:07.978 01:47:21 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:26:07.978 { 00:26:07.978 "uuid": "33ff3bbe-0cbc-445a-a925-d2a6a5f45026", 00:26:07.978 "name": "lvs_0", 00:26:07.978 "base_bdev": "Nvme0n1", 00:26:07.978 "total_data_clusters": 238234, 00:26:07.978 "free_clusters": 233114, 00:26:07.978 "block_size": 512, 00:26:07.978 "cluster_size": 4194304 00:26:07.978 }, 00:26:07.978 { 00:26:07.978 "uuid": "543ff337-2d9c-4483-87b1-bbcdfc7e18e2", 00:26:07.978 "name": "lvs_n_0", 00:26:07.978 "base_bdev": "102162fb-97ad-4a92-9bbc-25c23b7d9133", 00:26:07.978 "total_data_clusters": 5114, 00:26:07.978 "free_clusters": 5114, 00:26:07.978 "block_size": 512, 00:26:07.978 "cluster_size": 4194304 00:26:07.978 } 00:26:07.978 ]' 00:26:07.978 01:47:21 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="543ff337-2d9c-4483-87b1-bbcdfc7e18e2") .free_clusters' 00:26:08.236 01:47:21 -- common/autotest_common.sh@1348 -- # fc=5114 00:26:08.236 01:47:21 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="543ff337-2d9c-4483-87b1-bbcdfc7e18e2") .cluster_size' 00:26:08.236 01:47:21 -- common/autotest_common.sh@1349 -- # cs=4194304 00:26:08.236 01:47:21 -- common/autotest_common.sh@1352 -- # free_mb=20456 00:26:08.236 01:47:21 
-- common/autotest_common.sh@1353 -- # echo 20456 00:26:08.236 20456 00:26:08.236 01:47:21 -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:26:08.236 01:47:21 -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 543ff337-2d9c-4483-87b1-bbcdfc7e18e2 lbd_nest_0 20456 00:26:08.495 01:47:21 -- host/perf.sh@88 -- # lb_nested_guid=8b4c6b7f-3099-4374-b663-ecd64c609f61 00:26:08.495 01:47:21 -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:08.495 01:47:21 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:26:08.495 01:47:21 -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 8b4c6b7f-3099-4374-b663-ecd64c609f61 00:26:08.753 01:47:21 -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:09.011 01:47:22 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:26:09.011 01:47:22 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:26:09.011 01:47:22 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:26:09.011 01:47:22 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:09.011 01:47:22 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:09.269 EAL: No free 2048 kB hugepages reported on node 1 00:26:21.484 Initializing NVMe Controllers 00:26:21.484 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:21.484 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:21.484 Initialization complete. Launching workers. 
00:26:21.484 ======================================================== 00:26:21.484 Latency(us) 00:26:21.484 Device Information : IOPS MiB/s Average min max 00:26:21.484 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 45.59 0.02 21939.61 222.49 47883.27 00:26:21.484 ======================================================== 00:26:21.485 Total : 45.59 0.02 21939.61 222.49 47883.27 00:26:21.485 00:26:21.485 01:47:32 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:21.485 01:47:32 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:21.485 EAL: No free 2048 kB hugepages reported on node 1 00:26:31.460 Initializing NVMe Controllers 00:26:31.460 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:31.460 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:31.460 Initialization complete. Launching workers. 
00:26:31.460 ======================================================== 00:26:31.460 Latency(us) 00:26:31.460 Device Information : IOPS MiB/s Average min max 00:26:31.460 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 77.20 9.65 12990.97 4971.53 51867.92 00:26:31.460 ======================================================== 00:26:31.460 Total : 77.20 9.65 12990.97 4971.53 51867.92 00:26:31.460 00:26:31.460 01:47:42 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:26:31.460 01:47:42 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:31.460 01:47:42 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:31.460 EAL: No free 2048 kB hugepages reported on node 1 00:26:41.444 Initializing NVMe Controllers 00:26:41.444 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:41.444 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:41.444 Initialization complete. Launching workers. 
00:26:41.444 ======================================================== 00:26:41.444 Latency(us) 00:26:41.444 Device Information : IOPS MiB/s Average min max 00:26:41.444 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7530.26 3.68 4249.25 311.69 12026.01 00:26:41.444 ======================================================== 00:26:41.444 Total : 7530.26 3.68 4249.25 311.69 12026.01 00:26:41.444 00:26:41.444 01:47:52 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:41.444 01:47:53 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:41.444 EAL: No free 2048 kB hugepages reported on node 1 00:26:51.430 Initializing NVMe Controllers 00:26:51.430 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:51.430 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:51.431 Initialization complete. Launching workers. 
00:26:51.431 ======================================================== 00:26:51.431 Latency(us) 00:26:51.431 Device Information : IOPS MiB/s Average min max 00:26:51.431 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1880.43 235.05 17034.94 1402.47 57340.95 00:26:51.431 ======================================================== 00:26:51.431 Total : 1880.43 235.05 17034.94 1402.47 57340.95 00:26:51.431 00:26:51.431 01:48:03 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:26:51.431 01:48:03 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:51.431 01:48:03 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:51.431 EAL: No free 2048 kB hugepages reported on node 1 00:27:01.449 Initializing NVMe Controllers 00:27:01.449 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:01.449 Controller IO queue size 128, less than required. 00:27:01.449 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:01.449 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:01.449 Initialization complete. Launching workers. 
00:27:01.449 ======================================================== 00:27:01.449 Latency(us) 00:27:01.449 Device Information : IOPS MiB/s Average min max 00:27:01.449 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11881.20 5.80 10777.12 1704.30 30823.67 00:27:01.449 ======================================================== 00:27:01.449 Total : 11881.20 5.80 10777.12 1704.30 30823.67 00:27:01.449 00:27:01.449 01:48:13 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:27:01.449 01:48:13 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:01.449 EAL: No free 2048 kB hugepages reported on node 1 00:27:11.436 Initializing NVMe Controllers 00:27:11.436 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:11.436 Controller IO queue size 128, less than required. 00:27:11.436 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:11.436 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:11.436 Initialization complete. Launching workers. 
00:27:11.436 ======================================================== 00:27:11.436 Latency(us) 00:27:11.436 Device Information : IOPS MiB/s Average min max 00:27:11.436 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1215.45 151.93 105852.78 26791.69 216037.07 00:27:11.436 ======================================================== 00:27:11.436 Total : 1215.45 151.93 105852.78 26791.69 216037.07 00:27:11.436 00:27:11.436 01:48:24 -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:11.695 01:48:24 -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 8b4c6b7f-3099-4374-b663-ecd64c609f61 00:27:12.260 01:48:25 -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:27:12.518 01:48:25 -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 102162fb-97ad-4a92-9bbc-25c23b7d9133 00:27:12.776 01:48:25 -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:27:13.034 01:48:26 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:27:13.034 01:48:26 -- host/perf.sh@114 -- # nvmftestfini 00:27:13.034 01:48:26 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:13.034 01:48:26 -- nvmf/common.sh@116 -- # sync 00:27:13.034 01:48:26 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:13.034 01:48:26 -- nvmf/common.sh@119 -- # set +e 00:27:13.034 01:48:26 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:13.034 01:48:26 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:13.034 rmmod nvme_tcp 00:27:13.034 rmmod nvme_fabrics 00:27:13.034 rmmod nvme_keyring 00:27:13.034 01:48:26 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:13.034 01:48:26 -- nvmf/common.sh@123 -- # set -e 00:27:13.034 01:48:26 -- 
nvmf/common.sh@124 -- # return 0 00:27:13.034 01:48:26 -- nvmf/common.sh@477 -- # '[' -n 3864411 ']' 00:27:13.034 01:48:26 -- nvmf/common.sh@478 -- # killprocess 3864411 00:27:13.034 01:48:26 -- common/autotest_common.sh@926 -- # '[' -z 3864411 ']' 00:27:13.034 01:48:26 -- common/autotest_common.sh@930 -- # kill -0 3864411 00:27:13.034 01:48:26 -- common/autotest_common.sh@931 -- # uname 00:27:13.034 01:48:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:13.034 01:48:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3864411 00:27:13.034 01:48:26 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:13.034 01:48:26 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:13.034 01:48:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3864411' 00:27:13.034 killing process with pid 3864411 00:27:13.034 01:48:26 -- common/autotest_common.sh@945 -- # kill 3864411 00:27:13.034 01:48:26 -- common/autotest_common.sh@950 -- # wait 3864411 00:27:14.938 01:48:27 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:14.938 01:48:27 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:14.938 01:48:27 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:14.938 01:48:27 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:14.938 01:48:27 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:27:14.938 01:48:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:14.938 01:48:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:14.938 01:48:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:16.843 01:48:29 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:27:16.843 00:27:16.843 real 1m31.555s 00:27:16.843 user 5m35.603s 00:27:16.843 sys 0m16.647s 00:27:16.843 01:48:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:16.843 01:48:29 -- common/autotest_common.sh@10 -- # set +x 00:27:16.843 
************************************ 00:27:16.843 END TEST nvmf_perf 00:27:16.843 ************************************ 00:27:16.843 01:48:29 -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:27:16.843 01:48:29 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:16.843 01:48:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:16.843 01:48:29 -- common/autotest_common.sh@10 -- # set +x 00:27:16.843 ************************************ 00:27:16.843 START TEST nvmf_fio_host 00:27:16.843 ************************************ 00:27:16.843 01:48:29 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:27:16.843 * Looking for test storage... 00:27:16.843 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:16.843 01:48:29 -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:16.843 01:48:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:16.843 01:48:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:16.843 01:48:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:16.843 01:48:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:16.843 01:48:29 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:16.843 01:48:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:16.843 01:48:29 -- paths/export.sh@5 -- # export PATH 00:27:16.843 01:48:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:16.843 01:48:29 -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:16.843 01:48:29 -- nvmf/common.sh@7 -- # uname -s 00:27:16.843 01:48:29 -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:27:16.843 01:48:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:16.843 01:48:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:16.843 01:48:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:16.843 01:48:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:16.843 01:48:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:16.843 01:48:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:16.843 01:48:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:16.843 01:48:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:16.843 01:48:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:16.843 01:48:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:16.843 01:48:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:16.843 01:48:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:16.843 01:48:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:16.843 01:48:29 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:16.843 01:48:29 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:16.843 01:48:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:16.843 01:48:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:16.843 01:48:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:16.844 01:48:29 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:16.844 01:48:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:16.844 01:48:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:16.844 01:48:29 -- paths/export.sh@5 -- # export PATH 00:27:16.844 01:48:29 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:16.844 01:48:29 -- nvmf/common.sh@46 -- # : 0 00:27:16.844 01:48:29 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:16.844 01:48:29 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:16.844 01:48:29 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:16.844 01:48:29 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:16.844 01:48:29 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:16.844 01:48:29 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:16.844 01:48:29 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:16.844 01:48:29 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:16.844 01:48:29 -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:16.844 01:48:29 -- host/fio.sh@14 -- # nvmftestinit 00:27:16.844 01:48:29 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:27:16.844 01:48:29 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:16.844 01:48:29 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:16.844 01:48:29 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:16.844 01:48:29 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:16.844 01:48:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:16.844 01:48:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:16.844 01:48:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:27:16.844 01:48:29 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:27:16.844 01:48:29 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:27:16.844 01:48:29 -- nvmf/common.sh@284 -- # xtrace_disable 00:27:16.844 01:48:29 -- common/autotest_common.sh@10 -- # set +x 00:27:18.745 01:48:31 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:18.745 01:48:31 -- nvmf/common.sh@290 -- # pci_devs=() 00:27:18.745 01:48:31 -- nvmf/common.sh@290 -- # local -a pci_devs 00:27:18.745 01:48:31 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:27:18.745 01:48:31 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:27:18.745 01:48:31 -- nvmf/common.sh@292 -- # pci_drivers=() 00:27:18.745 01:48:31 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:27:18.745 01:48:31 -- nvmf/common.sh@294 -- # net_devs=() 00:27:18.745 01:48:31 -- nvmf/common.sh@294 -- # local -ga net_devs 00:27:18.745 01:48:31 -- nvmf/common.sh@295 -- # e810=() 00:27:18.745 01:48:31 -- nvmf/common.sh@295 -- # local -ga e810 00:27:18.745 01:48:31 -- nvmf/common.sh@296 -- # x722=() 00:27:18.745 01:48:31 -- nvmf/common.sh@296 -- # local -ga x722 00:27:18.745 01:48:31 -- nvmf/common.sh@297 -- # mlx=() 00:27:18.745 01:48:31 -- nvmf/common.sh@297 -- # local -ga mlx 00:27:18.745 01:48:31 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:18.745 01:48:31 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:18.745 01:48:31 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:18.745 01:48:31 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:18.745 01:48:31 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:18.745 01:48:31 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:18.745 01:48:31 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:18.745 01:48:31 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 
00:27:18.745 01:48:31 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:18.745 01:48:31 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:18.745 01:48:31 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:18.745 01:48:31 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:27:18.745 01:48:31 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:27:18.745 01:48:31 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:27:18.745 01:48:31 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:27:18.745 01:48:31 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:27:18.745 01:48:31 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:27:18.745 01:48:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:18.745 01:48:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:18.745 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:18.745 01:48:31 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:18.745 01:48:31 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:18.745 01:48:31 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:18.745 01:48:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:18.745 01:48:31 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:18.745 01:48:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:18.745 01:48:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:18.745 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:18.745 01:48:31 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:18.745 01:48:31 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:18.745 01:48:31 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:18.745 01:48:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:18.745 01:48:31 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:18.745 01:48:31 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:27:18.745 01:48:31 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:27:18.745 
01:48:31 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:27:18.745 01:48:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:18.745 01:48:31 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:18.745 01:48:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:18.745 01:48:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:18.745 01:48:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:18.745 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:18.745 01:48:31 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:18.745 01:48:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:18.745 01:48:31 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:18.745 01:48:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:18.745 01:48:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:18.745 01:48:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:18.745 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:18.745 01:48:31 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:18.745 01:48:31 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:27:18.745 01:48:31 -- nvmf/common.sh@402 -- # is_hw=yes 00:27:18.745 01:48:31 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:27:18.745 01:48:31 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:27:18.745 01:48:31 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:27:18.746 01:48:31 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:18.746 01:48:31 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:18.746 01:48:31 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:18.746 01:48:31 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:27:18.746 01:48:31 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:18.746 01:48:31 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:18.746 01:48:31 -- 
nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:27:18.746 01:48:31 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:18.746 01:48:31 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:18.746 01:48:31 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:27:18.746 01:48:31 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:27:18.746 01:48:31 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:27:18.746 01:48:31 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:18.746 01:48:31 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:18.746 01:48:31 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:18.746 01:48:31 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:27:18.746 01:48:31 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:18.746 01:48:31 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:18.746 01:48:31 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:18.746 01:48:31 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:27:18.746 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:18.746 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.111 ms 00:27:18.746 00:27:18.746 --- 10.0.0.2 ping statistics --- 00:27:18.746 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:18.746 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:27:18.746 01:48:31 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:18.746 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:18.746 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:27:18.746 00:27:18.746 --- 10.0.0.1 ping statistics --- 00:27:18.746 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:18.746 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:27:18.746 01:48:31 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:18.746 01:48:31 -- nvmf/common.sh@410 -- # return 0 00:27:18.746 01:48:31 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:18.746 01:48:31 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:18.746 01:48:31 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:27:18.746 01:48:31 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:27:18.746 01:48:31 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:18.746 01:48:31 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:27:18.746 01:48:31 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:27:19.004 01:48:31 -- host/fio.sh@16 -- # [[ y != y ]] 00:27:19.004 01:48:31 -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:27:19.004 01:48:31 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:19.004 01:48:31 -- common/autotest_common.sh@10 -- # set +x 00:27:19.004 01:48:31 -- host/fio.sh@24 -- # nvmfpid=3877479 00:27:19.004 01:48:31 -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:19.004 01:48:31 -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:19.004 01:48:31 -- host/fio.sh@28 -- # waitforlisten 3877479 00:27:19.004 01:48:31 -- common/autotest_common.sh@819 -- # '[' -z 3877479 ']' 00:27:19.004 01:48:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:19.004 01:48:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:19.004 01:48:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:27:19.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:19.004 01:48:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:19.004 01:48:31 -- common/autotest_common.sh@10 -- # set +x 00:27:19.004 [2024-07-23 01:48:31.900362] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:27:19.004 [2024-07-23 01:48:31.900445] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:19.004 EAL: No free 2048 kB hugepages reported on node 1 00:27:19.004 [2024-07-23 01:48:31.968527] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:19.004 [2024-07-23 01:48:32.056146] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:19.004 [2024-07-23 01:48:32.056300] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:19.004 [2024-07-23 01:48:32.056318] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:19.004 [2024-07-23 01:48:32.056331] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
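The interface plumbing recorded above (nvmf/common.sh, nvmf_tcp_init) is what lets a single host exercise NVMe/TCP over a real wire: the two back-to-back ports of the test NIC (cvl_0_0 and cvl_0_1 here) are split across network namespaces, so traffic between 10.0.0.1 and 10.0.0.2 actually leaves the kernel instead of short-circuiting over loopback. A sketch of that sequence, assuming root privileges and two physically connected ports named as in this log:

```shell
#!/bin/sh
# Sketch of the nvmf_tcp_init steps seen in this log.
# Assumes: run as root; cvl_0_0 and cvl_0_1 are back-to-back ports.
NS=cvl_0_0_ns_spdk

ip -4 addr flush cvl_0_0            # start both ports from a clean state
ip -4 addr flush cvl_0_1

ip netns add "$NS"                  # isolate the target-side port
ip link set cvl_0_0 netns "$NS"

ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target side

ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up

# Allow NVMe/TCP (port 4420) in, then verify both directions.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
```

Because the target port lives in the namespace, the log's later nvmf_tgt launch is wrapped in `ip netns exec cvl_0_0_ns_spdk` (the NVMF_TARGET_NS_CMD prefix), while fio on the initiator side runs in the default namespace.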
00:27:19.004 [2024-07-23 01:48:32.056442] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:19.004 [2024-07-23 01:48:32.056500] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:19.004 [2024-07-23 01:48:32.056574] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:19.004 [2024-07-23 01:48:32.056576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:19.939 01:48:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:19.939 01:48:32 -- common/autotest_common.sh@852 -- # return 0 00:27:19.939 01:48:32 -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:20.197 [2024-07-23 01:48:33.164256] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:20.197 01:48:33 -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:27:20.197 01:48:33 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:20.197 01:48:33 -- common/autotest_common.sh@10 -- # set +x 00:27:20.197 01:48:33 -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:27:20.454 Malloc1 00:27:20.454 01:48:33 -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:20.711 01:48:33 -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:20.969 01:48:33 -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:21.226 [2024-07-23 01:48:34.154278] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:21.226 01:48:34 -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:21.483 01:48:34 -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:27:21.483 01:48:34 -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:21.483 01:48:34 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:21.483 01:48:34 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:27:21.483 01:48:34 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:21.483 01:48:34 -- common/autotest_common.sh@1318 -- # local sanitizers 00:27:21.483 01:48:34 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:21.483 01:48:34 -- common/autotest_common.sh@1320 -- # shift 00:27:21.483 01:48:34 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:27:21.483 01:48:34 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:27:21.484 01:48:34 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:21.484 01:48:34 -- common/autotest_common.sh@1324 -- # grep libasan 00:27:21.484 01:48:34 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:27:21.484 01:48:34 -- common/autotest_common.sh@1324 -- # asan_lib= 00:27:21.484 01:48:34 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:27:21.484 01:48:34 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:27:21.484 01:48:34 -- common/autotest_common.sh@1324 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:21.484 01:48:34 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:27:21.484 01:48:34 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:27:21.484 01:48:34 -- common/autotest_common.sh@1324 -- # asan_lib= 00:27:21.484 01:48:34 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:27:21.484 01:48:34 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:27:21.484 01:48:34 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:21.742 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:27:21.742 fio-3.35 00:27:21.742 Starting 1 thread 00:27:21.742 EAL: No free 2048 kB hugepages reported on node 1 00:27:24.269 00:27:24.269 test: (groupid=0, jobs=1): err= 0: pid=3877976: Tue Jul 23 01:48:36 2024 00:27:24.269 read: IOPS=9240, BW=36.1MiB/s (37.8MB/s)(72.4MiB/2006msec) 00:27:24.269 slat (nsec): min=1950, max=150337, avg=2480.03, stdev=1817.24 00:27:24.269 clat (usec): min=3045, max=13295, avg=7670.96, stdev=573.84 00:27:24.269 lat (usec): min=3079, max=13297, avg=7673.44, stdev=573.73 00:27:24.269 clat percentiles (usec): 00:27:24.269 | 1.00th=[ 6325], 5.00th=[ 6783], 10.00th=[ 6980], 20.00th=[ 7242], 00:27:24.269 | 30.00th=[ 7373], 40.00th=[ 7570], 50.00th=[ 7701], 60.00th=[ 7767], 00:27:24.269 | 70.00th=[ 7963], 80.00th=[ 8094], 90.00th=[ 8356], 95.00th=[ 8586], 00:27:24.269 | 99.00th=[ 8979], 99.50th=[ 9110], 99.90th=[10552], 99.95th=[12518], 00:27:24.269 | 99.99th=[13173] 00:27:24.269 bw ( KiB/s): min=36072, max=37296, per=99.91%, avg=36928.00, stdev=577.00, samples=4 00:27:24.269 iops : min= 9018, max= 9324, avg=9232.00, stdev=144.25, samples=4 00:27:24.269 write: IOPS=9244, 
BW=36.1MiB/s (37.9MB/s)(72.4MiB/2006msec); 0 zone resets 00:27:24.269 slat (usec): min=2, max=142, avg= 2.61, stdev= 1.47 00:27:24.269 clat (usec): min=1412, max=11254, avg=6144.33, stdev=506.26 00:27:24.269 lat (usec): min=1421, max=11256, avg=6146.94, stdev=506.22 00:27:24.269 clat percentiles (usec): 00:27:24.269 | 1.00th=[ 5014], 5.00th=[ 5407], 10.00th=[ 5538], 20.00th=[ 5735], 00:27:24.269 | 30.00th=[ 5932], 40.00th=[ 5997], 50.00th=[ 6128], 60.00th=[ 6259], 00:27:24.269 | 70.00th=[ 6390], 80.00th=[ 6521], 90.00th=[ 6718], 95.00th=[ 6915], 00:27:24.269 | 99.00th=[ 7242], 99.50th=[ 7439], 99.90th=[ 9110], 99.95th=[10159], 00:27:24.269 | 99.99th=[11207] 00:27:24.269 bw ( KiB/s): min=36480, max=37312, per=100.00%, avg=36978.00, stdev=356.00, samples=4 00:27:24.269 iops : min= 9120, max= 9328, avg=9244.50, stdev=89.00, samples=4 00:27:24.269 lat (msec) : 2=0.02%, 4=0.09%, 10=99.79%, 20=0.11% 00:27:24.269 cpu : usr=55.41%, sys=38.50%, ctx=33, majf=0, minf=5 00:27:24.269 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:27:24.269 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:24.269 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:24.269 issued rwts: total=18536,18544,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:24.269 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:24.269 00:27:24.269 Run status group 0 (all jobs): 00:27:24.269 READ: bw=36.1MiB/s (37.8MB/s), 36.1MiB/s-36.1MiB/s (37.8MB/s-37.8MB/s), io=72.4MiB (75.9MB), run=2006-2006msec 00:27:24.269 WRITE: bw=36.1MiB/s (37.9MB/s), 36.1MiB/s-36.1MiB/s (37.9MB/s-37.9MB/s), io=72.4MiB (76.0MB), run=2006-2006msec 00:27:24.269 01:48:36 -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:27:24.269 01:48:36 -- common/autotest_common.sh@1339 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:27:24.269 01:48:36 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:27:24.269 01:48:36 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:24.269 01:48:36 -- common/autotest_common.sh@1318 -- # local sanitizers 00:27:24.269 01:48:36 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:24.269 01:48:36 -- common/autotest_common.sh@1320 -- # shift 00:27:24.269 01:48:36 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:27:24.269 01:48:36 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:27:24.269 01:48:36 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:24.269 01:48:36 -- common/autotest_common.sh@1324 -- # grep libasan 00:27:24.269 01:48:36 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:27:24.269 01:48:36 -- common/autotest_common.sh@1324 -- # asan_lib= 00:27:24.269 01:48:36 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:27:24.269 01:48:36 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:27:24.269 01:48:36 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:24.269 01:48:36 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:27:24.269 01:48:36 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:27:24.269 01:48:36 -- common/autotest_common.sh@1324 -- # asan_lib= 00:27:24.269 01:48:36 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:27:24.269 01:48:36 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 
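The LD_PRELOAD line above is how the suite runs fio against the target without any kernel NVMe device: the fio_plugin helper in autotest_common.sh first probes (via `ldd | grep libasan` / `libclang_rt.asan`, visible in the trace) whether the SPDK plugin was built with AddressSanitizer, so the sanitizer runtime can be preloaded ahead of the plugin, then hands the transport address to fio through `--filename` instead of a block device path. A sketch of that wrapper, assuming the SPDK tree and fio locations used in this log:

```shell
#!/bin/sh
# Sketch of fio_plugin from autotest_common.sh as seen in this log.
# Assumes: SPDK built under $SPDK_DIR; fio installed at /usr/src/fio.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
plugin=$SPDK_DIR/build/fio/spdk_nvme

# If the plugin links a sanitizer runtime, that runtime must appear first
# in LD_PRELOAD, or fio aborts at startup; here the log shows asan_lib
# resolving to empty (no sanitizer build).
asan_lib=$(ldd "$plugin" | awk '/libasan|libclang_rt.asan/ {print $3; exit}')

LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
    "$SPDK_DIR/app/fio/nvme/example_config.fio" \
    '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' \
    --bs=4096   # ioengine=spdk in the job file routes I/O through the plugin
```

The quoted `--filename` value is the SPDK fio plugin's convention for encoding transport type, address family, address, service ID, and namespace into a single "filename" string.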
00:27:24.269 01:48:37 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:27:24.269 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:27:24.269 fio-3.35 00:27:24.269 Starting 1 thread 00:27:24.269 EAL: No free 2048 kB hugepages reported on node 1 00:27:26.855 00:27:26.855 test: (groupid=0, jobs=1): err= 0: pid=3878316: Tue Jul 23 01:48:39 2024 00:27:26.855 read: IOPS=8466, BW=132MiB/s (139MB/s)(266MiB/2007msec) 00:27:26.855 slat (nsec): min=2860, max=96098, avg=3698.38, stdev=1656.70 00:27:26.855 clat (usec): min=2517, max=54890, avg=9029.38, stdev=3648.87 00:27:26.855 lat (usec): min=2520, max=54894, avg=9033.08, stdev=3648.93 00:27:26.855 clat percentiles (usec): 00:27:26.855 | 1.00th=[ 4621], 5.00th=[ 5538], 10.00th=[ 6194], 20.00th=[ 7046], 00:27:26.855 | 30.00th=[ 7701], 40.00th=[ 8291], 50.00th=[ 8848], 60.00th=[ 9241], 00:27:26.855 | 70.00th=[ 9765], 80.00th=[10290], 90.00th=[11469], 95.00th=[12780], 00:27:26.855 | 99.00th=[16057], 99.50th=[44303], 99.90th=[53740], 99.95th=[54264], 00:27:26.855 | 99.99th=[54789] 00:27:26.855 bw ( KiB/s): min=55296, max=76960, per=50.51%, avg=68424.00, stdev=10030.47, samples=4 00:27:26.855 iops : min= 3456, max= 4810, avg=4276.50, stdev=626.90, samples=4 00:27:26.855 write: IOPS=4955, BW=77.4MiB/s (81.2MB/s)(140MiB/1813msec); 0 zone resets 00:27:26.855 slat (usec): min=30, max=123, avg=33.39, stdev= 4.69 00:27:26.855 clat (usec): min=3526, max=57279, avg=10678.03, stdev=3395.28 00:27:26.855 lat (usec): min=3557, max=57315, avg=10711.42, stdev=3395.40 00:27:26.855 clat percentiles (usec): 00:27:26.855 | 1.00th=[ 7046], 5.00th=[ 7963], 10.00th=[ 8455], 20.00th=[ 9110], 00:27:26.855 | 30.00th=[ 9503], 40.00th=[ 9896], 50.00th=[10421], 60.00th=[10814], 00:27:26.855 | 70.00th=[11207], 80.00th=[11731], 
90.00th=[12780], 95.00th=[13829], 00:27:26.855 | 99.00th=[16057], 99.50th=[17695], 99.90th=[55837], 99.95th=[56361], 00:27:26.855 | 99.99th=[57410] 00:27:26.855 bw ( KiB/s): min=56640, max=79872, per=89.85%, avg=71240.00, stdev=10830.01, samples=4 00:27:26.855 iops : min= 3540, max= 4992, avg=4452.50, stdev=676.88, samples=4 00:27:26.855 lat (msec) : 4=0.28%, 10=62.71%, 20=36.51%, 50=0.18%, 100=0.30% 00:27:26.855 cpu : usr=74.33%, sys=22.23%, ctx=23, majf=0, minf=1 00:27:26.855 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:27:26.855 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:26.855 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:26.855 issued rwts: total=16992,8984,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:26.855 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:26.855 00:27:26.855 Run status group 0 (all jobs): 00:27:26.855 READ: bw=132MiB/s (139MB/s), 132MiB/s-132MiB/s (139MB/s-139MB/s), io=266MiB (278MB), run=2007-2007msec 00:27:26.855 WRITE: bw=77.4MiB/s (81.2MB/s), 77.4MiB/s-77.4MiB/s (81.2MB/s-81.2MB/s), io=140MiB (147MB), run=1813-1813msec 00:27:26.855 01:48:39 -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:26.855 01:48:39 -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:27:26.855 01:48:39 -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:27:26.855 01:48:39 -- host/fio.sh@51 -- # get_nvme_bdfs 00:27:26.855 01:48:39 -- common/autotest_common.sh@1498 -- # bdfs=() 00:27:26.855 01:48:39 -- common/autotest_common.sh@1498 -- # local bdfs 00:27:26.855 01:48:39 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:27:26.855 01:48:39 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:27:26.855 01:48:39 -- common/autotest_common.sh@1499 -- # jq -r 
'.config[].params.traddr' 00:27:26.855 01:48:39 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:27:26.856 01:48:39 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:27:26.856 01:48:39 -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 -i 10.0.0.2 00:27:30.141 Nvme0n1 00:27:30.141 01:48:42 -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:27:32.680 01:48:45 -- host/fio.sh@53 -- # ls_guid=5c186767-4cc3-40bd-8dbd-e6df1c4109d1 00:27:32.680 01:48:45 -- host/fio.sh@54 -- # get_lvs_free_mb 5c186767-4cc3-40bd-8dbd-e6df1c4109d1 00:27:32.680 01:48:45 -- common/autotest_common.sh@1343 -- # local lvs_uuid=5c186767-4cc3-40bd-8dbd-e6df1c4109d1 00:27:32.680 01:48:45 -- common/autotest_common.sh@1344 -- # local lvs_info 00:27:32.680 01:48:45 -- common/autotest_common.sh@1345 -- # local fc 00:27:32.680 01:48:45 -- common/autotest_common.sh@1346 -- # local cs 00:27:32.680 01:48:45 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:32.939 01:48:45 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:27:32.939 { 00:27:32.939 "uuid": "5c186767-4cc3-40bd-8dbd-e6df1c4109d1", 00:27:32.939 "name": "lvs_0", 00:27:32.939 "base_bdev": "Nvme0n1", 00:27:32.939 "total_data_clusters": 930, 00:27:32.939 "free_clusters": 930, 00:27:32.939 "block_size": 512, 00:27:32.939 "cluster_size": 1073741824 00:27:32.939 } 00:27:32.939 ]' 00:27:32.939 01:48:45 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="5c186767-4cc3-40bd-8dbd-e6df1c4109d1") .free_clusters' 00:27:32.939 01:48:46 -- common/autotest_common.sh@1348 -- # fc=930 00:27:32.939 01:48:46 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="5c186767-4cc3-40bd-8dbd-e6df1c4109d1") .cluster_size' 00:27:32.939 01:48:46 -- 
common/autotest_common.sh@1349 -- # cs=1073741824 00:27:32.939 01:48:46 -- common/autotest_common.sh@1352 -- # free_mb=952320 00:27:32.939 01:48:46 -- common/autotest_common.sh@1353 -- # echo 952320 00:27:32.939 952320 00:27:32.939 01:48:46 -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:27:33.508 0ed5ebc1-2aaf-4dca-937e-5f0cbe979b31 00:27:33.508 01:48:46 -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:27:33.765 01:48:46 -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:27:34.024 01:48:46 -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:27:34.290 01:48:47 -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:34.290 01:48:47 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:34.290 01:48:47 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:27:34.290 01:48:47 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:34.290 01:48:47 -- common/autotest_common.sh@1318 -- # local sanitizers 00:27:34.290 01:48:47 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:34.290 01:48:47 -- common/autotest_common.sh@1320 -- # shift 00:27:34.290 01:48:47 -- 
common/autotest_common.sh@1322 -- # local asan_lib= 00:27:34.290 01:48:47 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:27:34.290 01:48:47 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:34.290 01:48:47 -- common/autotest_common.sh@1324 -- # grep libasan 00:27:34.290 01:48:47 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:27:34.290 01:48:47 -- common/autotest_common.sh@1324 -- # asan_lib= 00:27:34.290 01:48:47 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:27:34.290 01:48:47 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:27:34.290 01:48:47 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:34.290 01:48:47 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:27:34.290 01:48:47 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:27:34.290 01:48:47 -- common/autotest_common.sh@1324 -- # asan_lib= 00:27:34.290 01:48:47 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:27:34.290 01:48:47 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:27:34.290 01:48:47 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:34.549 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:27:34.549 fio-3.35 00:27:34.549 Starting 1 thread 00:27:34.549 EAL: No free 2048 kB hugepages reported on node 1 00:27:37.077 00:27:37.077 test: (groupid=0, jobs=1): err= 0: pid=3879712: Tue Jul 23 01:48:49 2024 00:27:37.077 read: IOPS=6404, BW=25.0MiB/s (26.2MB/s)(50.2MiB/2008msec) 00:27:37.077 slat (nsec): min=1924, max=147375, avg=2521.53, stdev=1876.52 00:27:37.077 clat 
(usec): min=995, max=170994, avg=11021.89, stdev=11321.55 00:27:37.077 lat (usec): min=998, max=171029, avg=11024.41, stdev=11321.80 00:27:37.077 clat percentiles (msec): 00:27:37.077 | 1.00th=[ 9], 5.00th=[ 9], 10.00th=[ 10], 20.00th=[ 10], 00:27:37.077 | 30.00th=[ 10], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 11], 00:27:37.077 | 70.00th=[ 11], 80.00th=[ 11], 90.00th=[ 12], 95.00th=[ 12], 00:27:37.077 | 99.00th=[ 13], 99.50th=[ 157], 99.90th=[ 171], 99.95th=[ 171], 00:27:37.077 | 99.99th=[ 171] 00:27:37.077 bw ( KiB/s): min=18104, max=28296, per=99.83%, avg=25574.00, stdev=4982.83, samples=4 00:27:37.077 iops : min= 4526, max= 7074, avg=6393.50, stdev=1245.71, samples=4 00:27:37.077 write: IOPS=6404, BW=25.0MiB/s (26.2MB/s)(50.2MiB/2008msec); 0 zone resets 00:27:37.077 slat (nsec): min=2119, max=90241, avg=2664.94, stdev=1463.31 00:27:37.077 clat (usec): min=369, max=169065, avg=8825.45, stdev=10606.25 00:27:37.077 lat (usec): min=372, max=169070, avg=8828.12, stdev=10606.46 00:27:37.077 clat percentiles (msec): 00:27:37.077 | 1.00th=[ 7], 5.00th=[ 7], 10.00th=[ 8], 20.00th=[ 8], 00:27:37.077 | 30.00th=[ 8], 40.00th=[ 8], 50.00th=[ 9], 60.00th=[ 9], 00:27:37.077 | 70.00th=[ 9], 80.00th=[ 9], 90.00th=[ 9], 95.00th=[ 10], 00:27:37.077 | 99.00th=[ 10], 99.50th=[ 14], 99.90th=[ 169], 99.95th=[ 169], 00:27:37.077 | 99.99th=[ 169] 00:27:37.077 bw ( KiB/s): min=19112, max=27904, per=99.97%, avg=25610.00, stdev=4333.44, samples=4 00:27:37.077 iops : min= 4778, max= 6976, avg=6402.50, stdev=1083.36, samples=4 00:27:37.077 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:27:37.077 lat (msec) : 2=0.03%, 4=0.14%, 10=69.07%, 20=30.23%, 250=0.50% 00:27:37.077 cpu : usr=57.40%, sys=38.27%, ctx=42, majf=0, minf=19 00:27:37.077 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:27:37.077 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:37.077 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:37.077 
issued rwts: total=12860,12860,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:37.077 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:37.077 00:27:37.077 Run status group 0 (all jobs): 00:27:37.077 READ: bw=25.0MiB/s (26.2MB/s), 25.0MiB/s-25.0MiB/s (26.2MB/s-26.2MB/s), io=50.2MiB (52.7MB), run=2008-2008msec 00:27:37.077 WRITE: bw=25.0MiB/s (26.2MB/s), 25.0MiB/s-25.0MiB/s (26.2MB/s-26.2MB/s), io=50.2MiB (52.7MB), run=2008-2008msec 00:27:37.077 01:48:49 -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:37.077 01:48:50 -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:27:38.452 01:48:51 -- host/fio.sh@64 -- # ls_nested_guid=ae0d59fe-b09e-4b1a-a82b-9b3562048528 00:27:38.452 01:48:51 -- host/fio.sh@65 -- # get_lvs_free_mb ae0d59fe-b09e-4b1a-a82b-9b3562048528 00:27:38.452 01:48:51 -- common/autotest_common.sh@1343 -- # local lvs_uuid=ae0d59fe-b09e-4b1a-a82b-9b3562048528 00:27:38.452 01:48:51 -- common/autotest_common.sh@1344 -- # local lvs_info 00:27:38.452 01:48:51 -- common/autotest_common.sh@1345 -- # local fc 00:27:38.452 01:48:51 -- common/autotest_common.sh@1346 -- # local cs 00:27:38.452 01:48:51 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:38.452 01:48:51 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:27:38.452 { 00:27:38.452 "uuid": "5c186767-4cc3-40bd-8dbd-e6df1c4109d1", 00:27:38.452 "name": "lvs_0", 00:27:38.452 "base_bdev": "Nvme0n1", 00:27:38.452 "total_data_clusters": 930, 00:27:38.452 "free_clusters": 0, 00:27:38.452 "block_size": 512, 00:27:38.452 "cluster_size": 1073741824 00:27:38.452 }, 00:27:38.452 { 00:27:38.452 "uuid": "ae0d59fe-b09e-4b1a-a82b-9b3562048528", 00:27:38.452 "name": "lvs_n_0", 00:27:38.452 "base_bdev": 
"0ed5ebc1-2aaf-4dca-937e-5f0cbe979b31", 00:27:38.452 "total_data_clusters": 237847, 00:27:38.452 "free_clusters": 237847, 00:27:38.452 "block_size": 512, 00:27:38.452 "cluster_size": 4194304 00:27:38.452 } 00:27:38.452 ]' 00:27:38.452 01:48:51 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="ae0d59fe-b09e-4b1a-a82b-9b3562048528") .free_clusters' 00:27:38.452 01:48:51 -- common/autotest_common.sh@1348 -- # fc=237847 00:27:38.452 01:48:51 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="ae0d59fe-b09e-4b1a-a82b-9b3562048528") .cluster_size' 00:27:38.452 01:48:51 -- common/autotest_common.sh@1349 -- # cs=4194304 00:27:38.452 01:48:51 -- common/autotest_common.sh@1352 -- # free_mb=951388 00:27:38.452 01:48:51 -- common/autotest_common.sh@1353 -- # echo 951388 00:27:38.452 951388 00:27:38.452 01:48:51 -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:27:39.387 0b8dee38-9ffa-4ca4-a2e5-b42425777209 00:27:39.387 01:48:52 -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:27:39.387 01:48:52 -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:27:39.645 01:48:52 -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:27:39.903 01:48:52 -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:39.903 01:48:52 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:39.903 01:48:52 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:27:39.903 01:48:52 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:39.903 01:48:52 -- common/autotest_common.sh@1318 -- # local sanitizers 00:27:39.903 01:48:52 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:39.903 01:48:52 -- common/autotest_common.sh@1320 -- # shift 00:27:39.903 01:48:52 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:27:39.903 01:48:52 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:27:39.903 01:48:52 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:39.903 01:48:52 -- common/autotest_common.sh@1324 -- # grep libasan 00:27:39.903 01:48:52 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:27:39.903 01:48:52 -- common/autotest_common.sh@1324 -- # asan_lib= 00:27:39.903 01:48:52 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:27:39.903 01:48:52 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:27:39.903 01:48:52 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:39.903 01:48:52 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:27:39.903 01:48:52 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:27:39.903 01:48:52 -- common/autotest_common.sh@1324 -- # asan_lib= 00:27:39.903 01:48:52 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:27:39.903 01:48:52 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:27:39.903 01:48:52 -- common/autotest_common.sh@1331 -- # 
/usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:40.161 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:27:40.161 fio-3.35 00:27:40.161 Starting 1 thread 00:27:40.161 EAL: No free 2048 kB hugepages reported on node 1 00:27:42.686 00:27:42.686 test: (groupid=0, jobs=1): err= 0: pid=3880487: Tue Jul 23 01:48:55 2024 00:27:42.686 read: IOPS=5420, BW=21.2MiB/s (22.2MB/s)(42.5MiB/2009msec) 00:27:42.686 slat (nsec): min=1984, max=169790, avg=2604.01, stdev=2599.84 00:27:42.686 clat (usec): min=5273, max=20150, avg=13102.11, stdev=1068.96 00:27:42.686 lat (usec): min=5299, max=20153, avg=13104.71, stdev=1068.83 00:27:42.686 clat percentiles (usec): 00:27:42.686 | 1.00th=[10683], 5.00th=[11469], 10.00th=[11863], 20.00th=[12256], 00:27:42.686 | 30.00th=[12518], 40.00th=[12911], 50.00th=[13042], 60.00th=[13304], 00:27:42.686 | 70.00th=[13698], 80.00th=[13960], 90.00th=[14353], 95.00th=[14746], 00:27:42.686 | 99.00th=[15533], 99.50th=[15664], 99.90th=[18744], 99.95th=[20055], 00:27:42.686 | 99.99th=[20055] 00:27:42.686 bw ( KiB/s): min=20536, max=22152, per=99.84%, avg=21646.00, stdev=755.94, samples=4 00:27:42.686 iops : min= 5134, max= 5538, avg=5411.50, stdev=188.99, samples=4 00:27:42.686 write: IOPS=5403, BW=21.1MiB/s (22.1MB/s)(42.4MiB/2009msec); 0 zone resets 00:27:42.686 slat (usec): min=2, max=129, avg= 2.71, stdev= 1.88 00:27:42.686 clat (usec): min=3091, max=18430, avg=10365.49, stdev=948.51 00:27:42.686 lat (usec): min=3098, max=18433, avg=10368.19, stdev=948.46 00:27:42.686 clat percentiles (usec): 00:27:42.686 | 1.00th=[ 8225], 5.00th=[ 8979], 10.00th=[ 9241], 20.00th=[ 9634], 00:27:42.686 | 30.00th=[ 9896], 40.00th=[10159], 50.00th=[10421], 60.00th=[10552], 00:27:42.686 | 70.00th=[10814], 80.00th=[11076], 90.00th=[11469], 95.00th=[11731], 00:27:42.686 | 
99.00th=[12518], 99.50th=[12780], 99.90th=[17433], 99.95th=[17433], 00:27:42.686 | 99.99th=[18482] 00:27:42.686 bw ( KiB/s): min=21376, max=21992, per=99.89%, avg=21590.00, stdev=284.90, samples=4 00:27:42.686 iops : min= 5344, max= 5498, avg=5397.50, stdev=71.22, samples=4 00:27:42.686 lat (msec) : 4=0.01%, 10=16.83%, 20=83.14%, 50=0.02% 00:27:42.686 cpu : usr=51.54%, sys=43.68%, ctx=123, majf=0, minf=19 00:27:42.686 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:27:42.686 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:42.686 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:42.686 issued rwts: total=10889,10856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:42.686 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:42.686 00:27:42.686 Run status group 0 (all jobs): 00:27:42.686 READ: bw=21.2MiB/s (22.2MB/s), 21.2MiB/s-21.2MiB/s (22.2MB/s-22.2MB/s), io=42.5MiB (44.6MB), run=2009-2009msec 00:27:42.686 WRITE: bw=21.1MiB/s (22.1MB/s), 21.1MiB/s-21.1MiB/s (22.1MB/s-22.1MB/s), io=42.4MiB (44.5MB), run=2009-2009msec 00:27:42.686 01:48:55 -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:27:42.686 01:48:55 -- host/fio.sh@74 -- # sync 00:27:42.686 01:48:55 -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:27:46.871 01:48:59 -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:27:46.871 01:48:59 -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:27:50.157 01:49:02 -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:27:50.157 01:49:02 -- host/fio.sh@80 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:27:52.089 01:49:04 -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:27:52.089 01:49:04 -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:27:52.089 01:49:04 -- host/fio.sh@86 -- # nvmftestfini 00:27:52.089 01:49:04 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:52.089 01:49:04 -- nvmf/common.sh@116 -- # sync 00:27:52.089 01:49:04 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:52.089 01:49:04 -- nvmf/common.sh@119 -- # set +e 00:27:52.089 01:49:04 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:52.089 01:49:04 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:52.090 rmmod nvme_tcp 00:27:52.090 rmmod nvme_fabrics 00:27:52.090 rmmod nvme_keyring 00:27:52.090 01:49:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:52.090 01:49:04 -- nvmf/common.sh@123 -- # set -e 00:27:52.090 01:49:04 -- nvmf/common.sh@124 -- # return 0 00:27:52.090 01:49:04 -- nvmf/common.sh@477 -- # '[' -n 3877479 ']' 00:27:52.090 01:49:04 -- nvmf/common.sh@478 -- # killprocess 3877479 00:27:52.090 01:49:04 -- common/autotest_common.sh@926 -- # '[' -z 3877479 ']' 00:27:52.090 01:49:04 -- common/autotest_common.sh@930 -- # kill -0 3877479 00:27:52.090 01:49:04 -- common/autotest_common.sh@931 -- # uname 00:27:52.090 01:49:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:52.090 01:49:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3877479 00:27:52.090 01:49:04 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:52.090 01:49:04 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:52.090 01:49:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3877479' 00:27:52.090 killing process with pid 3877479 00:27:52.090 01:49:04 -- common/autotest_common.sh@945 -- # kill 3877479 00:27:52.090 01:49:04 -- common/autotest_common.sh@950 -- # wait 3877479 00:27:52.090 01:49:05 -- 
nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:52.090 01:49:05 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:52.090 01:49:05 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:52.090 01:49:05 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:52.090 01:49:05 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:27:52.090 01:49:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:52.090 01:49:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:52.090 01:49:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:54.623 01:49:07 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:27:54.623 00:27:54.623 real 0m37.439s 00:27:54.623 user 2m22.286s 00:27:54.623 sys 0m7.824s 00:27:54.623 01:49:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:54.623 01:49:07 -- common/autotest_common.sh@10 -- # set +x 00:27:54.623 ************************************ 00:27:54.623 END TEST nvmf_fio_host 00:27:54.623 ************************************ 00:27:54.623 01:49:07 -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:27:54.623 01:49:07 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:54.623 01:49:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:54.623 01:49:07 -- common/autotest_common.sh@10 -- # set +x 00:27:54.623 ************************************ 00:27:54.623 START TEST nvmf_failover 00:27:54.623 ************************************ 00:27:54.623 01:49:07 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:27:54.623 * Looking for test storage... 
00:27:54.623 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:54.623 01:49:07 -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:54.623 01:49:07 -- nvmf/common.sh@7 -- # uname -s 00:27:54.623 01:49:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:54.623 01:49:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:54.623 01:49:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:54.623 01:49:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:54.623 01:49:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:54.623 01:49:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:54.623 01:49:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:54.623 01:49:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:54.623 01:49:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:54.623 01:49:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:54.623 01:49:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:54.623 01:49:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:54.623 01:49:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:54.623 01:49:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:54.623 01:49:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:54.623 01:49:07 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:54.623 01:49:07 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:54.623 01:49:07 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:54.623 01:49:07 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:54.623 01:49:07 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:54.623 01:49:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:54.624 01:49:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:54.624 01:49:07 -- paths/export.sh@5 -- # export PATH 00:27:54.624 01:49:07 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:54.624 01:49:07 -- nvmf/common.sh@46 -- # : 0 00:27:54.624 01:49:07 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:54.624 01:49:07 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:54.624 01:49:07 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:54.624 01:49:07 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:54.624 01:49:07 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:54.624 01:49:07 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:54.624 01:49:07 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:54.624 01:49:07 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:54.624 01:49:07 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:54.624 01:49:07 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:54.624 01:49:07 -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:54.624 01:49:07 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:54.624 01:49:07 -- host/failover.sh@18 -- # nvmftestinit 00:27:54.624 01:49:07 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:27:54.624 01:49:07 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:54.624 01:49:07 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:54.624 01:49:07 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:54.624 01:49:07 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:54.624 01:49:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:27:54.624 01:49:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:54.624 01:49:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:54.624 01:49:07 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:27:54.624 01:49:07 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:27:54.624 01:49:07 -- nvmf/common.sh@284 -- # xtrace_disable 00:27:54.624 01:49:07 -- common/autotest_common.sh@10 -- # set +x 00:27:56.527 01:49:09 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:56.527 01:49:09 -- nvmf/common.sh@290 -- # pci_devs=() 00:27:56.527 01:49:09 -- nvmf/common.sh@290 -- # local -a pci_devs 00:27:56.528 01:49:09 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:27:56.528 01:49:09 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:27:56.528 01:49:09 -- nvmf/common.sh@292 -- # pci_drivers=() 00:27:56.528 01:49:09 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:27:56.528 01:49:09 -- nvmf/common.sh@294 -- # net_devs=() 00:27:56.528 01:49:09 -- nvmf/common.sh@294 -- # local -ga net_devs 00:27:56.528 01:49:09 -- nvmf/common.sh@295 -- # e810=() 00:27:56.528 01:49:09 -- nvmf/common.sh@295 -- # local -ga e810 00:27:56.528 01:49:09 -- nvmf/common.sh@296 -- # x722=() 00:27:56.528 01:49:09 -- nvmf/common.sh@296 -- # local -ga x722 00:27:56.528 01:49:09 -- nvmf/common.sh@297 -- # mlx=() 00:27:56.528 01:49:09 -- nvmf/common.sh@297 -- # local -ga mlx 00:27:56.528 01:49:09 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:56.528 01:49:09 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:56.528 01:49:09 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:56.528 01:49:09 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:56.528 01:49:09 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:56.528 01:49:09 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:27:56.528 01:49:09 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:56.528 01:49:09 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:56.528 01:49:09 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:56.528 01:49:09 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:56.528 01:49:09 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:56.528 01:49:09 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:27:56.528 01:49:09 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:27:56.528 01:49:09 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:27:56.528 01:49:09 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:27:56.528 01:49:09 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:27:56.528 01:49:09 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:27:56.528 01:49:09 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:56.528 01:49:09 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:56.528 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:56.528 01:49:09 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:56.528 01:49:09 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:56.528 01:49:09 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:56.528 01:49:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:56.528 01:49:09 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:56.528 01:49:09 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:56.528 01:49:09 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:56.528 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:56.528 01:49:09 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:56.528 01:49:09 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:56.528 01:49:09 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:56.528 01:49:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:56.528 01:49:09 
-- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:56.528 01:49:09 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:27:56.528 01:49:09 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:27:56.528 01:49:09 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:27:56.528 01:49:09 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:56.528 01:49:09 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:56.528 01:49:09 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:56.528 01:49:09 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:56.528 01:49:09 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:56.528 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:56.528 01:49:09 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:56.528 01:49:09 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:56.528 01:49:09 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:56.528 01:49:09 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:56.528 01:49:09 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:56.528 01:49:09 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:56.528 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:56.528 01:49:09 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:56.528 01:49:09 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:27:56.528 01:49:09 -- nvmf/common.sh@402 -- # is_hw=yes 00:27:56.528 01:49:09 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:27:56.528 01:49:09 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:27:56.528 01:49:09 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:27:56.528 01:49:09 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:56.528 01:49:09 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:56.528 01:49:09 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:56.528 01:49:09 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 
00:27:56.528 01:49:09 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:56.528 01:49:09 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:56.528 01:49:09 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:27:56.528 01:49:09 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:56.528 01:49:09 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:56.528 01:49:09 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:27:56.528 01:49:09 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:27:56.528 01:49:09 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:27:56.528 01:49:09 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:56.528 01:49:09 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:56.528 01:49:09 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:56.528 01:49:09 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:27:56.528 01:49:09 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:56.528 01:49:09 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:56.528 01:49:09 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:56.528 01:49:09 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:27:56.528 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:56.528 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.261 ms 00:27:56.528 00:27:56.528 --- 10.0.0.2 ping statistics --- 00:27:56.528 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:56.528 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:27:56.528 01:49:09 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:56.528 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:56.528 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:27:56.528 00:27:56.528 --- 10.0.0.1 ping statistics --- 00:27:56.528 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:56.528 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:27:56.528 01:49:09 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:56.528 01:49:09 -- nvmf/common.sh@410 -- # return 0 00:27:56.528 01:49:09 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:56.528 01:49:09 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:56.528 01:49:09 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:27:56.528 01:49:09 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:27:56.528 01:49:09 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:56.528 01:49:09 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:27:56.528 01:49:09 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:27:56.528 01:49:09 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:27:56.528 01:49:09 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:27:56.528 01:49:09 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:56.528 01:49:09 -- common/autotest_common.sh@10 -- # set +x 00:27:56.528 01:49:09 -- nvmf/common.sh@469 -- # nvmfpid=3883813 00:27:56.528 01:49:09 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:56.528 01:49:09 -- nvmf/common.sh@470 -- # waitforlisten 3883813 00:27:56.528 01:49:09 -- common/autotest_common.sh@819 -- # '[' -z 3883813 ']' 00:27:56.528 01:49:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:56.528 01:49:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:56.528 01:49:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:27:56.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:56.528 01:49:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:56.528 01:49:09 -- common/autotest_common.sh@10 -- # set +x 00:27:56.528 [2024-07-23 01:49:09.419878] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:27:56.528 [2024-07-23 01:49:09.419972] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:56.528 EAL: No free 2048 kB hugepages reported on node 1 00:27:56.528 [2024-07-23 01:49:09.484541] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:56.528 [2024-07-23 01:49:09.571605] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:56.528 [2024-07-23 01:49:09.571763] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:56.528 [2024-07-23 01:49:09.571781] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:56.528 [2024-07-23 01:49:09.571793] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:56.528 [2024-07-23 01:49:09.571903] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:56.528 [2024-07-23 01:49:09.571955] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:56.528 [2024-07-23 01:49:09.571958] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:57.464 01:49:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:57.464 01:49:10 -- common/autotest_common.sh@852 -- # return 0 00:27:57.464 01:49:10 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:27:57.464 01:49:10 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:57.464 01:49:10 -- common/autotest_common.sh@10 -- # set +x 00:27:57.464 01:49:10 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:57.464 01:49:10 -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:57.722 [2024-07-23 01:49:10.615513] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:57.722 01:49:10 -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:27:57.980 Malloc0 00:27:57.980 01:49:10 -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:58.238 01:49:11 -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:58.496 01:49:11 -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:58.754 [2024-07-23 01:49:11.604856] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:58.754 01:49:11 -- host/failover.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:58.754 [2024-07-23 01:49:11.833490] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:58.754 01:49:11 -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:27:59.013 [2024-07-23 01:49:12.066300] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:27:59.013 01:49:12 -- host/failover.sh@31 -- # bdevperf_pid=3884121 00:27:59.013 01:49:12 -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:27:59.013 01:49:12 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:59.013 01:49:12 -- host/failover.sh@34 -- # waitforlisten 3884121 /var/tmp/bdevperf.sock 00:27:59.013 01:49:12 -- common/autotest_common.sh@819 -- # '[' -z 3884121 ']' 00:27:59.013 01:49:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:59.013 01:49:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:59.013 01:49:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:59.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:27:59.013 01:49:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:59.013 01:49:12 -- common/autotest_common.sh@10 -- # set +x 00:28:00.399 01:49:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:00.399 01:49:13 -- common/autotest_common.sh@852 -- # return 0 00:28:00.399 01:49:13 -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:00.399 NVMe0n1 00:28:00.399 01:49:13 -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:00.660 00:28:00.919 01:49:13 -- host/failover.sh@39 -- # run_test_pid=3884384 00:28:00.919 01:49:13 -- host/failover.sh@41 -- # sleep 1 00:28:00.919 01:49:13 -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:01.855 01:49:14 -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:02.115 [2024-07-23 01:49:14.994306] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1b540 is same with the state(5) to be set 00:28:02.115 [2024-07-23 01:49:14.994400] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1b540 is same with the state(5) to be set 00:28:02.115 [2024-07-23 01:49:14.994416] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1b540 is same with the state(5) to be set 00:28:02.115 [2024-07-23 01:49:14.994428] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1b540 is same with the state(5) to be set 00:28:02.115 [2024-07-23 
01:49:14.994440] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1b540 is same with the state(5) to be set 00:28:02.115 [2024-07-23 01:49:14.994752] tcp.c:1574:nvmf_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0xc1b540 is same with the state(5) to be set 00:28:02.115 [2024-07-23 01:49:14.994763] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1b540 is same with the state(5) to be set 00:28:02.115 [2024-07-23 01:49:14.994774] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1b540 is same with the state(5) to be set 00:28:02.115 [2024-07-23 01:49:14.994785] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1b540 is same with the state(5) to be set 00:28:02.115 [2024-07-23 01:49:14.994797] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1b540 is same with the state(5) to be set 00:28:02.115 [2024-07-23 01:49:14.994808] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1b540 is same with the state(5) to be set 00:28:02.115 [2024-07-23 01:49:14.994821] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1b540 is same with the state(5) to be set 00:28:02.115 [2024-07-23 01:49:14.994833] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1b540 is same with the state(5) to be set 00:28:02.115 [2024-07-23 01:49:14.994844] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1b540 is same with the state(5) to be set 00:28:02.115 [2024-07-23 01:49:14.994855] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1b540 is same with the state(5) to be set 00:28:02.115 [2024-07-23 01:49:14.994867] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1b540 is same with the state(5) to be set 00:28:02.115 [2024-07-23 01:49:14.994879] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1b540 is same with the state(5) to be set 00:28:02.115 [2024-07-23 01:49:14.994892] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1b540 
is same with the state(5) to be set 00:28:02.115 [2024-07-23 01:49:14.994923] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1b540 is same with the state(5) to be set 00:28:02.115 [2024-07-23 01:49:14.994935] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1b540 is same with the state(5) to be set 00:28:02.115 [2024-07-23 01:49:14.994946] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1b540 is same with the state(5) to be set 00:28:02.115 [2024-07-23 01:49:14.994958] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1b540 is same with the state(5) to be set 00:28:02.115 01:49:15 -- host/failover.sh@45 -- # sleep 3 00:28:05.404 01:49:18 -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:05.404 00:28:05.404 01:49:18 -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:05.970 [2024-07-23 01:49:18.767121] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1c3d0 is same with the state(5) to be set 00:28:05.970 [2024-07-23 01:49:18.767205] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1c3d0 is same with the state(5) to be set 00:28:05.971 [2024-07-23 01:49:18.767220] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1c3d0 is same with the state(5) to be set 00:28:05.971 [2024-07-23 01:49:18.767233] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1c3d0 is same with the state(5) to be set 00:28:05.971 [2024-07-23 01:49:18.767245] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1c3d0 is same with the 
state(5) to be set 00:28:05.971 [2024-07-23 01:49:18.767256] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1c3d0 is same with the state(5) to be set 00:28:05.971 [2024-07-23 01:49:18.767268] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1c3d0 is same with the state(5) to be set 00:28:05.971 [2024-07-23 01:49:18.767279] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1c3d0 is same with the state(5) to be set 00:28:05.971 [2024-07-23 01:49:18.767290] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1c3d0 is same with the state(5) to be set 00:28:05.971 [2024-07-23 01:49:18.767301] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1c3d0 is same with the state(5) to be set 00:28:05.971 [2024-07-23 01:49:18.767312] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1c3d0 is same with the state(5) to be set 00:28:05.971 [2024-07-23 01:49:18.767323] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1c3d0 is same with the state(5) to be set 00:28:05.971 [2024-07-23 01:49:18.767334] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1c3d0 is same with the state(5) to be set 00:28:05.971 [2024-07-23 01:49:18.767347] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1c3d0 is same with the state(5) to be set 00:28:05.971 [2024-07-23 01:49:18.767358] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1c3d0 is same with the state(5) to be set 00:28:05.971 [2024-07-23 01:49:18.767386] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1c3d0 is same with the state(5) to be set 00:28:05.971 [2024-07-23 01:49:18.767399] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1c3d0 is same with the state(5) to be set 00:28:05.971 [2024-07-23 
01:49:18.767413] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1c3d0 is same with the state(5) to be set 00:28:05.971 [2024-07-23 01:49:18.767431] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1c3d0 is same with the state(5) to be set 00:28:05.971 [2024-07-23 01:49:18.767453] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1c3d0 is same with the state(5) to be set 00:28:05.971 [2024-07-23 01:49:18.767490] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1c3d0 is same with the state(5) to be set 00:28:05.971 [2024-07-23 01:49:18.767516] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1c3d0 is same with the state(5) to be set 00:28:05.971 [2024-07-23 01:49:18.767534] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1c3d0 is same with the state(5) to be set 00:28:05.971 [2024-07-23 01:49:18.767547] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1c3d0 is same with the state(5) to be set 00:28:05.971 [2024-07-23 01:49:18.767560] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1c3d0 is same with the state(5) to be set 00:28:05.971 [2024-07-23 01:49:18.767572] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1c3d0 is same with the state(5) to be set 00:28:05.971 [2024-07-23 01:49:18.767583] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1c3d0 is same with the state(5) to be set 00:28:05.971 [2024-07-23 01:49:18.767594] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1c3d0 is same with the state(5) to be set 00:28:05.971 [2024-07-23 01:49:18.767606] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1c3d0 is same with the state(5) to be set 00:28:05.971 [2024-07-23 01:49:18.767626] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1c3d0 is same with the state(5) to be set 00:28:05.971 [2024-07-23 01:49:18.767640] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1c3d0 is same with the state(5) to be set 00:28:05.971 [2024-07-23 01:49:18.767652] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1c3d0 is same with the state(5) to be set 00:28:05.971 [2024-07-23 01:49:18.767663] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1c3d0 is same with the state(5) to be set 00:28:05.971 [2024-07-23 01:49:18.767675] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1c3d0 is same with the state(5) to be set 00:28:05.971 [2024-07-23 01:49:18.767687] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1c3d0 is same with the state(5) to be set 00:28:05.971 [2024-07-23 01:49:18.767698] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1c3d0 is same with the state(5) to be set 00:28:05.971 [2024-07-23 01:49:18.767711] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1c3d0 is same with the state(5) to be set 00:28:05.971 [2024-07-23 01:49:18.767723] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1c3d0 is same with the state(5) to be set 00:28:05.971 [2024-07-23 01:49:18.767735] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1c3d0 is same with the state(5) to be set 00:28:05.971 [2024-07-23 01:49:18.767746] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1c3d0 is same with the state(5) to be set 00:28:05.971 [2024-07-23 01:49:18.767758] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1c3d0 is same with the state(5) to be set 00:28:05.971 [2024-07-23 01:49:18.767770] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xc1c3d0 is same with the state(5) to be set 00:28:05.971 [2024-07-23 01:49:18.767782] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1c3d0 is same with the state(5) to be set 00:28:05.971 [2024-07-23 01:49:18.767794] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1c3d0 is same with the state(5) to be set 00:28:05.971 [2024-07-23 01:49:18.767805] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1c3d0 is same with the state(5) to be set 00:28:05.971 [2024-07-23 01:49:18.767818] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1c3d0 is same with the state(5) to be set 00:28:05.971 [2024-07-23 01:49:18.767829] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1c3d0 is same with the state(5) to be set 00:28:05.971 [2024-07-23 01:49:18.767845] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1c3d0 is same with the state(5) to be set 00:28:05.971 [2024-07-23 01:49:18.767858] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1c3d0 is same with the state(5) to be set 00:28:05.971 [2024-07-23 01:49:18.767870] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1c3d0 is same with the state(5) to be set 00:28:05.971 [2024-07-23 01:49:18.767882] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1c3d0 is same with the state(5) to be set 00:28:05.971 [2024-07-23 01:49:18.767894] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1c3d0 is same with the state(5) to be set 00:28:05.971 [2024-07-23 01:49:18.767906] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1c3d0 is same with the state(5) to be set 00:28:05.971 [2024-07-23 01:49:18.767918] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1c3d0 
is same with the state(5) to be set 00:28:05.971 [2024-07-23 01:49:18.767945] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1c3d0 is same with the state(5) to be set 00:28:05.971 [2024-07-23 01:49:18.767957] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1c3d0 is same with the state(5) to be set 00:28:05.971 [2024-07-23 01:49:18.767969] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1c3d0 is same with the state(5) to be set 00:28:05.971 [2024-07-23 01:49:18.767980] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1c3d0 is same with the state(5) to be set 00:28:05.971 [2024-07-23 01:49:18.767991] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1c3d0 is same with the state(5) to be set 00:28:05.971 [2024-07-23 01:49:18.768002] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1c3d0 is same with the state(5) to be set 00:28:05.971 [2024-07-23 01:49:18.768013] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1c3d0 is same with the state(5) to be set 00:28:05.971 [2024-07-23 01:49:18.768026] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1c3d0 is same with the state(5) to be set 00:28:05.971 [2024-07-23 01:49:18.768037] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1c3d0 is same with the state(5) to be set 00:28:05.971 [2024-07-23 01:49:18.768048] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1c3d0 is same with the state(5) to be set 00:28:05.971 [2024-07-23 01:49:18.768059] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1c3d0 is same with the state(5) to be set 00:28:05.971 01:49:18 -- host/failover.sh@50 -- # sleep 3 00:28:09.255 01:49:21 -- host/failover.sh@53 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:09.255 [2024-07-23 01:49:22.008249] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:09.255 01:49:22 -- host/failover.sh@55 -- # sleep 1 00:28:10.190 01:49:23 -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:28:10.190 [2024-07-23 01:49:23.254578] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1cf40 is same with the state(5) to be set 00:28:10.190 [2024-07-23 01:49:23.254650] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1cf40 is same with the state(5) to be set 00:28:10.190 [2024-07-23 01:49:23.254675] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1cf40 is same with the state(5) to be set 00:28:10.190 [2024-07-23 01:49:23.254703] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1cf40 is same with the state(5) to be set 00:28:10.190 [2024-07-23 01:49:23.254714] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1cf40 is same with the state(5) to be set 00:28:10.190 [2024-07-23 01:49:23.254726] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1cf40 is same with the state(5) to be set 00:28:10.190 [2024-07-23 01:49:23.254761] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1cf40 is same with the state(5) to be set 00:28:10.190 [2024-07-23 01:49:23.254773] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1cf40 is same with the state(5) to be set 00:28:10.190 [2024-07-23 01:49:23.254784] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1cf40 is same with the state(5) 
to be set 00:28:10.190 [2024-07-23 01:49:23.254795] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1cf40 is same with the state(5) to be set 00:28:10.190 [2024-07-23 01:49:23.254807] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1cf40 is same with the state(5) to be set 00:28:10.190 [2024-07-23 01:49:23.254818] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1cf40 is same with the state(5) to be set 00:28:10.190 [2024-07-23 01:49:23.254830] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1cf40 is same with the state(5) to be set 00:28:10.190 [2024-07-23 01:49:23.254841] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1cf40 is same with the state(5) to be set 00:28:10.190 [2024-07-23 01:49:23.254853] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1cf40 is same with the state(5) to be set 00:28:10.190 [2024-07-23 01:49:23.254864] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1cf40 is same with the state(5) to be set 00:28:10.190 [2024-07-23 01:49:23.254875] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1cf40 is same with the state(5) to be set 00:28:10.190 [2024-07-23 01:49:23.254898] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1cf40 is same with the state(5) to be set 00:28:10.190 [2024-07-23 01:49:23.254909] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1cf40 is same with the state(5) to be set 00:28:10.190 [2024-07-23 01:49:23.254921] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1cf40 is same with the state(5) to be set 00:28:10.190 [2024-07-23 01:49:23.254933] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1cf40 is same with the state(5) to be set 00:28:10.190 [2024-07-23 
01:49:23.254944] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1cf40 is same with the state(5) to be set 00:28:10.190 [2024-07-23 01:49:23.254956] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1cf40 is same with the state(5) to be set 00:28:10.190 [2024-07-23 01:49:23.254968] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1cf40 is same with the state(5) to be set 00:28:10.190 [2024-07-23 01:49:23.254979] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1cf40 is same with the state(5) to be set 00:28:10.190 [2024-07-23 01:49:23.254990] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1cf40 is same with the state(5) to be set 00:28:10.190 [2024-07-23 01:49:23.255002] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1cf40 is same with the state(5) to be set 00:28:10.190 [2024-07-23 01:49:23.255013] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1cf40 is same with the state(5) to be set 00:28:10.190 [2024-07-23 01:49:23.255025] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1cf40 is same with the state(5) to be set 00:28:10.190 [2024-07-23 01:49:23.255036] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1cf40 is same with the state(5) to be set 00:28:10.190 [2024-07-23 01:49:23.255048] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1cf40 is same with the state(5) to be set 00:28:10.190 [2024-07-23 01:49:23.255059] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1cf40 is same with the state(5) to be set 00:28:10.191 [2024-07-23 01:49:23.255070] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1cf40 is same with the state(5) to be set 00:28:10.191 [2024-07-23 01:49:23.255085] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1cf40 is same with the state(5) to be set 00:28:10.191 [2024-07-23 01:49:23.255098] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1cf40 is same with the state(5) to be set 00:28:10.191 [2024-07-23 01:49:23.255110] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1cf40 is same with the state(5) to be set 00:28:10.191 [2024-07-23 01:49:23.255121] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1cf40 is same with the state(5) to be set 00:28:10.191 [2024-07-23 01:49:23.255132] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1cf40 is same with the state(5) to be set 00:28:10.191 [2024-07-23 01:49:23.255143] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1cf40 is same with the state(5) to be set 00:28:10.191 [2024-07-23 01:49:23.255154] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1cf40 is same with the state(5) to be set 00:28:10.191 [2024-07-23 01:49:23.255166] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1cf40 is same with the state(5) to be set 00:28:10.191 01:49:23 -- host/failover.sh@59 -- # wait 3884384 00:28:16.771 0 00:28:16.771 01:49:28 -- host/failover.sh@61 -- # killprocess 3884121 00:28:16.771 01:49:28 -- common/autotest_common.sh@926 -- # '[' -z 3884121 ']' 00:28:16.771 01:49:28 -- common/autotest_common.sh@930 -- # kill -0 3884121 00:28:16.771 01:49:28 -- common/autotest_common.sh@931 -- # uname 00:28:16.771 01:49:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:16.771 01:49:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3884121 00:28:16.771 01:49:28 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:16.771 01:49:28 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:16.771 01:49:28 -- 
common/autotest_common.sh@944 -- # echo 'killing process with pid 3884121' 00:28:16.771 killing process with pid 3884121 00:28:16.771 01:49:28 -- common/autotest_common.sh@945 -- # kill 3884121 00:28:16.771 01:49:28 -- common/autotest_common.sh@950 -- # wait 3884121 00:28:16.771 01:49:29 -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:16.771 [2024-07-23 01:49:12.122108] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:28:16.771 [2024-07-23 01:49:12.122207] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3884121 ] 00:28:16.771 EAL: No free 2048 kB hugepages reported on node 1 00:28:16.771 [2024-07-23 01:49:12.184164] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:16.771 [2024-07-23 01:49:12.268826] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:16.771 Running I/O for 15 seconds... 
00:28:16.771 [2024-07-23 01:49:14.995282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:116376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.771 [2024-07-23 01:49:14.995326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.771 [2024-07-23 01:49:14.995358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:116392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.771 [2024-07-23 01:49:14.995375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.771 [2024-07-23 01:49:14.995392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:116408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.771 [2024-07-23 01:49:14.995406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.771 [2024-07-23 01:49:14.995420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:116416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.771 [2024-07-23 01:49:14.995434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.771 [2024-07-23 01:49:14.995449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:116424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.771 [2024-07-23 01:49:14.995463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.771 [2024-07-23 01:49:14.995477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:116456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.771 [2024-07-23 01:49:14.995491] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.771 [2024-07-23 01:49:14.995506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:115744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.771 [2024-07-23 01:49:14.995519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.771 [2024-07-23 01:49:14.995534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:115784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.771 [2024-07-23 01:49:14.995547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.771 [2024-07-23 01:49:14.995562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:115800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.771 [2024-07-23 01:49:14.995575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.771 [2024-07-23 01:49:14.995604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:115808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.771 [2024-07-23 01:49:14.995627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.771 [2024-07-23 01:49:14.995645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:115832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.772 [2024-07-23 01:49:14.995660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.772 [2024-07-23 01:49:14.995681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 
nsid:1 lba:115840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.772 [2024-07-23 01:49:14.995696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.772 [2024-07-23 01:49:14.995711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:115848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.772 [2024-07-23 01:49:14.995725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.772 [2024-07-23 01:49:14.995739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:115856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.772 [2024-07-23 01:49:14.995753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.772 [2024-07-23 01:49:14.995768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:116480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.772 [2024-07-23 01:49:14.995781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.772 [2024-07-23 01:49:14.995796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:116488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.772 [2024-07-23 01:49:14.995809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.772 [2024-07-23 01:49:14.995824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:115872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.772 [2024-07-23 01:49:14.995838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:16.772 [2024-07-23 01:49:14.995853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:115880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:16.772 [2024-07-23 01:49:14.995867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.772 [2024-07-23 01:49:14.995882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:115896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:16.772 [2024-07-23 01:49:14.995896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.772 [2024-07-23 01:49:14.995925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:115920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:16.772 [2024-07-23 01:49:14.995939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.772 [2024-07-23 01:49:14.995953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:115928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:16.772 [2024-07-23 01:49:14.995967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.772 [2024-07-23 01:49:14.995981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:115944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:16.772 [2024-07-23 01:49:14.996008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.772 [2024-07-23 01:49:14.996024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:115968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:16.772 [2024-07-23 01:49:14.996038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.772 [2024-07-23 01:49:14.996053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:115976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:16.772 [2024-07-23 01:49:14.996070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.772 [2024-07-23 01:49:14.996085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:116512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:16.772 [2024-07-23 01:49:14.996099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.772 [2024-07-23 01:49:14.996114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:116536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:16.772 [2024-07-23 01:49:14.996128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.772 [2024-07-23 01:49:14.996144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:116552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:16.772 [2024-07-23 01:49:14.996157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.772 [2024-07-23 01:49:14.996172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:116560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:16.772 [2024-07-23 01:49:14.996185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.772 [2024-07-23 01:49:14.996200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:116568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:16.772 [2024-07-23 01:49:14.996213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.772 [2024-07-23 01:49:14.996228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:116576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:16.772 [2024-07-23 01:49:14.996241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.772 [2024-07-23 01:49:14.996257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:116584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:16.772 [2024-07-23 01:49:14.996270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.772 [2024-07-23 01:49:14.996285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:116592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:16.772 [2024-07-23 01:49:14.996299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.772 [2024-07-23 01:49:14.996313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:116600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:16.772 [2024-07-23 01:49:14.996328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.772 [2024-07-23 01:49:14.996344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:116608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:16.772 [2024-07-23 01:49:14.996358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.772 [2024-07-23 01:49:14.996373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:116616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:16.772 [2024-07-23 01:49:14.996386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.772 [2024-07-23 01:49:14.996401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:116624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:16.772 [2024-07-23 01:49:14.996415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.772 [2024-07-23 01:49:14.996433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:116632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:16.772 [2024-07-23 01:49:14.996447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.772 [2024-07-23 01:49:14.996462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:115992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:16.772 [2024-07-23 01:49:14.996476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.772 [2024-07-23 01:49:14.996492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:116008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:16.772 [2024-07-23 01:49:14.996505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.772 [2024-07-23 01:49:14.996520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:116024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:16.772 [2024-07-23 01:49:14.996534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.772 [2024-07-23 01:49:14.996548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:116032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:16.772 [2024-07-23 01:49:14.996562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.772 [2024-07-23 01:49:14.996576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:116040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:16.772 [2024-07-23 01:49:14.996589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.772 [2024-07-23 01:49:14.996629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:116064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:16.772 [2024-07-23 01:49:14.996646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.772 [2024-07-23 01:49:14.996661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:116072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:16.772 [2024-07-23 01:49:14.996675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.772 [2024-07-23 01:49:14.996690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:116088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:16.772 [2024-07-23 01:49:14.996705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.772 [2024-07-23 01:49:14.996720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:116640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:16.772 [2024-07-23 01:49:14.996733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.772 [2024-07-23 01:49:14.996749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:116648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:16.772 [2024-07-23 01:49:14.996762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.772 [2024-07-23 01:49:14.996779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:116656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:16.772 [2024-07-23 01:49:14.996793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.772 [2024-07-23 01:49:14.996808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:116664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:16.773 [2024-07-23 01:49:14.996831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.773 [2024-07-23 01:49:14.996848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:116672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:16.773 [2024-07-23 01:49:14.996862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.773 [2024-07-23 01:49:14.996877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:116680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:16.773 [2024-07-23 01:49:14.996891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.773 [2024-07-23 01:49:14.996921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:116688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:16.773 [2024-07-23 01:49:14.996935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.773 [2024-07-23 01:49:14.996950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:116112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:16.773 [2024-07-23 01:49:14.996963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.773 [2024-07-23 01:49:14.996977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:116136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:16.773 [2024-07-23 01:49:14.996990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.773 [2024-07-23 01:49:14.997005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:116152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:16.773 [2024-07-23 01:49:14.997019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.773 [2024-07-23 01:49:14.997034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:116176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:16.773 [2024-07-23 01:49:14.997047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.773 [2024-07-23 01:49:14.997062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:116192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:16.773 [2024-07-23 01:49:14.997075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.773 [2024-07-23 01:49:14.997090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:116200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:16.773 [2024-07-23 01:49:14.997104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.773 [2024-07-23 01:49:14.997119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:116216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:16.773 [2024-07-23 01:49:14.997132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.773 [2024-07-23 01:49:14.997147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:116240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:16.773 [2024-07-23 01:49:14.997160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.773 [2024-07-23 01:49:14.997175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:116696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:16.773 [2024-07-23 01:49:14.997188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.773 [2024-07-23 01:49:14.997206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:116704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:16.773 [2024-07-23 01:49:14.997220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.773 [2024-07-23 01:49:14.997235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:116712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:16.773 [2024-07-23 01:49:14.997248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.773 [2024-07-23 01:49:14.997263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:116720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:16.773 [2024-07-23 01:49:14.997276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.773 [2024-07-23 01:49:14.997291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:116728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:16.773 [2024-07-23 01:49:14.997307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.773 [2024-07-23 01:49:14.997322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:116736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:16.773 [2024-07-23 01:49:14.997336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.773 [2024-07-23 01:49:14.997350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:116744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:16.773 [2024-07-23 01:49:14.997363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.773 [2024-07-23 01:49:14.997378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:116248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:16.773 [2024-07-23 01:49:14.997391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.773 [2024-07-23 01:49:14.997405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:116272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:16.773 [2024-07-23 01:49:14.997418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.773 [2024-07-23 01:49:14.997433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:116280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:16.773 [2024-07-23 01:49:14.997446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.773 [2024-07-23 01:49:14.997461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:116296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:16.773 [2024-07-23 01:49:14.997474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.773 [2024-07-23 01:49:14.997489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:116304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:16.773 [2024-07-23 01:49:14.997502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.773 [2024-07-23 01:49:14.997517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:116312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:16.773 [2024-07-23 01:49:14.997531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.773 [2024-07-23 01:49:14.997545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:116328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:16.773 [2024-07-23 01:49:14.997561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.773 [2024-07-23 01:49:14.997576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:116336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:16.773 [2024-07-23 01:49:14.997590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.773 [2024-07-23 01:49:14.997634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:116752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:16.773 [2024-07-23 01:49:14.997651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.773 [2024-07-23 01:49:14.997666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:116760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:16.773 [2024-07-23 01:49:14.997680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.773 [2024-07-23 01:49:14.997695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:116768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:16.773 [2024-07-23 01:49:14.997708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.773 [2024-07-23 01:49:14.997723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:116776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:16.773 [2024-07-23 01:49:14.997737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.773 [2024-07-23 01:49:14.997752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:116784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:16.773 [2024-07-23 01:49:14.997767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.773 [2024-07-23 01:49:14.997782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:116792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:16.773 [2024-07-23 01:49:14.997800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.773 [2024-07-23 01:49:14.997815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:116800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:16.773 [2024-07-23 01:49:14.997829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.773 [2024-07-23 01:49:14.997844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:116808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:16.773 [2024-07-23 01:49:14.997857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.773 [2024-07-23 01:49:14.997873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:116816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:16.773 [2024-07-23 01:49:14.997886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.773 [2024-07-23 01:49:14.997917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:116824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:16.773 [2024-07-23 01:49:14.997931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.773 [2024-07-23 01:49:14.997946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:116832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:16.773 [2024-07-23 01:49:14.997959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.773 [2024-07-23 01:49:14.997974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:116840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:16.774 [2024-07-23 01:49:14.997990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.774 [2024-07-23 01:49:14.998005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:116848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:16.774 [2024-07-23 01:49:14.998019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.774 [2024-07-23 01:49:14.998034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:116856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:16.774 [2024-07-23 01:49:14.998047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.774 [2024-07-23 01:49:14.998062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:116864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:16.774 [2024-07-23 01:49:14.998076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.774 [2024-07-23 01:49:14.998090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:116872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:16.774 [2024-07-23 01:49:14.998103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.774 [2024-07-23 01:49:14.998118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:116880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:16.774 [2024-07-23 01:49:14.998131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.774 [2024-07-23 01:49:14.998146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:116888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:16.774 [2024-07-23 01:49:14.998159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.774 [2024-07-23 01:49:14.998173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:116896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:16.774 [2024-07-23 01:49:14.998187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.774 [2024-07-23 01:49:14.998202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:116904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:16.774 [2024-07-23 01:49:14.998214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.774 [2024-07-23 01:49:14.998229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:116912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:16.774 [2024-07-23 01:49:14.998242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.774 [2024-07-23 01:49:14.998257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:116920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:16.774 [2024-07-23 01:49:14.998274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.774 [2024-07-23 01:49:14.998289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:116928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:16.774 [2024-07-23 01:49:14.998302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.774 [2024-07-23 01:49:14.998332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:116936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:16.774 [2024-07-23 01:49:14.998347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.774 [2024-07-23 01:49:14.998366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:116944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:16.774 [2024-07-23 01:49:14.998381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.774 [2024-07-23 01:49:14.998396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:116344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:16.774 [2024-07-23 01:49:14.998410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.774 [2024-07-23 01:49:14.998425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:116352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:16.774 [2024-07-23 01:49:14.998438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.774 [2024-07-23 01:49:14.998454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:116360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:16.774 [2024-07-23 01:49:14.998467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.774 [2024-07-23 01:49:14.998482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:116368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:16.774 [2024-07-23 01:49:14.998496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.774 [2024-07-23 01:49:14.998511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:116384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:16.774 [2024-07-23 01:49:14.998525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.774 [2024-07-23 01:49:14.998540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:116400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:16.774 [2024-07-23 01:49:14.998553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.774 [2024-07-23 01:49:14.998568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:116432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:16.774 [2024-07-23 01:49:14.998582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.774 [2024-07-23 01:49:14.998597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:116440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:16.774 [2024-07-23 01:49:14.998611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.774 [2024-07-23 01:49:14.998637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:116952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:16.774 [2024-07-23 01:49:14.998652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.774 [2024-07-23 01:49:14.998667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:116960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:16.774 [2024-07-23 01:49:14.998683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.774 [2024-07-23 01:49:14.998699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:116968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:16.774 [2024-07-23 01:49:14.998714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.774 [2024-07-23 01:49:14.998729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:116976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:16.774 [2024-07-23 01:49:14.998748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.774 [2024-07-23 01:49:14.998766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:116984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:16.774 [2024-07-23 01:49:14.998785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.774 [2024-07-23 01:49:14.998801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:116992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:16.774 [2024-07-23 01:49:14.998816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.774 [2024-07-23 01:49:14.998832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:117000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:16.774 [2024-07-23 01:49:14.998847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.774 [2024-07-23 01:49:14.998864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:117008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:16.774 [2024-07-23 01:49:14.998879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.774 [2024-07-23 01:49:14.998895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:117016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:16.774 [2024-07-23 01:49:14.998908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.774 [2024-07-23 01:49:14.998924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:117024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:16.774 [2024-07-23 01:49:14.998938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.774 [2024-07-23 01:49:14.998969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:117032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:16.774 [2024-07-23 01:49:14.998982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.774 [2024-07-23 01:49:14.998997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:117040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:16.774 [2024-07-23 01:49:14.999010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.774 [2024-07-23 01:49:14.999025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:116448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:16.774 [2024-07-23 01:49:14.999038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.774 [2024-07-23 01:49:14.999053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:116464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:16.774 [2024-07-23 01:49:14.999066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.774 [2024-07-23 01:49:14.999081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:116472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:16.774 [2024-07-23 01:49:14.999094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.774 [2024-07-23 01:49:14.999110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:116496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:16.774 [2024-07-23 01:49:14.999122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.774 [2024-07-23 01:49:14.999140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:116504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:16.775 [2024-07-23 01:49:14.999155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.775 [2024-07-23 01:49:14.999170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:116520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:16.775 [2024-07-23 01:49:14.999183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.775 [2024-07-23 01:49:14.999198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:116528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:16.775 [2024-07-23 01:49:14.999210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.775 [2024-07-23 01:49:14.999224] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a7320 is same with the state(5) to be set
00:28:16.775 [2024-07-23 01:49:14.999241] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:28:16.775 [2024-07-23 01:49:14.999252] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:28:16.775 [2024-07-23 01:49:14.999267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:116544 len:8 PRP1 0x0 PRP2 0x0
00:28:16.775 [2024-07-23 01:49:14.999280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:16.775 [2024-07-23 01:49:14.999343] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x14a7320 was disconnected and freed. reset controller.
00:28:16.775 [2024-07-23 01:49:14.999374] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:28:16.775 [2024-07-23 01:49:14.999423] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:16.775 [2024-07-23 01:49:14.999442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.775 [2024-07-23 01:49:14.999457] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:16.775 [2024-07-23 01:49:14.999471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.775 [2024-07-23 01:49:14.999485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:16.775 [2024-07-23 01:49:14.999498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.775 [2024-07-23 01:49:14.999512] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:16.775 [2024-07-23 01:49:14.999525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.775 [2024-07-23 01:49:14.999538] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:28:16.775 [2024-07-23 01:49:15.001756] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:16.775 [2024-07-23 01:49:15.001797] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1488790 (9): Bad file descriptor 00:28:16.775 [2024-07-23 01:49:15.034972] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:28:16.775 [2024-07-23 01:49:18.767379] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:16.775 [2024-07-23 01:49:18.767425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.775 [2024-07-23 01:49:18.767453] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:16.775 [2024-07-23 01:49:18.767469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.775 [2024-07-23 01:49:18.767483] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:16.775 [2024-07-23 01:49:18.767497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.775 [2024-07-23 01:49:18.767511] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:16.775 [2024-07-23 01:49:18.767524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.775 [2024-07-23 01:49:18.767537] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1488790 is same with the state(5) to be set 
00:28:16.775 [2024-07-23 01:49:18.768232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:124280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.775 [2024-07-23 01:49:18.768255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.775 [2024-07-23 01:49:18.768283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:124296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.775 [2024-07-23 01:49:18.768298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.775 [2024-07-23 01:49:18.768316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:124304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.775 [2024-07-23 01:49:18.768330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.775 [2024-07-23 01:49:18.768346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:123728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.775 [2024-07-23 01:49:18.768359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.775 [2024-07-23 01:49:18.768374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:123736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.775 [2024-07-23 01:49:18.768388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.775 [2024-07-23 01:49:18.768403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:123752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.775 [2024-07-23 01:49:18.768417] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.775 [2024-07-23 01:49:18.768432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:123760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.775 [2024-07-23 01:49:18.768445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.775 [2024-07-23 01:49:18.768460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:123784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.775 [2024-07-23 01:49:18.768474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.775 [2024-07-23 01:49:18.768503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:123792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.775 [2024-07-23 01:49:18.768516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.775 [2024-07-23 01:49:18.768535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:123800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.775 [2024-07-23 01:49:18.768549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.775 [2024-07-23 01:49:18.768564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:123816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.775 [2024-07-23 01:49:18.768577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.775 [2024-07-23 01:49:18.768592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 
nsid:1 lba:124320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.775 [2024-07-23 01:49:18.768628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.775 [2024-07-23 01:49:18.768646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:124336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.775 [2024-07-23 01:49:18.768675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.775 [2024-07-23 01:49:18.768692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:124352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.775 [2024-07-23 01:49:18.768706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.775 [2024-07-23 01:49:18.768722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:124360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.775 [2024-07-23 01:49:18.768736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.775 [2024-07-23 01:49:18.768751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:124376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.775 [2024-07-23 01:49:18.768765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.775 [2024-07-23 01:49:18.768781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:124384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.775 [2024-07-23 01:49:18.768795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:16.775 [2024-07-23 01:49:18.768810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:124400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.775 [2024-07-23 01:49:18.768824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.775 [2024-07-23 01:49:18.768839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:124416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.775 [2024-07-23 01:49:18.768853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.775 [2024-07-23 01:49:18.768868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:124424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.775 [2024-07-23 01:49:18.768881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.775 [2024-07-23 01:49:18.768897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:123840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.775 [2024-07-23 01:49:18.768910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.775 [2024-07-23 01:49:18.768940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:123856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.775 [2024-07-23 01:49:18.768957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.775 [2024-07-23 01:49:18.768975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:123864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.776 [2024-07-23 01:49:18.769005] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.776 [2024-07-23 01:49:18.769021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:123872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.776 [2024-07-23 01:49:18.769035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.776 [2024-07-23 01:49:18.769049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:123888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.776 [2024-07-23 01:49:18.769062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.776 [2024-07-23 01:49:18.769077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:123920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.776 [2024-07-23 01:49:18.769093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.776 [2024-07-23 01:49:18.769108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:123928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.776 [2024-07-23 01:49:18.769122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.776 [2024-07-23 01:49:18.769139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:123936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.776 [2024-07-23 01:49:18.769152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.776 [2024-07-23 01:49:18.769168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 
nsid:1 lba:124432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.776 [2024-07-23 01:49:18.769182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.776 [2024-07-23 01:49:18.769197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:124456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.776 [2024-07-23 01:49:18.769212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.776 [2024-07-23 01:49:18.769227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:124464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.776 [2024-07-23 01:49:18.769240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.776 [2024-07-23 01:49:18.769255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:124480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.776 [2024-07-23 01:49:18.769268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.776 [2024-07-23 01:49:18.769283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:124488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.776 [2024-07-23 01:49:18.769296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.776 [2024-07-23 01:49:18.769310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:124520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.776 [2024-07-23 01:49:18.769322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:16.776 [2024-07-23 01:49:18.769337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:123944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.776 [2024-07-23 01:49:18.769354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.776 [2024-07-23 01:49:18.769369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:123952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.776 [2024-07-23 01:49:18.769382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.776 [2024-07-23 01:49:18.769397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:124032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.776 [2024-07-23 01:49:18.769411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.776 [2024-07-23 01:49:18.769425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:124040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.776 [2024-07-23 01:49:18.769438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.776 [2024-07-23 01:49:18.769452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:124048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.776 [2024-07-23 01:49:18.769465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.776 [2024-07-23 01:49:18.769480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:124056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.776 [2024-07-23 01:49:18.769493] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.776 [2024-07-23 01:49:18.769507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:124080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.776 [2024-07-23 01:49:18.769520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.776 [2024-07-23 01:49:18.769534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:124104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.776 [2024-07-23 01:49:18.769547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.776 [2024-07-23 01:49:18.769562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:124544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.776 [2024-07-23 01:49:18.769575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.776 [2024-07-23 01:49:18.769589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:124552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.776 [2024-07-23 01:49:18.769602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.776 [2024-07-23 01:49:18.769639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:124560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.776 [2024-07-23 01:49:18.769656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.776 [2024-07-23 01:49:18.769671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 
lba:124568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.776 [2024-07-23 01:49:18.769685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.776 [2024-07-23 01:49:18.769700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:124576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.777 [2024-07-23 01:49:18.769713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.777 [2024-07-23 01:49:18.769732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:124584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.777 [2024-07-23 01:49:18.769746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.777 [2024-07-23 01:49:18.769761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:124592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.777 [2024-07-23 01:49:18.769775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.777 [2024-07-23 01:49:18.769790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:124600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.777 [2024-07-23 01:49:18.769803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.777 [2024-07-23 01:49:18.769818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:124608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.777 [2024-07-23 01:49:18.769831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:16.777 [2024-07-23 01:49:18.769846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:124616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.777 [2024-07-23 01:49:18.769860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.777 [2024-07-23 01:49:18.769875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:124624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.777 [2024-07-23 01:49:18.769888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.777 [2024-07-23 01:49:18.769903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:124632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.777 [2024-07-23 01:49:18.769916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.777 [2024-07-23 01:49:18.769947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:124640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.777 [2024-07-23 01:49:18.769960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.777 [2024-07-23 01:49:18.769974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:124144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.777 [2024-07-23 01:49:18.769987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.777 [2024-07-23 01:49:18.770002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:124168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.777 [2024-07-23 01:49:18.770015] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.777 [2024-07-23 01:49:18.770029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:124184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.777 [2024-07-23 01:49:18.770043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.777 [2024-07-23 01:49:18.770057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:124200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.777 [2024-07-23 01:49:18.770070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.777 [2024-07-23 01:49:18.770085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:124216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.777 [2024-07-23 01:49:18.770101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.777 [2024-07-23 01:49:18.770116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:124224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.777 [2024-07-23 01:49:18.770131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.777 [2024-07-23 01:49:18.770145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:124232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.777 [2024-07-23 01:49:18.770159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.777 [2024-07-23 01:49:18.770173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 
lba:124240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.777 [2024-07-23 01:49:18.770186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.777 [2024-07-23 01:49:18.770201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:124648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.777 [2024-07-23 01:49:18.770214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.777 [2024-07-23 01:49:18.770228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:124656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.777 [2024-07-23 01:49:18.770242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.777 [2024-07-23 01:49:18.770256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:124664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.777 [2024-07-23 01:49:18.770270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.777 [2024-07-23 01:49:18.770284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:124672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.777 [2024-07-23 01:49:18.770298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.777 [2024-07-23 01:49:18.770312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:124680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.777 [2024-07-23 01:49:18.770325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.777 
[2024-07-23 01:49:18.770340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:124688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.777 [2024-07-23 01:49:18.770354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.777 [2024-07-23 01:49:18.770368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:124696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.777 [2024-07-23 01:49:18.770381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.777 [2024-07-23 01:49:18.770396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:124704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.777 [2024-07-23 01:49:18.770409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.777 [2024-07-23 01:49:18.770423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:124712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.777 [2024-07-23 01:49:18.770437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.777 [2024-07-23 01:49:18.770455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:124720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.777 [2024-07-23 01:49:18.770468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.777 [2024-07-23 01:49:18.770483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:124728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.777 [2024-07-23 01:49:18.770496] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.777 [2024-07-23 01:49:18.770510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:124736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.777 [2024-07-23 01:49:18.770523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.777 [2024-07-23 01:49:18.770537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:124744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.777 [2024-07-23 01:49:18.770551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.777 [2024-07-23 01:49:18.770566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:124752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.777 [2024-07-23 01:49:18.770579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.777 [2024-07-23 01:49:18.770593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:124760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.777 [2024-07-23 01:49:18.770606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.777 [2024-07-23 01:49:18.770646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:124768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.777 [2024-07-23 01:49:18.770663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.777 [2024-07-23 01:49:18.770678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 
lba:124776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.777 [2024-07-23 01:49:18.770692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.777 [2024-07-23 01:49:18.770707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:124784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.777 [2024-07-23 01:49:18.770721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.777 [2024-07-23 01:49:18.770736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:124792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.777 [2024-07-23 01:49:18.770750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.777 [2024-07-23 01:49:18.770765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:124800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.777 [2024-07-23 01:49:18.770779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.777 [2024-07-23 01:49:18.770794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:124808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.777 [2024-07-23 01:49:18.770807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.777 [2024-07-23 01:49:18.770821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:124816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.777 [2024-07-23 01:49:18.770839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.778 
[2024-07-23 01:49:18.770854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:124824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.778 [2024-07-23 01:49:18.770869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.778 [2024-07-23 01:49:18.770883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:124832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.778 [2024-07-23 01:49:18.770898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.778 [2024-07-23 01:49:18.770931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:124840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.778 [2024-07-23 01:49:18.770946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.778 [2024-07-23 01:49:18.770962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:124848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.778 [2024-07-23 01:49:18.770990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.778 [2024-07-23 01:49:18.771006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:124856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.778 [2024-07-23 01:49:18.771021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.778 [2024-07-23 01:49:18.771035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:124864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.778 [2024-07-23 01:49:18.771049] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.778 [2024-07-23 01:49:18.771064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:124872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.778 [2024-07-23 01:49:18.771077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.778 [2024-07-23 01:49:18.771099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:124880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.778 [2024-07-23 01:49:18.771114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.778 [2024-07-23 01:49:18.771129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:124888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.778 [2024-07-23 01:49:18.771142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.778 [2024-07-23 01:49:18.771157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:124896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.778 [2024-07-23 01:49:18.771170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.778 [2024-07-23 01:49:18.771185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:124904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.778 [2024-07-23 01:49:18.771198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.778 [2024-07-23 01:49:18.771213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 
lba:124272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.778 [2024-07-23 01:49:18.771226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.778 [2024-07-23 01:49:18.771245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:124288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.778 [2024-07-23 01:49:18.771260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.778 [2024-07-23 01:49:18.771275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:124312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.778 [2024-07-23 01:49:18.771288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.778 [2024-07-23 01:49:18.771303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:124328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.778 [2024-07-23 01:49:18.771316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.778 [2024-07-23 01:49:18.771331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:124344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.778 [2024-07-23 01:49:18.771345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.778 [2024-07-23 01:49:18.771360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:124368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.778 [2024-07-23 01:49:18.771373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.778 
[2024-07-23 01:49:18.771388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:124392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.778 [2024-07-23 01:49:18.771401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.778 [2024-07-23 01:49:18.771416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:124408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.778 [2024-07-23 01:49:18.771430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.778 [2024-07-23 01:49:18.771445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:124912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.778 [2024-07-23 01:49:18.771459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.778 [2024-07-23 01:49:18.771474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:124920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.778 [2024-07-23 01:49:18.771487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.778 [2024-07-23 01:49:18.771502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:124928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.778 [2024-07-23 01:49:18.771515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.778 [2024-07-23 01:49:18.771530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:124936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.778 [2024-07-23 01:49:18.771543] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.778 [2024-07-23 01:49:18.771561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:124944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.778 [2024-07-23 01:49:18.771575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.778 [2024-07-23 01:49:18.771590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:124952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.778 [2024-07-23 01:49:18.771607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.778 [2024-07-23 01:49:18.771648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:124960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.778 [2024-07-23 01:49:18.771665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.778 [2024-07-23 01:49:18.771681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:124968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.778 [2024-07-23 01:49:18.771696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.778 [2024-07-23 01:49:18.771712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:124976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.778 [2024-07-23 01:49:18.771727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.778 [2024-07-23 01:49:18.771742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 
lba:124984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.778 [2024-07-23 01:49:18.771756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.778 [2024-07-23 01:49:18.771772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:124992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.778 [2024-07-23 01:49:18.771786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.778 [2024-07-23 01:49:18.771801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:125000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.778 [2024-07-23 01:49:18.771815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.778 [2024-07-23 01:49:18.771831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:125008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.778 [2024-07-23 01:49:18.771844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.778 [2024-07-23 01:49:18.771861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:125016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.778 [2024-07-23 01:49:18.771875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.778 [2024-07-23 01:49:18.771891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:125024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.778 [2024-07-23 01:49:18.771905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.778 
[2024-07-23 01:49:18.771920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:125032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.778 [2024-07-23 01:49:18.771949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.778 [2024-07-23 01:49:18.771964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:124440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.778 [2024-07-23 01:49:18.771978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.778 [2024-07-23 01:49:18.771993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:124448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.778 [2024-07-23 01:49:18.772006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.778 [2024-07-23 01:49:18.772021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:124472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.778 [2024-07-23 01:49:18.772038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.779 [2024-07-23 01:49:18.772053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:124496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.779 [2024-07-23 01:49:18.772067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.779 [2024-07-23 01:49:18.772081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:124504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.779 [2024-07-23 01:49:18.772094] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.779 [2024-07-23 01:49:18.772109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:124512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.779 [2024-07-23 01:49:18.772123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.779 [2024-07-23 01:49:18.772138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:124528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.779 [2024-07-23 01:49:18.772151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.779 [2024-07-23 01:49:18.772165] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b4f40 is same with the state(5) to be set 00:28:16.779 [2024-07-23 01:49:18.772181] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:16.779 [2024-07-23 01:49:18.772193] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:16.779 [2024-07-23 01:49:18.772204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:124536 len:8 PRP1 0x0 PRP2 0x0 00:28:16.779 [2024-07-23 01:49:18.772217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.779 [2024-07-23 01:49:18.772278] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x14b4f40 was disconnected and freed. reset controller. 
00:28:16.779 [2024-07-23 01:49:18.772298] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:28:16.779 [2024-07-23 01:49:18.772312] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:16.779 [2024-07-23 01:49:18.774385] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:16.779 [2024-07-23 01:49:18.774426] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1488790 (9): Bad file descriptor 00:28:16.779 [2024-07-23 01:49:18.898896] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:28:16.779 [2024-07-23 01:49:23.255341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:110128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.779 [2024-07-23 01:49:23.255379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.779 [2024-07-23 01:49:23.255409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:110144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.779 [2024-07-23 01:49:23.255425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.779 [2024-07-23 01:49:23.255441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:110152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.779 [2024-07-23 01:49:23.255456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.779 [2024-07-23 01:49:23.255471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:110160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.779 [2024-07-23 01:49:23.255489] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.779 [2024-07-23 01:49:23.255505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:110184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.779 [2024-07-23 01:49:23.255519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.779 [2024-07-23 01:49:23.255533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:110200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.779 [2024-07-23 01:49:23.255545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.779 [2024-07-23 01:49:23.255560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:110216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.779 [2024-07-23 01:49:23.255574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.779 [2024-07-23 01:49:23.255588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:109624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.779 [2024-07-23 01:49:23.255602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.779 [2024-07-23 01:49:23.255638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:109632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.779 [2024-07-23 01:49:23.255655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.779 [2024-07-23 01:49:23.255670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:69 nsid:1 lba:109688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.779 [2024-07-23 01:49:23.255684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.779 [2024-07-23 01:49:23.255699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:109696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.779 [2024-07-23 01:49:23.255712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.779 [2024-07-23 01:49:23.255728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:109704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.779 [2024-07-23 01:49:23.255741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.779 [2024-07-23 01:49:23.255756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:109712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.779 [2024-07-23 01:49:23.255770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.779 [2024-07-23 01:49:23.255785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:109720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.779 [2024-07-23 01:49:23.255798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.779 [2024-07-23 01:49:23.255813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:109736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.779 [2024-07-23 01:49:23.255827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:28:16.779 [2024-07-23 01:49:23.255842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:110272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.779 [2024-07-23 01:49:23.255856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.779 [2024-07-23 01:49:23.255875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:110304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.779 [2024-07-23 01:49:23.255891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.779 [2024-07-23 01:49:23.255906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:110312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.779 [2024-07-23 01:49:23.255934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.779 [2024-07-23 01:49:23.255950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:110320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.779 [2024-07-23 01:49:23.255963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.779 [2024-07-23 01:49:23.255978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:110328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.779 [2024-07-23 01:49:23.255991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.779 [2024-07-23 01:49:23.256006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:110336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.779 [2024-07-23 01:49:23.256019] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.779 [2024-07-23 01:49:23.256034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:110344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.779 [2024-07-23 01:49:23.256047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.779 [2024-07-23 01:49:23.256061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:110352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.779 [2024-07-23 01:49:23.256074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.779 [2024-07-23 01:49:23.256089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:110360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.779 [2024-07-23 01:49:23.256102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.779 [2024-07-23 01:49:23.256116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:110368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:16.779 [2024-07-23 01:49:23.256129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.779 [2024-07-23 01:49:23.256143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:110376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.779 [2024-07-23 01:49:23.256157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.779 [2024-07-23 01:49:23.256172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:90 nsid:1 lba:110384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.779 [2024-07-23 01:49:23.256186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.779 [2024-07-23 01:49:23.256200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:109744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.779 [2024-07-23 01:49:23.256213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.779 [2024-07-23 01:49:23.256228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:109752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.779 [2024-07-23 01:49:23.256249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.779 [2024-07-23 01:49:23.256264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:109768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.780 [2024-07-23 01:49:23.256277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.780 [2024-07-23 01:49:23.256293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:109784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.780 [2024-07-23 01:49:23.256308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.780 [2024-07-23 01:49:23.256322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:109792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.780 [2024-07-23 01:49:23.256337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:16.780 [2024-07-23 01:49:23.256352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:109808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:16.780 [2024-07-23 01:49:23.256368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated NOTICE command/completion pairs omitted: the remaining in-flight READ/WRITE commands on qid:1 (lba range ~109808-110872) were all completed with ABORTED - SQ DELETION (00/08) while the queue pair was torn down for controller reset ...]
00:28:16.783 [2024-07-23 01:49:23.259252] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1494cb0 is same with the state(5) to be set 00:28:16.783 [2024-07-23 01:49:23.259268] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:16.783 [2024-07-23 01:49:23.259279] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:16.783 [2024-07-23 01:49:23.259291] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:110296 len:8 PRP1 0x0 PRP2 0x0 00:28:16.783 [2024-07-23 01:49:23.259304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.783 [2024-07-23 01:49:23.259364] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1494cb0 was disconnected and freed. reset controller. 00:28:16.783 [2024-07-23 01:49:23.259382] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:28:16.783 [2024-07-23 01:49:23.259414] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:16.783 [2024-07-23 01:49:23.259448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.783 [2024-07-23 01:49:23.259463] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:16.783 [2024-07-23 01:49:23.259477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.783 [2024-07-23 01:49:23.259492] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:16.783 [2024-07-23 01:49:23.259505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.783 [2024-07-23 01:49:23.259519] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:16.783 [2024-07-23 01:49:23.259532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:16.783 
[2024-07-23 01:49:23.259546] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:16.783 [2024-07-23 01:49:23.259583] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1488790 (9): Bad file descriptor 00:28:16.783 [2024-07-23 01:49:23.261753] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:16.783 [2024-07-23 01:49:23.334398] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:28:16.783 00:28:16.783 Latency(us) 00:28:16.783 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:16.783 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:16.783 Verification LBA range: start 0x0 length 0x4000 00:28:16.783 NVMe0n1 : 15.01 12801.72 50.01 883.64 0.00 9335.45 879.88 15146.10 00:28:16.783 =================================================================================================================== 00:28:16.783 Total : 12801.72 50.01 883.64 0.00 9335.45 879.88 15146.10 00:28:16.783 Received shutdown signal, test time was about 15.000000 seconds 00:28:16.783 00:28:16.783 Latency(us) 00:28:16.783 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:16.783 =================================================================================================================== 00:28:16.783 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:16.783 01:49:29 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:28:16.783 01:49:29 -- host/failover.sh@65 -- # count=3 00:28:16.783 01:49:29 -- host/failover.sh@67 -- # (( count != 3 )) 00:28:16.783 01:49:29 -- host/failover.sh@73 -- # bdevperf_pid=3886167 00:28:16.783 01:49:29 -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:28:16.783 01:49:29 -- 
host/failover.sh@75 -- # waitforlisten 3886167 /var/tmp/bdevperf.sock 00:28:16.783 01:49:29 -- common/autotest_common.sh@819 -- # '[' -z 3886167 ']' 00:28:16.783 01:49:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:16.783 01:49:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:16.783 01:49:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:16.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:16.783 01:49:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:16.783 01:49:29 -- common/autotest_common.sh@10 -- # set +x 00:28:17.097 01:49:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:17.097 01:49:30 -- common/autotest_common.sh@852 -- # return 0 00:28:17.097 01:49:30 -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:17.355 [2024-07-23 01:49:30.386667] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:17.355 01:49:30 -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:28:17.614 [2024-07-23 01:49:30.643421] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:28:17.614 01:49:30 -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:18.184 NVMe0n1 00:28:18.184 01:49:31 -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 
10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:18.442 00:28:18.442 01:49:31 -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:18.700 00:28:18.700 01:49:31 -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:18.700 01:49:31 -- host/failover.sh@82 -- # grep -q NVMe0 00:28:18.958 01:49:31 -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:19.216 01:49:32 -- host/failover.sh@87 -- # sleep 3 00:28:22.506 01:49:35 -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:22.506 01:49:35 -- host/failover.sh@88 -- # grep -q NVMe0 00:28:22.506 01:49:35 -- host/failover.sh@90 -- # run_test_pid=3886990 00:28:22.506 01:49:35 -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:22.506 01:49:35 -- host/failover.sh@92 -- # wait 3886990 00:28:23.882 0 00:28:23.882 01:49:36 -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:23.882 [2024-07-23 01:49:29.222015] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:28:23.882 [2024-07-23 01:49:29.222113] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3886167 ] 00:28:23.882 EAL: No free 2048 kB hugepages reported on node 1 00:28:23.882 [2024-07-23 01:49:29.281562] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:23.882 [2024-07-23 01:49:29.363946] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:23.882 [2024-07-23 01:49:32.143806] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:28:23.882 [2024-07-23 01:49:32.143886] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:23.882 [2024-07-23 01:49:32.143909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.882 [2024-07-23 01:49:32.143925] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:23.882 [2024-07-23 01:49:32.143939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.882 [2024-07-23 01:49:32.143953] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:23.882 [2024-07-23 01:49:32.143966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.882 [2024-07-23 01:49:32.143981] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:23.882 [2024-07-23 01:49:32.143995] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:23.882 [2024-07-23 01:49:32.144010] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:23.882 [2024-07-23 01:49:32.144048] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:23.882 [2024-07-23 01:49:32.144080] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1baf790 (9): Bad file descriptor 00:28:23.882 [2024-07-23 01:49:32.246189] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:28:23.882 Running I/O for 1 seconds... 00:28:23.882 00:28:23.882 Latency(us) 00:28:23.882 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:23.882 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:23.882 Verification LBA range: start 0x0 length 0x4000 00:28:23.882 NVMe0n1 : 1.01 9265.02 36.19 0.00 0.00 13742.76 2087.44 19418.07 00:28:23.882 =================================================================================================================== 00:28:23.882 Total : 9265.02 36.19 0.00 0.00 13742.76 2087.44 19418.07 00:28:23.882 01:49:36 -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:23.882 01:49:36 -- host/failover.sh@95 -- # grep -q NVMe0 00:28:23.882 01:49:36 -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:24.140 01:49:37 -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:24.140 01:49:37 -- host/failover.sh@99 -- # grep -q NVMe0 00:28:24.398 01:49:37 -- host/failover.sh@100 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:24.656 01:49:37 -- host/failover.sh@101 -- # sleep 3 00:28:27.941 01:49:40 -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:27.941 01:49:40 -- host/failover.sh@103 -- # grep -q NVMe0 00:28:27.941 01:49:40 -- host/failover.sh@108 -- # killprocess 3886167 00:28:27.941 01:49:40 -- common/autotest_common.sh@926 -- # '[' -z 3886167 ']' 00:28:27.941 01:49:40 -- common/autotest_common.sh@930 -- # kill -0 3886167 00:28:27.941 01:49:40 -- common/autotest_common.sh@931 -- # uname 00:28:27.941 01:49:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:27.941 01:49:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3886167 00:28:27.941 01:49:40 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:27.941 01:49:40 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:27.941 01:49:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3886167' 00:28:27.941 killing process with pid 3886167 00:28:27.941 01:49:40 -- common/autotest_common.sh@945 -- # kill 3886167 00:28:27.941 01:49:40 -- common/autotest_common.sh@950 -- # wait 3886167 00:28:28.198 01:49:41 -- host/failover.sh@110 -- # sync 00:28:28.198 01:49:41 -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:28.456 01:49:41 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:28:28.456 01:49:41 -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:28.456 01:49:41 -- host/failover.sh@116 -- # nvmftestfini 00:28:28.456 01:49:41 -- nvmf/common.sh@476 -- # nvmfcleanup 00:28:28.456 01:49:41 -- 
nvmf/common.sh@116 -- # sync 00:28:28.456 01:49:41 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:28:28.456 01:49:41 -- nvmf/common.sh@119 -- # set +e 00:28:28.456 01:49:41 -- nvmf/common.sh@120 -- # for i in {1..20} 00:28:28.456 01:49:41 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:28:28.456 rmmod nvme_tcp 00:28:28.456 rmmod nvme_fabrics 00:28:28.456 rmmod nvme_keyring 00:28:28.456 01:49:41 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:28:28.456 01:49:41 -- nvmf/common.sh@123 -- # set -e 00:28:28.456 01:49:41 -- nvmf/common.sh@124 -- # return 0 00:28:28.456 01:49:41 -- nvmf/common.sh@477 -- # '[' -n 3883813 ']' 00:28:28.456 01:49:41 -- nvmf/common.sh@478 -- # killprocess 3883813 00:28:28.456 01:49:41 -- common/autotest_common.sh@926 -- # '[' -z 3883813 ']' 00:28:28.456 01:49:41 -- common/autotest_common.sh@930 -- # kill -0 3883813 00:28:28.456 01:49:41 -- common/autotest_common.sh@931 -- # uname 00:28:28.456 01:49:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:28.456 01:49:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3883813 00:28:28.456 01:49:41 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:28:28.456 01:49:41 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:28:28.456 01:49:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3883813' 00:28:28.456 killing process with pid 3883813 00:28:28.456 01:49:41 -- common/autotest_common.sh@945 -- # kill 3883813 00:28:28.456 01:49:41 -- common/autotest_common.sh@950 -- # wait 3883813 00:28:28.713 01:49:41 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:28:28.713 01:49:41 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:28:28.713 01:49:41 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:28:28.713 01:49:41 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:28.713 01:49:41 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:28:28.713 01:49:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:28:28.713 01:49:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:28.713 01:49:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:31.248 01:49:43 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:28:31.248 00:28:31.248 real 0m36.486s 00:28:31.248 user 2m8.426s 00:28:31.248 sys 0m6.240s 00:28:31.248 01:49:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:31.248 01:49:43 -- common/autotest_common.sh@10 -- # set +x 00:28:31.248 ************************************ 00:28:31.248 END TEST nvmf_failover 00:28:31.248 ************************************ 00:28:31.248 01:49:43 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:28:31.248 01:49:43 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:28:31.248 01:49:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:31.248 01:49:43 -- common/autotest_common.sh@10 -- # set +x 00:28:31.248 ************************************ 00:28:31.248 START TEST nvmf_discovery 00:28:31.248 ************************************ 00:28:31.248 01:49:43 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:28:31.248 * Looking for test storage... 
00:28:31.248 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:31.248 01:49:43 -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:31.248 01:49:43 -- nvmf/common.sh@7 -- # uname -s 00:28:31.248 01:49:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:31.248 01:49:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:31.248 01:49:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:31.248 01:49:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:31.248 01:49:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:31.248 01:49:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:31.248 01:49:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:31.248 01:49:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:31.248 01:49:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:31.248 01:49:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:31.248 01:49:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:31.248 01:49:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:31.248 01:49:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:31.248 01:49:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:31.248 01:49:43 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:31.248 01:49:43 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:31.248 01:49:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:31.248 01:49:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:31.248 01:49:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:31.248 01:49:43 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:31.248 01:49:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:31.249 01:49:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:31.249 01:49:43 -- paths/export.sh@5 -- # export PATH 00:28:31.249 01:49:43 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:31.249 01:49:43 -- nvmf/common.sh@46 -- # : 0 00:28:31.249 01:49:43 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:31.249 01:49:43 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:31.249 01:49:43 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:31.249 01:49:43 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:31.249 01:49:43 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:31.249 01:49:43 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:31.249 01:49:43 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:31.249 01:49:43 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:31.249 01:49:43 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:28:31.249 01:49:43 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:28:31.249 01:49:43 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:28:31.249 01:49:43 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:28:31.249 01:49:43 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:28:31.249 01:49:43 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:28:31.249 01:49:43 -- host/discovery.sh@25 -- # nvmftestinit 00:28:31.249 01:49:43 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:28:31.249 01:49:43 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:31.249 01:49:43 -- nvmf/common.sh@436 -- # prepare_net_devs 00:28:31.249 01:49:43 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:28:31.249 
01:49:43 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:28:31.249 01:49:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:31.249 01:49:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:31.249 01:49:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:31.249 01:49:43 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:28:31.249 01:49:43 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:28:31.249 01:49:43 -- nvmf/common.sh@284 -- # xtrace_disable 00:28:31.249 01:49:43 -- common/autotest_common.sh@10 -- # set +x 00:28:32.624 01:49:45 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:32.624 01:49:45 -- nvmf/common.sh@290 -- # pci_devs=() 00:28:32.624 01:49:45 -- nvmf/common.sh@290 -- # local -a pci_devs 00:28:32.624 01:49:45 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:28:32.624 01:49:45 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:28:32.624 01:49:45 -- nvmf/common.sh@292 -- # pci_drivers=() 00:28:32.624 01:49:45 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:28:32.624 01:49:45 -- nvmf/common.sh@294 -- # net_devs=() 00:28:32.624 01:49:45 -- nvmf/common.sh@294 -- # local -ga net_devs 00:28:32.624 01:49:45 -- nvmf/common.sh@295 -- # e810=() 00:28:32.624 01:49:45 -- nvmf/common.sh@295 -- # local -ga e810 00:28:32.624 01:49:45 -- nvmf/common.sh@296 -- # x722=() 00:28:32.624 01:49:45 -- nvmf/common.sh@296 -- # local -ga x722 00:28:32.624 01:49:45 -- nvmf/common.sh@297 -- # mlx=() 00:28:32.624 01:49:45 -- nvmf/common.sh@297 -- # local -ga mlx 00:28:32.624 01:49:45 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:32.624 01:49:45 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:32.624 01:49:45 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:32.624 01:49:45 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:32.624 01:49:45 -- nvmf/common.sh@307 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:32.624 01:49:45 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:32.624 01:49:45 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:32.624 01:49:45 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:32.624 01:49:45 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:32.624 01:49:45 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:32.624 01:49:45 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:32.624 01:49:45 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:28:32.624 01:49:45 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:28:32.624 01:49:45 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:28:32.624 01:49:45 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:28:32.624 01:49:45 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:28:32.624 01:49:45 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:28:32.624 01:49:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:32.624 01:49:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:32.624 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:32.624 01:49:45 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:32.624 01:49:45 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:32.624 01:49:45 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:32.624 01:49:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:32.624 01:49:45 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:32.624 01:49:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:32.624 01:49:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:32.625 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:32.625 01:49:45 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:32.625 01:49:45 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:32.625 01:49:45 -- 
nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:32.625 01:49:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:32.625 01:49:45 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:32.625 01:49:45 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:28:32.625 01:49:45 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:28:32.625 01:49:45 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:28:32.625 01:49:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:32.625 01:49:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:32.625 01:49:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:32.625 01:49:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:32.625 01:49:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:32.625 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:32.625 01:49:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:32.625 01:49:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:32.625 01:49:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:32.625 01:49:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:32.625 01:49:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:32.625 01:49:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:32.625 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:32.625 01:49:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:32.625 01:49:45 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:28:32.625 01:49:45 -- nvmf/common.sh@402 -- # is_hw=yes 00:28:32.625 01:49:45 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:28:32.625 01:49:45 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:28:32.625 01:49:45 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:28:32.625 01:49:45 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:32.625 01:49:45 -- nvmf/common.sh@229 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:32.625 01:49:45 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:32.625 01:49:45 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:28:32.625 01:49:45 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:32.625 01:49:45 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:32.625 01:49:45 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:28:32.625 01:49:45 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:32.625 01:49:45 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:32.625 01:49:45 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:28:32.625 01:49:45 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:28:32.625 01:49:45 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:28:32.625 01:49:45 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:32.625 01:49:45 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:32.625 01:49:45 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:32.884 01:49:45 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:28:32.884 01:49:45 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:32.884 01:49:45 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:32.884 01:49:45 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:32.884 01:49:45 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:28:32.884 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:32.884 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.118 ms 00:28:32.884 00:28:32.884 --- 10.0.0.2 ping statistics --- 00:28:32.884 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:32.884 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:28:32.884 01:49:45 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:32.884 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:32.884 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:28:32.884 00:28:32.884 --- 10.0.0.1 ping statistics --- 00:28:32.884 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:32.884 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:28:32.884 01:49:45 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:32.884 01:49:45 -- nvmf/common.sh@410 -- # return 0 00:28:32.884 01:49:45 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:28:32.884 01:49:45 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:32.884 01:49:45 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:28:32.884 01:49:45 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:28:32.884 01:49:45 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:32.884 01:49:45 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:28:32.884 01:49:45 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:28:32.884 01:49:45 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:28:32.884 01:49:45 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:32.884 01:49:45 -- common/autotest_common.sh@712 -- # xtrace_disable 00:28:32.884 01:49:45 -- common/autotest_common.sh@10 -- # set +x 00:28:32.884 01:49:45 -- nvmf/common.sh@469 -- # nvmfpid=3889615 00:28:32.884 01:49:45 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:28:32.884 01:49:45 -- nvmf/common.sh@470 -- # waitforlisten 3889615 00:28:32.884 01:49:45 -- common/autotest_common.sh@819 
-- # '[' -z 3889615 ']' 00:28:32.884 01:49:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:32.884 01:49:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:32.884 01:49:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:32.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:32.884 01:49:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:32.884 01:49:45 -- common/autotest_common.sh@10 -- # set +x 00:28:32.884 [2024-07-23 01:49:45.861757] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:28:32.884 [2024-07-23 01:49:45.861841] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:32.884 EAL: No free 2048 kB hugepages reported on node 1 00:28:32.884 [2024-07-23 01:49:45.932652] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:33.142 [2024-07-23 01:49:46.021473] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:33.142 [2024-07-23 01:49:46.021672] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:33.142 [2024-07-23 01:49:46.021696] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:33.142 [2024-07-23 01:49:46.021712] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:33.142 [2024-07-23 01:49:46.021743] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:28:33.708 01:49:46 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:28:33.708 01:49:46 -- common/autotest_common.sh@852 -- # return 0
00:28:33.708 01:49:46 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:28:33.708 01:49:46 -- common/autotest_common.sh@718 -- # xtrace_disable
00:28:33.708 01:49:46 -- common/autotest_common.sh@10 -- # set +x
00:28:33.708 01:49:46 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:28:33.708 01:49:46 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:28:33.708 01:49:46 -- common/autotest_common.sh@551 -- # xtrace_disable
00:28:33.708 01:49:46 -- common/autotest_common.sh@10 -- # set +x
00:28:33.965 [2024-07-23 01:49:46.807780] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:28:33.965 01:49:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:28:33.965 01:49:46 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
00:28:33.965 01:49:46 -- common/autotest_common.sh@551 -- # xtrace_disable
00:28:33.965 01:49:46 -- common/autotest_common.sh@10 -- # set +x
00:28:33.965 [2024-07-23 01:49:46.815928] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 ***
00:28:33.965 01:49:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:28:33.965 01:49:46 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512
00:28:33.965 01:49:46 -- common/autotest_common.sh@551 -- # xtrace_disable
00:28:33.965 01:49:46 -- common/autotest_common.sh@10 -- # set +x
00:28:33.965 null0
00:28:33.965 01:49:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:28:33.965 01:49:46 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512
00:28:33.966 01:49:46 -- common/autotest_common.sh@551 -- # xtrace_disable
00:28:33.966 01:49:46 -- common/autotest_common.sh@10 -- # set +x
00:28:33.966 null1
00:28:33.966 01:49:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:28:33.966 01:49:46 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine
00:28:33.966 01:49:46 -- common/autotest_common.sh@551 -- # xtrace_disable
00:28:33.966 01:49:46 -- common/autotest_common.sh@10 -- # set +x
00:28:33.966 01:49:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:28:33.966 01:49:46 -- host/discovery.sh@45 -- # hostpid=3889771
00:28:33.966 01:49:46 -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock
00:28:33.966 01:49:46 -- host/discovery.sh@46 -- # waitforlisten 3889771 /tmp/host.sock
00:28:33.966 01:49:46 -- common/autotest_common.sh@819 -- # '[' -z 3889771 ']'
00:28:33.966 01:49:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock
00:28:33.966 01:49:46 -- common/autotest_common.sh@824 -- # local max_retries=100
00:28:33.966 01:49:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...'
00:28:33.966 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...
00:28:33.966 01:49:46 -- common/autotest_common.sh@828 -- # xtrace_disable
00:28:33.966 01:49:46 -- common/autotest_common.sh@10 -- # set +x
00:28:33.966 [2024-07-23 01:49:46.885492] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
00:28:33.966 [2024-07-23 01:49:46.885556] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3889771 ]
00:28:33.966 EAL: No free 2048 kB hugepages reported on node 1
00:28:33.966 [2024-07-23 01:49:46.946641] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:33.966 [2024-07-23 01:49:47.035097] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:28:33.966 [2024-07-23 01:49:47.035283] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:28:34.901 01:49:47 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:28:34.901 01:49:47 -- common/autotest_common.sh@852 -- # return 0
00:28:34.901 01:49:47 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:28:34.901 01:49:47 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme
00:28:34.901 01:49:47 -- common/autotest_common.sh@551 -- # xtrace_disable
00:28:34.901 01:49:47 -- common/autotest_common.sh@10 -- # set +x
00:28:34.901 01:49:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:28:34.901 01:49:47 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
00:28:34.901 01:49:47 -- common/autotest_common.sh@551 -- # xtrace_disable
00:28:34.901 01:49:47 -- common/autotest_common.sh@10 -- # set +x
00:28:34.901 01:49:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:28:34.901 01:49:47 -- host/discovery.sh@72 -- # notify_id=0
00:28:34.901 01:49:47 -- host/discovery.sh@78 -- # get_subsystem_names
00:28:34.901 01:49:47 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:28:34.901 01:49:47 -- host/discovery.sh@59 -- # jq -r '.[].name'
00:28:34.901 01:49:47 -- common/autotest_common.sh@551 -- # xtrace_disable
00:28:34.901 01:49:47 -- common/autotest_common.sh@10 -- # set +x
00:28:34.901 01:49:47 -- host/discovery.sh@59 -- # sort
00:28:34.901 01:49:47 -- host/discovery.sh@59 -- # xargs
00:28:34.901 01:49:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:28:34.901 01:49:47 -- host/discovery.sh@78 -- # [[ '' == '' ]]
00:28:34.901 01:49:47 -- host/discovery.sh@79 -- # get_bdev_list
00:28:34.901 01:49:47 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:28:34.901 01:49:47 -- host/discovery.sh@55 -- # jq -r '.[].name'
00:28:34.901 01:49:47 -- common/autotest_common.sh@551 -- # xtrace_disable
00:28:34.901 01:49:47 -- host/discovery.sh@55 -- # sort
00:28:34.901 01:49:47 -- common/autotest_common.sh@10 -- # set +x
00:28:34.901 01:49:47 -- host/discovery.sh@55 -- # xargs
00:28:34.901 01:49:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:28:34.901 01:49:47 -- host/discovery.sh@79 -- # [[ '' == '' ]]
00:28:34.901 01:49:47 -- host/discovery.sh@81 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
00:28:34.901 01:49:47 -- common/autotest_common.sh@551 -- # xtrace_disable
00:28:34.901 01:49:47 -- common/autotest_common.sh@10 -- # set +x
00:28:34.901 01:49:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:28:34.901 01:49:47 -- host/discovery.sh@82 -- # get_subsystem_names
00:28:34.901 01:49:47 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:28:34.901 01:49:47 -- host/discovery.sh@59 -- # jq -r '.[].name'
00:28:34.901 01:49:47 -- common/autotest_common.sh@551 -- # xtrace_disable
00:28:34.901 01:49:47 -- host/discovery.sh@59 -- # sort
00:28:34.901 01:49:47 -- common/autotest_common.sh@10 -- # set +x
00:28:34.901 01:49:47 -- host/discovery.sh@59 -- # xargs
00:28:34.901 01:49:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:28:34.901 01:49:47 -- host/discovery.sh@82 -- # [[ '' == '' ]]
00:28:34.901 01:49:47 -- host/discovery.sh@83 -- # get_bdev_list
00:28:34.901 01:49:47 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:28:34.901 01:49:47 -- host/discovery.sh@55 -- # jq -r '.[].name'
00:28:34.901 01:49:47 -- common/autotest_common.sh@551 -- # xtrace_disable
00:28:34.901 01:49:47 -- common/autotest_common.sh@10 -- # set +x
00:28:34.901 01:49:47 -- host/discovery.sh@55 -- # sort
00:28:34.902 01:49:47 -- host/discovery.sh@55 -- # xargs
00:28:34.902 01:49:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:28:35.160 01:49:48 -- host/discovery.sh@83 -- # [[ '' == '' ]]
00:28:35.160 01:49:48 -- host/discovery.sh@85 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
00:28:35.160 01:49:48 -- common/autotest_common.sh@551 -- # xtrace_disable
00:28:35.160 01:49:48 -- common/autotest_common.sh@10 -- # set +x
00:28:35.160 01:49:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:28:35.160 01:49:48 -- host/discovery.sh@86 -- # get_subsystem_names
00:28:35.160 01:49:48 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:28:35.160 01:49:48 -- host/discovery.sh@59 -- # jq -r '.[].name'
00:28:35.160 01:49:48 -- common/autotest_common.sh@551 -- # xtrace_disable
00:28:35.160 01:49:48 -- common/autotest_common.sh@10 -- # set +x
00:28:35.160 01:49:48 -- host/discovery.sh@59 -- # sort
00:28:35.160 01:49:48 -- host/discovery.sh@59 -- # xargs
00:28:35.160 01:49:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:28:35.160 01:49:48 -- host/discovery.sh@86 -- # [[ '' == '' ]]
00:28:35.160 01:49:48 -- host/discovery.sh@87 -- # get_bdev_list
00:28:35.160 01:49:48 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:28:35.160 01:49:48 -- common/autotest_common.sh@551 -- # xtrace_disable
00:28:35.160 01:49:48 -- host/discovery.sh@55 -- # jq -r '.[].name'
00:28:35.160 01:49:48 -- common/autotest_common.sh@10 -- # set +x
00:28:35.160 01:49:48 -- host/discovery.sh@55 -- # sort
00:28:35.160 01:49:48 -- host/discovery.sh@55 -- # xargs
00:28:35.160 01:49:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:28:35.160 01:49:48 -- host/discovery.sh@87 -- # [[ '' == '' ]]
00:28:35.160 01:49:48 -- host/discovery.sh@91 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:28:35.160 01:49:48 -- common/autotest_common.sh@551 -- # xtrace_disable
00:28:35.160 01:49:48 -- common/autotest_common.sh@10 -- # set +x
00:28:35.160 [2024-07-23 01:49:48.099475] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:28:35.160 01:49:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:28:35.160 01:49:48 -- host/discovery.sh@92 -- # get_subsystem_names
00:28:35.160 01:49:48 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:28:35.160 01:49:48 -- host/discovery.sh@59 -- # jq -r '.[].name'
00:28:35.160 01:49:48 -- common/autotest_common.sh@551 -- # xtrace_disable
00:28:35.160 01:49:48 -- common/autotest_common.sh@10 -- # set +x
00:28:35.160 01:49:48 -- host/discovery.sh@59 -- # sort
00:28:35.160 01:49:48 -- host/discovery.sh@59 -- # xargs
00:28:35.160 01:49:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:28:35.160 01:49:48 -- host/discovery.sh@92 -- # [[ '' == '' ]]
00:28:35.160 01:49:48 -- host/discovery.sh@93 -- # get_bdev_list
00:28:35.160 01:49:48 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:28:35.160 01:49:48 -- common/autotest_common.sh@551 -- # xtrace_disable
00:28:35.160 01:49:48 -- host/discovery.sh@55 -- # jq -r '.[].name'
00:28:35.160 01:49:48 -- common/autotest_common.sh@10 -- # set +x
00:28:35.160 01:49:48 -- host/discovery.sh@55 -- # sort
00:28:35.160 01:49:48 -- host/discovery.sh@55 -- # xargs
00:28:35.160 01:49:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:28:35.160 01:49:48 -- host/discovery.sh@93 -- # [[ '' == '' ]]
00:28:35.160 01:49:48 -- host/discovery.sh@94 -- # get_notification_count
00:28:35.160 01:49:48 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0
00:28:35.160 01:49:48 -- common/autotest_common.sh@551 -- # xtrace_disable
00:28:35.160 01:49:48 -- host/discovery.sh@74 -- # jq '. | length'
00:28:35.160 01:49:48 -- common/autotest_common.sh@10 -- # set +x
00:28:35.160 01:49:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:28:35.160 01:49:48 -- host/discovery.sh@74 -- # notification_count=0
00:28:35.160 01:49:48 -- host/discovery.sh@75 -- # notify_id=0
00:28:35.160 01:49:48 -- host/discovery.sh@95 -- # [[ 0 == 0 ]]
00:28:35.160 01:49:48 -- host/discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
00:28:35.160 01:49:48 -- common/autotest_common.sh@551 -- # xtrace_disable
00:28:35.160 01:49:48 -- common/autotest_common.sh@10 -- # set +x
00:28:35.160 01:49:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:28:35.160 01:49:48 -- host/discovery.sh@100 -- # sleep 1
00:28:36.098 [2024-07-23 01:49:48.877862] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached
00:28:36.098 [2024-07-23 01:49:48.877908] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected
00:28:36.098 [2024-07-23 01:49:48.877932] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:28:36.098 [2024-07-23 01:49:48.964215] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0
00:28:36.098 [2024-07-23 01:49:49.148537] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
00:28:36.098 [2024-07-23 01:49:49.148571] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again
00:28:36.358 01:49:49 -- host/discovery.sh@101 -- # get_subsystem_names
00:28:36.358 01:49:49 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:28:36.358 01:49:49 -- host/discovery.sh@59 -- # jq -r '.[].name'
00:28:36.358 01:49:49 -- common/autotest_common.sh@551 -- # xtrace_disable
00:28:36.358 01:49:49 -- common/autotest_common.sh@10 -- # set +x
00:28:36.358 01:49:49 -- host/discovery.sh@59 -- # sort
00:28:36.358 01:49:49 -- host/discovery.sh@59 -- # xargs
00:28:36.358 01:49:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:28:36.358 01:49:49 -- host/discovery.sh@101 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:36.358 01:49:49 -- host/discovery.sh@102 -- # get_bdev_list
00:28:36.359 01:49:49 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:28:36.359 01:49:49 -- host/discovery.sh@55 -- # jq -r '.[].name'
00:28:36.359 01:49:49 -- common/autotest_common.sh@551 -- # xtrace_disable
00:28:36.359 01:49:49 -- common/autotest_common.sh@10 -- # set +x
00:28:36.359 01:49:49 -- host/discovery.sh@55 -- # sort
00:28:36.359 01:49:49 -- host/discovery.sh@55 -- # xargs
00:28:36.359 01:49:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:28:36.359 01:49:49 -- host/discovery.sh@102 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]]
00:28:36.359 01:49:49 -- host/discovery.sh@103 -- # get_subsystem_paths nvme0
00:28:36.359 01:49:49 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:28:36.359 01:49:49 -- common/autotest_common.sh@551 -- # xtrace_disable
00:28:36.359 01:49:49 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:28:36.359 01:49:49 -- common/autotest_common.sh@10 -- # set +x
00:28:36.359 01:49:49 -- host/discovery.sh@63 -- # sort -n
00:28:36.359 01:49:49 -- host/discovery.sh@63 -- # xargs
00:28:36.359 01:49:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:28:36.359 01:49:49 -- host/discovery.sh@103 -- # [[ 4420 == \4\4\2\0 ]]
00:28:36.359 01:49:49 -- host/discovery.sh@104 -- # get_notification_count
00:28:36.359 01:49:49 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0
00:28:36.359 01:49:49 -- host/discovery.sh@74 -- # jq '. | length'
00:28:36.359 01:49:49 -- common/autotest_common.sh@551 -- # xtrace_disable
00:28:36.359 01:49:49 -- common/autotest_common.sh@10 -- # set +x
00:28:36.359 01:49:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:28:36.359 01:49:49 -- host/discovery.sh@74 -- # notification_count=1
00:28:36.359 01:49:49 -- host/discovery.sh@75 -- # notify_id=1
00:28:36.359 01:49:49 -- host/discovery.sh@105 -- # [[ 1 == 1 ]]
00:28:36.359 01:49:49 -- host/discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1
00:28:36.359 01:49:49 -- common/autotest_common.sh@551 -- # xtrace_disable
00:28:36.359 01:49:49 -- common/autotest_common.sh@10 -- # set +x
00:28:36.359 01:49:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:28:36.359 01:49:49 -- host/discovery.sh@109 -- # sleep 1
00:28:37.734 01:49:50 -- host/discovery.sh@110 -- # get_bdev_list
00:28:37.734 01:49:50 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:28:37.734 01:49:50 -- common/autotest_common.sh@551 -- # xtrace_disable
00:28:37.734 01:49:50 -- host/discovery.sh@55 -- # jq -r '.[].name'
00:28:37.734 01:49:50 -- common/autotest_common.sh@10 -- # set +x
00:28:37.734 01:49:50 -- host/discovery.sh@55 -- # sort
00:28:37.734 01:49:50 -- host/discovery.sh@55 -- # xargs
00:28:37.734 01:49:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:28:37.734 01:49:50 -- host/discovery.sh@110 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:28:37.734 01:49:50 -- host/discovery.sh@111 -- # get_notification_count
00:28:37.734 01:49:50 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1
00:28:37.734 01:49:50 -- host/discovery.sh@74 -- # jq '. | length'
00:28:37.734 01:49:50 -- common/autotest_common.sh@551 -- # xtrace_disable
00:28:37.734 01:49:50 -- common/autotest_common.sh@10 -- # set +x
00:28:37.734 01:49:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:28:37.734 01:49:50 -- host/discovery.sh@74 -- # notification_count=1
00:28:37.734 01:49:50 -- host/discovery.sh@75 -- # notify_id=2
00:28:37.734 01:49:50 -- host/discovery.sh@112 -- # [[ 1 == 1 ]]
00:28:37.734 01:49:50 -- host/discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421
00:28:37.734 01:49:50 -- common/autotest_common.sh@551 -- # xtrace_disable
00:28:37.734 01:49:50 -- common/autotest_common.sh@10 -- # set +x
00:28:37.734 [2024-07-23 01:49:50.502805] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:28:37.734 [2024-07-23 01:49:50.503687] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer
00:28:37.734 [2024-07-23 01:49:50.503725] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:28:37.734 01:49:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:28:37.734 01:49:50 -- host/discovery.sh@117 -- # sleep 1
00:28:37.734 [2024-07-23 01:49:50.630126] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0
00:28:37.734 [2024-07-23 01:49:50.692763] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
00:28:37.734 [2024-07-23 01:49:50.692786] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again
00:28:37.734 [2024-07-23 01:49:50.692796] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again
00:28:38.707 01:49:51 -- host/discovery.sh@118 -- # get_subsystem_names
00:28:38.707 01:49:51 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:28:38.707 01:49:51 -- common/autotest_common.sh@551 -- # xtrace_disable
00:28:38.707 01:49:51 -- host/discovery.sh@59 -- # jq -r '.[].name'
00:28:38.707 01:49:51 -- common/autotest_common.sh@10 -- # set +x
00:28:38.707 01:49:51 -- host/discovery.sh@59 -- # sort
00:28:38.707 01:49:51 -- host/discovery.sh@59 -- # xargs
00:28:38.707 01:49:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:28:38.707 01:49:51 -- host/discovery.sh@118 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:38.707 01:49:51 -- host/discovery.sh@119 -- # get_bdev_list
00:28:38.707 01:49:51 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:28:38.707 01:49:51 -- host/discovery.sh@55 -- # jq -r '.[].name'
00:28:38.707 01:49:51 -- common/autotest_common.sh@551 -- # xtrace_disable
00:28:38.707 01:49:51 -- common/autotest_common.sh@10 -- # set +x
00:28:38.707 01:49:51 -- host/discovery.sh@55 -- # sort
00:28:38.707 01:49:51 -- host/discovery.sh@55 -- # xargs
00:28:38.707 01:49:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:28:38.707 01:49:51 -- host/discovery.sh@119 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:28:38.707 01:49:51 -- host/discovery.sh@120 -- # get_subsystem_paths nvme0
00:28:38.707 01:49:51 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:28:38.707 01:49:51 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:28:38.707 01:49:51 -- common/autotest_common.sh@551 -- # xtrace_disable
00:28:38.707 01:49:51 -- common/autotest_common.sh@10 -- # set +x
00:28:38.707 01:49:51 -- host/discovery.sh@63 -- # sort -n
00:28:38.707 01:49:51 -- host/discovery.sh@63 -- # xargs
00:28:38.707 01:49:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:28:38.707 01:49:51 -- host/discovery.sh@120 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]]
00:28:38.707 01:49:51 -- host/discovery.sh@121 -- # get_notification_count
00:28:38.707 01:49:51 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2
00:28:38.707 01:49:51 -- common/autotest_common.sh@551 -- # xtrace_disable
00:28:38.707 01:49:51 -- host/discovery.sh@74 -- # jq '. | length'
00:28:38.707 01:49:51 -- common/autotest_common.sh@10 -- # set +x
00:28:38.707 01:49:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:28:38.707 01:49:51 -- host/discovery.sh@74 -- # notification_count=0
00:28:38.707 01:49:51 -- host/discovery.sh@75 -- # notify_id=2
00:28:38.707 01:49:51 -- host/discovery.sh@122 -- # [[ 0 == 0 ]]
00:28:38.707 01:49:51 -- host/discovery.sh@126 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:28:38.707 01:49:51 -- common/autotest_common.sh@551 -- # xtrace_disable
00:28:38.707 01:49:51 -- common/autotest_common.sh@10 -- # set +x
00:28:38.707 [2024-07-23 01:49:51.682927] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer
00:28:38.707 [2024-07-23 01:49:51.682956] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:28:38.707 01:49:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:28:38.707 01:49:51 -- host/discovery.sh@127 -- # sleep 1
00:28:38.707 [2024-07-23 01:49:51.691801] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:28:38.707 [2024-07-23 01:49:51.691834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.707 [2024-07-23 01:49:51.691852] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:28:38.707 [2024-07-23 01:49:51.691867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.707 [2024-07-23 01:49:51.691882] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:28:38.707 [2024-07-23 01:49:51.691895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.707 [2024-07-23 01:49:51.691913] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:28:38.707 [2024-07-23 01:49:51.691927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:38.707 [2024-07-23 01:49:51.691959] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95fb60 is same with the state(5) to be set
00:28:38.707 [2024-07-23 01:49:51.701793] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95fb60 (9): Bad file descriptor
00:28:38.707 [2024-07-23 01:49:51.711841] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:28:38.707 [2024-07-23 01:49:51.712095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.707 [2024-07-23 01:49:51.712318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.707 [2024-07-23 01:49:51.712348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95fb60 with addr=10.0.0.2, port=4420
00:28:38.707 [2024-07-23 01:49:51.712366] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95fb60 is same with the state(5) to be set
00:28:38.707 [2024-07-23 01:49:51.712392] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95fb60 (9): Bad file descriptor
00:28:38.707 [2024-07-23 01:49:51.712430] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:28:38.707 [2024-07-23 01:49:51.712451] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
00:28:38.707 [2024-07-23 01:49:51.712468] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:28:38.707 [2024-07-23 01:49:51.712492] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:38.707 [2024-07-23 01:49:51.721917] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:28:38.707 [2024-07-23 01:49:51.722163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.707 [2024-07-23 01:49:51.722352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.707 [2024-07-23 01:49:51.722383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95fb60 with addr=10.0.0.2, port=4420
00:28:38.707 [2024-07-23 01:49:51.722401] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95fb60 is same with the state(5) to be set
00:28:38.708 [2024-07-23 01:49:51.722426] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95fb60 (9): Bad file descriptor
00:28:38.708 [2024-07-23 01:49:51.722475] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:28:38.708 [2024-07-23 01:49:51.722497] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
00:28:38.708 [2024-07-23 01:49:51.722512] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:28:38.708 [2024-07-23 01:49:51.722534] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:38.708 [2024-07-23 01:49:51.732008] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:28:38.708 [2024-07-23 01:49:51.732221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.708 [2024-07-23 01:49:51.732404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.708 [2024-07-23 01:49:51.732434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95fb60 with addr=10.0.0.2, port=4420
00:28:38.708 [2024-07-23 01:49:51.732453] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95fb60 is same with the state(5) to be set
00:28:38.708 [2024-07-23 01:49:51.732479] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95fb60 (9): Bad file descriptor
00:28:38.708 [2024-07-23 01:49:51.732517] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:28:38.708 [2024-07-23 01:49:51.732538] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
00:28:38.708 [2024-07-23 01:49:51.732553] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:28:38.708 [2024-07-23 01:49:51.732575] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:38.708 [2024-07-23 01:49:51.742091] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:28:38.708 [2024-07-23 01:49:51.742326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.708 [2024-07-23 01:49:51.742541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.708 [2024-07-23 01:49:51.742571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95fb60 with addr=10.0.0.2, port=4420
00:28:38.708 [2024-07-23 01:49:51.742589] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95fb60 is same with the state(5) to be set
00:28:38.708 [2024-07-23 01:49:51.742686] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95fb60 (9): Bad file descriptor
00:28:38.708 [2024-07-23 01:49:51.742740] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:28:38.708 [2024-07-23 01:49:51.742760] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
00:28:38.708 [2024-07-23 01:49:51.742774] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:28:38.708 [2024-07-23 01:49:51.742795] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:38.708 [2024-07-23 01:49:51.752173] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:28:38.708 [2024-07-23 01:49:51.752386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.708 [2024-07-23 01:49:51.752578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.708 [2024-07-23 01:49:51.752609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95fb60 with addr=10.0.0.2, port=4420
00:28:38.708 [2024-07-23 01:49:51.752639] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95fb60 is same with the state(5) to be set
00:28:38.708 [2024-07-23 01:49:51.752679] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95fb60 (9): Bad file descriptor
00:28:38.708 [2024-07-23 01:49:51.752713] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:28:38.708 [2024-07-23 01:49:51.752733] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
00:28:38.708 [2024-07-23 01:49:51.752748] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:28:38.708 [2024-07-23 01:49:51.752782] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:38.708 [2024-07-23 01:49:51.762252] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:28:38.708 [2024-07-23 01:49:51.762459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.708 [2024-07-23 01:49:51.762651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:38.708 [2024-07-23 01:49:51.762697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95fb60 with addr=10.0.0.2, port=4420
00:28:38.708 [2024-07-23 01:49:51.762714] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x95fb60 is same with the state(5) to be set
00:28:38.708 [2024-07-23 01:49:51.762738] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95fb60 (9): Bad file descriptor
00:28:38.708 [2024-07-23 01:49:51.762781] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:28:38.708 [2024-07-23 01:49:51.762801] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed
00:28:38.708 [2024-07-23 01:49:51.762816] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:28:38.708 [2024-07-23 01:49:51.762836] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:38.708 [2024-07-23 01:49:51.770796] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found
00:28:38.708 [2024-07-23 01:49:51.770826] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again
00:28:39.644 01:49:52 -- host/discovery.sh@128 -- # get_subsystem_names
00:28:39.644 01:49:52 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:28:39.644 01:49:52 -- common/autotest_common.sh@551 -- # xtrace_disable
00:28:39.644 01:49:52 -- common/autotest_common.sh@10 -- # set +x
00:28:39.644 01:49:52 -- host/discovery.sh@59 -- # jq -r '.[].name'
00:28:39.644 01:49:52 -- host/discovery.sh@59 -- # sort
00:28:39.644 01:49:52 -- host/discovery.sh@59 -- # xargs
00:28:39.644 01:49:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:28:39.644 01:49:52 -- host/discovery.sh@128 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:39.644 01:49:52 -- host/discovery.sh@129 -- # get_bdev_list
00:28:39.644 01:49:52 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:28:39.644 01:49:52 -- common/autotest_common.sh@551 -- # xtrace_disable
00:28:39.644 01:49:52 -- host/discovery.sh@55 -- # jq -r '.[].name'
00:28:39.644 01:49:52 -- common/autotest_common.sh@10 -- # set +x
00:28:39.644 01:49:52 -- host/discovery.sh@55 -- # sort
00:28:39.644 01:49:52 -- host/discovery.sh@55 -- # xargs
00:28:39.903 01:49:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:28:39.903 01:49:52 -- host/discovery.sh@129 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:28:39.903 01:49:52 -- host/discovery.sh@130 -- # get_subsystem_paths nvme0
00:28:39.903 01:49:52 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:28:39.903 01:49:52 -- common/autotest_common.sh@551 -- # xtrace_disable
00:28:39.904 01:49:52 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:28:39.904 01:49:52 -- common/autotest_common.sh@10 -- # set +x
00:28:39.904 01:49:52 -- host/discovery.sh@63 -- # sort -n
00:28:39.904 01:49:52 -- host/discovery.sh@63 -- # xargs
00:28:39.904 01:49:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:28:39.904 01:49:52 -- host/discovery.sh@130 -- # [[ 4421 == \4\4\2\1 ]]
00:28:39.904 01:49:52 -- host/discovery.sh@131 -- # get_notification_count
00:28:39.904 01:49:52 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2
00:28:39.904 01:49:52 -- host/discovery.sh@74 -- # jq '. | length'
00:28:39.904 01:49:52 -- common/autotest_common.sh@551 -- # xtrace_disable
00:28:39.904 01:49:52 -- common/autotest_common.sh@10 -- # set +x
00:28:39.904 01:49:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:28:39.904 01:49:52 -- host/discovery.sh@74 -- # notification_count=0
00:28:39.904 01:49:52 -- host/discovery.sh@75 -- # notify_id=2
00:28:39.904 01:49:52 -- host/discovery.sh@132 -- # [[ 0 == 0 ]]
00:28:39.904 01:49:52 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme
00:28:39.904 01:49:52 -- common/autotest_common.sh@551 -- # xtrace_disable
00:28:39.904 01:49:52 -- common/autotest_common.sh@10 -- # set +x
00:28:39.904 01:49:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:28:39.904 01:49:52 -- host/discovery.sh@135 -- # sleep 1
00:28:40.839 01:49:53 -- host/discovery.sh@136 -- # get_subsystem_names
00:28:40.839 01:49:53 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:28:40.839 01:49:53 -- common/autotest_common.sh@551 -- # xtrace_disable
00:28:40.839 01:49:53 -- host/discovery.sh@59 -- # jq -r '.[].name'
00:28:40.839 01:49:53 -- common/autotest_common.sh@10 -- # set +x
00:28:40.839 01:49:53 -- host/discovery.sh@59 -- # sort
00:28:40.839 01:49:53 -- host/discovery.sh@59 -- # xargs
00:28:40.839 01:49:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:28:40.839 01:49:53 -- host/discovery.sh@136 -- # [[ '' == '' ]]
00:28:40.839 01:49:53 -- host/discovery.sh@137 -- # get_bdev_list
00:28:40.839 01:49:53 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:28:40.839 01:49:53 -- common/autotest_common.sh@551 -- # xtrace_disable
00:28:40.839 01:49:53 -- host/discovery.sh@55 -- # jq -r '.[].name'
00:28:40.839 01:49:53 -- common/autotest_common.sh@10 -- # set +x
00:28:40.839 01:49:53 -- host/discovery.sh@55 -- # sort
00:28:40.839 01:49:53 -- host/discovery.sh@55 -- # xargs
00:28:40.839 01:49:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:28:41.097 01:49:53 -- host/discovery.sh@137 -- # [[ '' == '' ]]
00:28:41.097 01:49:53 -- host/discovery.sh@138 -- # get_notification_count
00:28:41.097 01:49:53 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2
00:28:41.097 01:49:53 -- host/discovery.sh@74 -- # jq '. | length'
00:28:41.098 01:49:53 -- common/autotest_common.sh@551 -- # xtrace_disable
00:28:41.098 01:49:53 -- common/autotest_common.sh@10 -- # set +x
00:28:41.098 01:49:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:28:41.098 01:49:53 -- host/discovery.sh@74 -- # notification_count=2
00:28:41.098 01:49:53 -- host/discovery.sh@75 -- # notify_id=4
00:28:41.098 01:49:53 -- host/discovery.sh@139 -- # [[ 2 == 2 ]]
00:28:41.098 01:49:53 -- host/discovery.sh@142 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:28:41.098 01:49:53 -- common/autotest_common.sh@551 -- # xtrace_disable
00:28:41.098 01:49:53 -- common/autotest_common.sh@10 -- # set +x
00:28:42.036 [2024-07-23 01:49:55.003368] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached
00:28:42.036 [2024-07-23 01:49:55.003395] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected
00:28:42.036 [2024-07-23 01:49:55.003417] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:28:42.036 [2024-07-23 01:49:55.091714] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0
00:28:42.295 [2024-07-23 01:49:55.197058] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
00:28:42.295 [2024-07-23 01:49:55.197102] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again
00:28:42.295 01:49:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:28:42.295 01:49:55 -- host/discovery.sh@144 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:28:42.295 01:49:55 -- common/autotest_common.sh@640 -- # local es=0
00:28:42.295 01:49:55 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:28:42.295 01:49:55 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd
00:28:42.295 01:49:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in
00:28:42.295 01:49:55 -- common/autotest_common.sh@632 -- # type -t rpc_cmd
00:28:42.295 01:49:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in
00:28:42.295 01:49:55 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:28:42.295 01:49:55 -- common/autotest_common.sh@551 -- # xtrace_disable
00:28:42.295 01:49:55 -- common/autotest_common.sh@10 -- # set +x
00:28:42.295 request:
00:28:42.295 {
00:28:42.295 "name": "nvme",
00:28:42.295 "trtype": "tcp",
00:28:42.295 "traddr": "10.0.0.2",
00:28:42.295 "hostnqn": "nqn.2021-12.io.spdk:test",
00:28:42.295 "adrfam":
"ipv4", 00:28:42.295 "trsvcid": "8009", 00:28:42.295 "wait_for_attach": true, 00:28:42.295 "method": "bdev_nvme_start_discovery", 00:28:42.295 "req_id": 1 00:28:42.295 } 00:28:42.295 Got JSON-RPC error response 00:28:42.295 response: 00:28:42.295 { 00:28:42.295 "code": -17, 00:28:42.295 "message": "File exists" 00:28:42.295 } 00:28:42.295 01:49:55 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:28:42.295 01:49:55 -- common/autotest_common.sh@643 -- # es=1 00:28:42.295 01:49:55 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:28:42.295 01:49:55 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:28:42.295 01:49:55 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:28:42.295 01:49:55 -- host/discovery.sh@146 -- # get_discovery_ctrlrs 00:28:42.295 01:49:55 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:28:42.295 01:49:55 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:28:42.295 01:49:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:42.295 01:49:55 -- host/discovery.sh@67 -- # sort 00:28:42.295 01:49:55 -- common/autotest_common.sh@10 -- # set +x 00:28:42.295 01:49:55 -- host/discovery.sh@67 -- # xargs 00:28:42.295 01:49:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:42.295 01:49:55 -- host/discovery.sh@146 -- # [[ nvme == \n\v\m\e ]] 00:28:42.295 01:49:55 -- host/discovery.sh@147 -- # get_bdev_list 00:28:42.295 01:49:55 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:42.295 01:49:55 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:42.295 01:49:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:42.295 01:49:55 -- common/autotest_common.sh@10 -- # set +x 00:28:42.295 01:49:55 -- host/discovery.sh@55 -- # sort 00:28:42.295 01:49:55 -- host/discovery.sh@55 -- # xargs 00:28:42.295 01:49:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:42.295 01:49:55 -- host/discovery.sh@147 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 
00:28:42.295 01:49:55 -- host/discovery.sh@150 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:42.295 01:49:55 -- common/autotest_common.sh@640 -- # local es=0 00:28:42.295 01:49:55 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:42.295 01:49:55 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:28:42.295 01:49:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:42.295 01:49:55 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:28:42.295 01:49:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:42.295 01:49:55 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:42.295 01:49:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:42.295 01:49:55 -- common/autotest_common.sh@10 -- # set +x 00:28:42.295 request: 00:28:42.295 { 00:28:42.295 "name": "nvme_second", 00:28:42.295 "trtype": "tcp", 00:28:42.295 "traddr": "10.0.0.2", 00:28:42.295 "hostnqn": "nqn.2021-12.io.spdk:test", 00:28:42.295 "adrfam": "ipv4", 00:28:42.295 "trsvcid": "8009", 00:28:42.295 "wait_for_attach": true, 00:28:42.295 "method": "bdev_nvme_start_discovery", 00:28:42.295 "req_id": 1 00:28:42.295 } 00:28:42.295 Got JSON-RPC error response 00:28:42.295 response: 00:28:42.295 { 00:28:42.295 "code": -17, 00:28:42.295 "message": "File exists" 00:28:42.295 } 00:28:42.295 01:49:55 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:28:42.295 01:49:55 -- common/autotest_common.sh@643 -- # es=1 00:28:42.295 01:49:55 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:28:42.295 01:49:55 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:28:42.295 01:49:55 -- 
common/autotest_common.sh@667 -- # (( !es == 0 )) 00:28:42.295 01:49:55 -- host/discovery.sh@152 -- # get_discovery_ctrlrs 00:28:42.295 01:49:55 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:28:42.295 01:49:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:42.295 01:49:55 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:28:42.295 01:49:55 -- common/autotest_common.sh@10 -- # set +x 00:28:42.295 01:49:55 -- host/discovery.sh@67 -- # sort 00:28:42.295 01:49:55 -- host/discovery.sh@67 -- # xargs 00:28:42.295 01:49:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:42.295 01:49:55 -- host/discovery.sh@152 -- # [[ nvme == \n\v\m\e ]] 00:28:42.295 01:49:55 -- host/discovery.sh@153 -- # get_bdev_list 00:28:42.295 01:49:55 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:42.295 01:49:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:42.295 01:49:55 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:42.295 01:49:55 -- common/autotest_common.sh@10 -- # set +x 00:28:42.295 01:49:55 -- host/discovery.sh@55 -- # sort 00:28:42.295 01:49:55 -- host/discovery.sh@55 -- # xargs 00:28:42.295 01:49:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:42.555 01:49:55 -- host/discovery.sh@153 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:42.555 01:49:55 -- host/discovery.sh@156 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:28:42.555 01:49:55 -- common/autotest_common.sh@640 -- # local es=0 00:28:42.555 01:49:55 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:28:42.555 01:49:55 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:28:42.555 01:49:55 -- common/autotest_common.sh@632 -- # case "$(type -t 
"$arg")" in 00:28:42.555 01:49:55 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:28:42.555 01:49:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:42.555 01:49:55 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:28:42.555 01:49:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:42.555 01:49:55 -- common/autotest_common.sh@10 -- # set +x 00:28:43.495 [2024-07-23 01:49:56.412556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.495 [2024-07-23 01:49:56.412812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:43.495 [2024-07-23 01:49:56.412843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95c0e0 with addr=10.0.0.2, port=8010 00:28:43.495 [2024-07-23 01:49:56.412887] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:28:43.495 [2024-07-23 01:49:56.412904] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:43.495 [2024-07-23 01:49:56.412919] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:28:44.433 [2024-07-23 01:49:57.414972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.433 [2024-07-23 01:49:57.415226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:44.433 [2024-07-23 01:49:57.415258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x95c0e0 with addr=10.0.0.2, port=8010 00:28:44.433 [2024-07-23 01:49:57.415288] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:28:44.433 [2024-07-23 01:49:57.415305] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:44.433 [2024-07-23 01:49:57.415320] bdev_nvme.c:6821:discovery_poller: *ERROR*: 
Discovery[10.0.0.2:8010] could not start discovery connect 00:28:45.368 [2024-07-23 01:49:58.417163] bdev_nvme.c:6802:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:28:45.368 request: 00:28:45.368 { 00:28:45.368 "name": "nvme_second", 00:28:45.368 "trtype": "tcp", 00:28:45.368 "traddr": "10.0.0.2", 00:28:45.368 "hostnqn": "nqn.2021-12.io.spdk:test", 00:28:45.368 "adrfam": "ipv4", 00:28:45.368 "trsvcid": "8010", 00:28:45.368 "attach_timeout_ms": 3000, 00:28:45.368 "method": "bdev_nvme_start_discovery", 00:28:45.368 "req_id": 1 00:28:45.368 } 00:28:45.368 Got JSON-RPC error response 00:28:45.368 response: 00:28:45.368 { 00:28:45.368 "code": -110, 00:28:45.368 "message": "Connection timed out" 00:28:45.368 } 00:28:45.368 01:49:58 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:28:45.368 01:49:58 -- common/autotest_common.sh@643 -- # es=1 00:28:45.368 01:49:58 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:28:45.368 01:49:58 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:28:45.368 01:49:58 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:28:45.368 01:49:58 -- host/discovery.sh@158 -- # get_discovery_ctrlrs 00:28:45.368 01:49:58 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:28:45.368 01:49:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:45.368 01:49:58 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:28:45.368 01:49:58 -- common/autotest_common.sh@10 -- # set +x 00:28:45.368 01:49:58 -- host/discovery.sh@67 -- # sort 00:28:45.368 01:49:58 -- host/discovery.sh@67 -- # xargs 00:28:45.368 01:49:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:45.368 01:49:58 -- host/discovery.sh@158 -- # [[ nvme == \n\v\m\e ]] 00:28:45.368 01:49:58 -- host/discovery.sh@160 -- # trap - SIGINT SIGTERM EXIT 00:28:45.368 01:49:58 -- host/discovery.sh@162 -- # kill 3889771 00:28:45.368 01:49:58 -- host/discovery.sh@163 -- # nvmftestfini 00:28:45.368 
01:49:58 -- nvmf/common.sh@476 -- # nvmfcleanup 00:28:45.368 01:49:58 -- nvmf/common.sh@116 -- # sync 00:28:45.368 01:49:58 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:28:45.368 01:49:58 -- nvmf/common.sh@119 -- # set +e 00:28:45.368 01:49:58 -- nvmf/common.sh@120 -- # for i in {1..20} 00:28:45.368 01:49:58 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:28:45.629 rmmod nvme_tcp 00:28:45.629 rmmod nvme_fabrics 00:28:45.629 rmmod nvme_keyring 00:28:45.629 01:49:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:28:45.629 01:49:58 -- nvmf/common.sh@123 -- # set -e 00:28:45.629 01:49:58 -- nvmf/common.sh@124 -- # return 0 00:28:45.629 01:49:58 -- nvmf/common.sh@477 -- # '[' -n 3889615 ']' 00:28:45.629 01:49:58 -- nvmf/common.sh@478 -- # killprocess 3889615 00:28:45.629 01:49:58 -- common/autotest_common.sh@926 -- # '[' -z 3889615 ']' 00:28:45.629 01:49:58 -- common/autotest_common.sh@930 -- # kill -0 3889615 00:28:45.629 01:49:58 -- common/autotest_common.sh@931 -- # uname 00:28:45.629 01:49:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:45.629 01:49:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3889615 00:28:45.629 01:49:58 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:28:45.629 01:49:58 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:28:45.629 01:49:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3889615' 00:28:45.629 killing process with pid 3889615 00:28:45.629 01:49:58 -- common/autotest_common.sh@945 -- # kill 3889615 00:28:45.629 01:49:58 -- common/autotest_common.sh@950 -- # wait 3889615 00:28:45.889 01:49:58 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:28:45.889 01:49:58 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:28:45.889 01:49:58 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:28:45.889 01:49:58 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:45.889 01:49:58 -- nvmf/common.sh@277 -- # 
remove_spdk_ns 00:28:45.889 01:49:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:45.889 01:49:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:45.889 01:49:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:47.798 01:50:00 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:28:47.798 00:28:47.798 real 0m17.085s 00:28:47.798 user 0m26.572s 00:28:47.798 sys 0m2.831s 00:28:47.798 01:50:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:47.798 01:50:00 -- common/autotest_common.sh@10 -- # set +x 00:28:47.798 ************************************ 00:28:47.798 END TEST nvmf_discovery 00:28:47.798 ************************************ 00:28:47.798 01:50:00 -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:28:47.798 01:50:00 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:28:47.798 01:50:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:47.798 01:50:00 -- common/autotest_common.sh@10 -- # set +x 00:28:47.798 ************************************ 00:28:47.798 START TEST nvmf_discovery_remove_ifc 00:28:47.798 ************************************ 00:28:47.798 01:50:00 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:28:48.058 * Looking for test storage... 
00:28:48.058 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:48.058 01:50:00 -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:48.058 01:50:00 -- nvmf/common.sh@7 -- # uname -s 00:28:48.058 01:50:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:48.058 01:50:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:48.058 01:50:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:48.058 01:50:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:48.058 01:50:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:48.058 01:50:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:48.058 01:50:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:48.058 01:50:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:48.058 01:50:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:48.058 01:50:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:48.058 01:50:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:48.058 01:50:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:48.058 01:50:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:48.058 01:50:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:48.058 01:50:00 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:48.058 01:50:00 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:48.058 01:50:00 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:48.058 01:50:00 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:48.058 01:50:00 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:48.058 01:50:00 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:48.058 01:50:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:48.058 01:50:00 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:48.058 01:50:00 -- paths/export.sh@5 -- # export PATH 00:28:48.058 01:50:00 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:48.058 01:50:00 -- nvmf/common.sh@46 -- # : 0 00:28:48.058 01:50:00 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:48.058 01:50:00 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:48.058 01:50:00 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:48.058 01:50:00 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:48.058 01:50:00 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:48.058 01:50:00 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:48.058 01:50:00 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:48.058 01:50:00 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:48.058 01:50:00 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:28:48.058 01:50:00 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:28:48.058 01:50:00 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:28:48.058 01:50:00 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:28:48.058 01:50:00 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:28:48.058 01:50:00 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:28:48.058 01:50:00 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:28:48.058 01:50:00 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:28:48.058 01:50:00 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:48.058 01:50:00 -- nvmf/common.sh@436 -- # prepare_net_devs 
00:28:48.058 01:50:00 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:28:48.058 01:50:00 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:28:48.058 01:50:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:48.058 01:50:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:48.058 01:50:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:48.058 01:50:00 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:28:48.058 01:50:00 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:28:48.058 01:50:00 -- nvmf/common.sh@284 -- # xtrace_disable 00:28:48.058 01:50:00 -- common/autotest_common.sh@10 -- # set +x 00:28:49.961 01:50:02 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:49.961 01:50:02 -- nvmf/common.sh@290 -- # pci_devs=() 00:28:49.961 01:50:02 -- nvmf/common.sh@290 -- # local -a pci_devs 00:28:49.961 01:50:02 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:28:49.961 01:50:02 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:28:49.961 01:50:02 -- nvmf/common.sh@292 -- # pci_drivers=() 00:28:49.961 01:50:02 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:28:49.961 01:50:02 -- nvmf/common.sh@294 -- # net_devs=() 00:28:49.961 01:50:02 -- nvmf/common.sh@294 -- # local -ga net_devs 00:28:49.961 01:50:02 -- nvmf/common.sh@295 -- # e810=() 00:28:49.961 01:50:02 -- nvmf/common.sh@295 -- # local -ga e810 00:28:49.961 01:50:02 -- nvmf/common.sh@296 -- # x722=() 00:28:49.961 01:50:02 -- nvmf/common.sh@296 -- # local -ga x722 00:28:49.961 01:50:02 -- nvmf/common.sh@297 -- # mlx=() 00:28:49.961 01:50:02 -- nvmf/common.sh@297 -- # local -ga mlx 00:28:49.961 01:50:02 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:49.961 01:50:02 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:49.961 01:50:02 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:49.961 01:50:02 -- nvmf/common.sh@305 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:49.961 01:50:02 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:49.961 01:50:02 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:49.961 01:50:02 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:49.961 01:50:02 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:49.961 01:50:02 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:49.961 01:50:02 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:49.961 01:50:02 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:49.961 01:50:02 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:28:49.961 01:50:02 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:28:49.961 01:50:02 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:28:49.961 01:50:02 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:28:49.961 01:50:02 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:28:49.961 01:50:02 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:28:49.961 01:50:02 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:49.961 01:50:02 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:49.961 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:49.961 01:50:02 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:49.961 01:50:02 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:49.961 01:50:02 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:49.961 01:50:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:49.961 01:50:02 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:49.961 01:50:02 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:49.961 01:50:02 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:49.961 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:49.961 01:50:02 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:49.961 
01:50:02 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:49.961 01:50:02 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:49.961 01:50:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:49.961 01:50:02 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:49.961 01:50:02 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:28:49.961 01:50:02 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:28:49.961 01:50:02 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:28:49.961 01:50:02 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:49.961 01:50:02 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:49.961 01:50:02 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:49.961 01:50:02 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:49.961 01:50:02 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:49.961 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:49.961 01:50:02 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:49.961 01:50:02 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:49.961 01:50:02 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:49.961 01:50:02 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:49.961 01:50:02 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:49.961 01:50:02 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:49.961 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:49.961 01:50:02 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:49.961 01:50:02 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:28:49.961 01:50:02 -- nvmf/common.sh@402 -- # is_hw=yes 00:28:49.961 01:50:02 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:28:49.961 01:50:02 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:28:49.961 01:50:02 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:28:49.961 01:50:02 -- nvmf/common.sh@228 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:28:49.961 01:50:02 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:49.961 01:50:02 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:49.961 01:50:02 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:28:49.961 01:50:02 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:49.961 01:50:02 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:49.961 01:50:02 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:28:49.961 01:50:02 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:49.961 01:50:02 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:49.961 01:50:02 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:28:49.961 01:50:02 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:28:49.961 01:50:02 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:28:49.961 01:50:02 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:49.961 01:50:03 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:49.961 01:50:03 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:49.961 01:50:03 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:28:49.961 01:50:03 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:50.220 01:50:03 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:50.220 01:50:03 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:50.220 01:50:03 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:28:50.220 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:50.220 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms 00:28:50.220 00:28:50.220 --- 10.0.0.2 ping statistics --- 00:28:50.220 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:50.220 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:28:50.220 01:50:03 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:50.220 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:50.220 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:28:50.220 00:28:50.220 --- 10.0.0.1 ping statistics --- 00:28:50.220 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:50.220 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:28:50.220 01:50:03 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:50.220 01:50:03 -- nvmf/common.sh@410 -- # return 0 00:28:50.220 01:50:03 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:28:50.220 01:50:03 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:50.220 01:50:03 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:28:50.220 01:50:03 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:28:50.220 01:50:03 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:50.220 01:50:03 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:28:50.220 01:50:03 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:28:50.220 01:50:03 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:28:50.220 01:50:03 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:50.220 01:50:03 -- common/autotest_common.sh@712 -- # xtrace_disable 00:28:50.220 01:50:03 -- common/autotest_common.sh@10 -- # set +x 00:28:50.220 01:50:03 -- nvmf/common.sh@469 -- # nvmfpid=3893254 00:28:50.220 01:50:03 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:28:50.220 01:50:03 -- nvmf/common.sh@470 -- # waitforlisten 3893254 00:28:50.220 01:50:03 -- 
common/autotest_common.sh@819 -- # '[' -z 3893254 ']' 00:28:50.220 01:50:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:50.220 01:50:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:50.220 01:50:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:50.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:50.220 01:50:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:50.220 01:50:03 -- common/autotest_common.sh@10 -- # set +x 00:28:50.220 [2024-07-23 01:50:03.180223] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:28:50.220 [2024-07-23 01:50:03.180302] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:50.220 EAL: No free 2048 kB hugepages reported on node 1 00:28:50.220 [2024-07-23 01:50:03.245128] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:50.478 [2024-07-23 01:50:03.328761] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:50.478 [2024-07-23 01:50:03.328915] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:50.478 [2024-07-23 01:50:03.328934] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:50.478 [2024-07-23 01:50:03.328947] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
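Editor's note: the target-side setup traced above (NICs moved into the `cvl_0_0_ns_spdk` namespace, addresses assigned, the TCP transport selected at nvmf/common.sh@442-456, then `nvmf_tgt` launched inside the namespace) condenses to very little shell. The `ip`/`iptables` steps need root and real interfaces, so this sketch only reproduces the runnable transport-option selection; the rdma branch's extra flag is an assumption about the full helper, not taken from this log.

```shell
# Sketch of the NVMF_TRANSPORT_OPTS selection seen in the log above.
# Start from '-t <transport>' and append transport-specific flags.
TRANSPORT=tcp

NVMF_TRANSPORT_OPTS="-t $TRANSPORT"
if [ "$TRANSPORT" = rdma ]; then
  # Assumed flag for the rdma path; not exercised in this tcp run.
  NVMF_TRANSPORT_OPTS="$NVMF_TRANSPORT_OPTS --num-shared-buffers 1024"
elif [ "$TRANSPORT" = tcp ]; then
  # The tcp path in this log ends up with '-t tcp -o' (common.sh@453).
  NVMF_TRANSPORT_OPTS="$NVMF_TRANSPORT_OPTS -o"
fi

echo "$NVMF_TRANSPORT_OPTS"
```

The resulting string is what the harness later passes to the transport-creation RPC.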
00:28:50.478 [2024-07-23 01:50:03.328975] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:51.044 01:50:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:51.044 01:50:04 -- common/autotest_common.sh@852 -- # return 0 00:28:51.044 01:50:04 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:51.044 01:50:04 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:51.044 01:50:04 -- common/autotest_common.sh@10 -- # set +x 00:28:51.304 01:50:04 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:51.304 01:50:04 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:28:51.304 01:50:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:51.304 01:50:04 -- common/autotest_common.sh@10 -- # set +x 00:28:51.304 [2024-07-23 01:50:04.162997] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:51.304 [2024-07-23 01:50:04.171173] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:28:51.304 null0 00:28:51.304 [2024-07-23 01:50:04.203121] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:51.304 01:50:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:51.304 01:50:04 -- host/discovery_remove_ifc.sh@59 -- # hostpid=3893411 00:28:51.304 01:50:04 -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:28:51.304 01:50:04 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3893411 /tmp/host.sock 00:28:51.304 01:50:04 -- common/autotest_common.sh@819 -- # '[' -z 3893411 ']' 00:28:51.304 01:50:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:28:51.304 01:50:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:51.304 01:50:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /tmp/host.sock...' 00:28:51.304 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:28:51.304 01:50:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:51.304 01:50:04 -- common/autotest_common.sh@10 -- # set +x 00:28:51.304 [2024-07-23 01:50:04.270067] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:28:51.304 [2024-07-23 01:50:04.270145] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3893411 ] 00:28:51.304 EAL: No free 2048 kB hugepages reported on node 1 00:28:51.304 [2024-07-23 01:50:04.335676] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:51.564 [2024-07-23 01:50:04.423501] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:51.564 [2024-07-23 01:50:04.423696] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:51.564 01:50:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:51.564 01:50:04 -- common/autotest_common.sh@852 -- # return 0 00:28:51.564 01:50:04 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:51.564 01:50:04 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:28:51.564 01:50:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:51.564 01:50:04 -- common/autotest_common.sh@10 -- # set +x 00:28:51.564 01:50:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:51.564 01:50:04 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:28:51.564 01:50:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:51.564 01:50:04 -- common/autotest_common.sh@10 -- # set +x 00:28:51.564 01:50:04 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:51.564 01:50:04 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:28:51.564 01:50:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:51.564 01:50:04 -- common/autotest_common.sh@10 -- # set +x 00:28:52.939 [2024-07-23 01:50:05.631815] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:52.939 [2024-07-23 01:50:05.631848] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:52.939 [2024-07-23 01:50:05.631872] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:52.939 [2024-07-23 01:50:05.718180] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:28:52.939 [2024-07-23 01:50:05.903317] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:28:52.939 [2024-07-23 01:50:05.903376] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:28:52.939 [2024-07-23 01:50:05.903417] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:28:52.939 [2024-07-23 01:50:05.903448] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:28:52.939 [2024-07-23 01:50:05.903475] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:28:52.939 01:50:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:52.939 01:50:05 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:28:52.939 01:50:05 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:52.939 01:50:05 -- 
host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:52.939 01:50:05 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:52.939 01:50:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:52.939 01:50:05 -- common/autotest_common.sh@10 -- # set +x 00:28:52.939 01:50:05 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:52.939 01:50:05 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:52.939 01:50:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:52.939 01:50:05 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:28:52.939 01:50:05 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:28:52.939 01:50:05 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:28:52.939 01:50:05 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:28:52.939 01:50:05 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:52.939 01:50:05 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:52.939 01:50:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:52.939 01:50:05 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:52.939 01:50:05 -- common/autotest_common.sh@10 -- # set +x 00:28:52.939 01:50:05 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:52.939 01:50:05 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:52.939 01:50:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:52.939 01:50:06 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:52.939 01:50:06 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:54.317 01:50:07 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:54.317 01:50:07 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:54.317 01:50:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:54.317 01:50:07 -- host/discovery_remove_ifc.sh@29 -- # 
jq -r '.[].name' 00:28:54.317 01:50:07 -- common/autotest_common.sh@10 -- # set +x 00:28:54.317 01:50:07 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:54.317 01:50:07 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:54.317 01:50:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:54.317 01:50:07 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:54.317 01:50:07 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:55.251 01:50:08 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:55.251 01:50:08 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:55.251 01:50:08 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:55.251 01:50:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:55.251 01:50:08 -- common/autotest_common.sh@10 -- # set +x 00:28:55.251 01:50:08 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:55.251 01:50:08 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:55.251 01:50:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:55.251 01:50:08 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:55.251 01:50:08 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:56.187 01:50:09 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:56.187 01:50:09 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:56.187 01:50:09 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:56.187 01:50:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:56.187 01:50:09 -- common/autotest_common.sh@10 -- # set +x 00:28:56.187 01:50:09 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:56.187 01:50:09 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:56.187 01:50:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:56.187 01:50:09 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:56.187 01:50:09 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:57.124 01:50:10 -- 
host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:57.124 01:50:10 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:57.124 01:50:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:57.124 01:50:10 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:57.124 01:50:10 -- common/autotest_common.sh@10 -- # set +x 00:28:57.124 01:50:10 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:57.124 01:50:10 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:57.124 01:50:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:57.124 01:50:10 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:57.124 01:50:10 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:58.517 01:50:11 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:58.517 01:50:11 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:58.517 01:50:11 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:58.517 01:50:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:58.517 01:50:11 -- common/autotest_common.sh@10 -- # set +x 00:28:58.517 01:50:11 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:58.517 01:50:11 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:58.517 01:50:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:58.517 01:50:11 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:58.517 01:50:11 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:58.517 [2024-07-23 01:50:11.344872] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:28:58.517 [2024-07-23 01:50:11.344951] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:58.517 [2024-07-23 01:50:11.344972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.517 [2024-07-23 01:50:11.345003] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:58.517 [2024-07-23 01:50:11.345017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.517 [2024-07-23 01:50:11.345031] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:58.517 [2024-07-23 01:50:11.345044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.517 [2024-07-23 01:50:11.345058] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:58.517 [2024-07-23 01:50:11.345072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.517 [2024-07-23 01:50:11.345086] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:58.517 [2024-07-23 01:50:11.345099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.517 [2024-07-23 01:50:11.345112] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2111850 is same with the state(5) to be set 00:28:58.517 [2024-07-23 01:50:11.354887] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2111850 (9): Bad file descriptor 00:28:58.517 [2024-07-23 01:50:11.364983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:0 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.517 [2024-07-23 01:50:11.365023] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.517 [2024-07-23 01:50:11.365059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:0 len:64 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.517 [2024-07-23 01:50:11.365076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.518 [2024-07-23 01:50:11.365106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:0 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.518 [2024-07-23 01:50:11.365121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:58.518 [2024-07-23 01:50:11.365228] bdev_nvme.c:1582:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x214b3f0 was disconnected and freed in a reset ctrlr sequence. 00:28:58.518 [2024-07-23 01:50:11.365248] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:59.492 01:50:12 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:59.492 01:50:12 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:59.492 01:50:12 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:59.492 01:50:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:59.492 01:50:12 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:59.492 01:50:12 -- common/autotest_common.sh@10 -- # set +x 00:28:59.492 01:50:12 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:59.492 [2024-07-23 01:50:12.424649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:29:00.431 [2024-07-23 01:50:13.448705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:29:00.431 [2024-07-23 01:50:13.448799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error 
of tqpair=0x2111850 with addr=10.0.0.2, port=4420 00:29:00.431 [2024-07-23 01:50:13.448826] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2111850 is same with the state(5) to be set 00:29:00.431 [2024-07-23 01:50:13.449297] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:00.431 [2024-07-23 01:50:13.449321] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:00.431 [2024-07-23 01:50:13.449335] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:00.431 [2024-07-23 01:50:13.449349] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:29:00.431 [2024-07-23 01:50:13.449456] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2111850 (9): Bad file descriptor 00:29:00.431 [2024-07-23 01:50:13.449540] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
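Editor's note: the repeated `get_bdev_list` / `sleep 1` sequence above is the harness's `wait_for_bdev` pattern: poll the bdev list over the host RPC socket until the expected name appears (or, after the interface is torn down, until the list is empty). A self-contained sketch with a stubbed `rpc_cmd` follows; the real helper pipes `rpc_cmd -s /tmp/host.sock bdev_get_bdevs` through `jq -r '.[].name' | sort | xargs`, and the third-poll stub timing here is purely illustrative.

```shell
attempts=0

# Stub for the real RPC call: pretend the bdev shows up on the third poll.
# rpc_cmd sets $got directly instead of echoing, so the counter is not
# lost in a command-substitution subshell.
rpc_cmd() {
  attempts=$((attempts + 1))
  if [ "$attempts" -ge 3 ]; then
    got="nvme0n1"
  else
    got=""
  fi
}

# Poll until the bdev list matches the expected value.
wait_for_bdev() {
  local expected=$1
  while :; do
    rpc_cmd                      # real code: bdev_get_bdevs | jq -r '.[].name'
    [ "$got" = "$expected" ] && break
    sleep 0.1                    # the harness sleeps 1s between polls
  done
}

wait_for_bdev nvme0n1
echo "bdev $got appeared after $attempts polls"
```

Calling `wait_for_bdev ''` instead expresses the "wait until the bdev is gone" case seen after `ip link set cvl_0_0 down`.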
00:29:00.431 [2024-07-23 01:50:13.449587] bdev_nvme.c:6510:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:29:00.431 [2024-07-23 01:50:13.449660] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:00.431 [2024-07-23 01:50:13.449682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.431 [2024-07-23 01:50:13.449700] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:00.431 [2024-07-23 01:50:13.449714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.431 [2024-07-23 01:50:13.449729] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:00.431 [2024-07-23 01:50:13.449742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.431 [2024-07-23 01:50:13.449756] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:00.431 [2024-07-23 01:50:13.449770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.432 [2024-07-23 01:50:13.449785] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:29:00.432 [2024-07-23 01:50:13.449808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.432 [2024-07-23 01:50:13.449823] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: 
[nqn.2014-08.org.nvmexpress.discovery] in failed state. 00:29:00.432 [2024-07-23 01:50:13.449866] bdev.c:4968:_tmp_bdev_event_cb: *NOTICE*: Unexpected event type: 0 00:29:00.432 [2024-07-23 01:50:13.450300] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2111c60 (9): Bad file descriptor 00:29:00.432 [2024-07-23 01:50:13.451327] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:29:00.432 [2024-07-23 01:50:13.451350] nvme_ctrlr.c:1136:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:29:00.432 01:50:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:00.432 01:50:13 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:29:00.432 01:50:13 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:01.365 [2024-07-23 01:50:14.452022] bdev_raid.c:3350:raid_bdev_examine_load_sb_cb: *ERROR*: Failed to examine bdev nvme0n1: Input/output error 00:29:01.365 [2024-07-23 01:50:14.452076] vbdev_gpt.c: 468:gpt_bdev_complete: *ERROR*: Gpt: bdev=nvme0n1 io error status=0 00:29:01.625 01:50:14 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:01.625 01:50:14 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:01.625 01:50:14 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:01.625 01:50:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:01.625 01:50:14 -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:01.625 01:50:14 -- common/autotest_common.sh@10 -- # set +x 00:29:01.625 01:50:14 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:01.625 01:50:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:01.625 01:50:14 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:29:01.625 01:50:14 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:01.625 01:50:14 -- 
host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:01.625 01:50:14 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:29:01.625 01:50:14 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:01.625 01:50:14 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:01.625 01:50:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:01.625 01:50:14 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:01.625 01:50:14 -- common/autotest_common.sh@10 -- # set +x 00:29:01.625 01:50:14 -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:01.625 01:50:14 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:01.625 01:50:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:01.625 01:50:14 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:29:01.625 01:50:14 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:02.564 [2024-07-23 01:50:15.507841] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:29:02.564 [2024-07-23 01:50:15.507885] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:29:02.564 [2024-07-23 01:50:15.507924] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:02.564 01:50:15 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:02.564 01:50:15 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:02.564 01:50:15 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:02.564 01:50:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:02.564 01:50:15 -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:02.564 01:50:15 -- common/autotest_common.sh@10 -- # set +x 00:29:02.564 01:50:15 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:02.564 [2024-07-23 01:50:15.594199] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:29:02.564 01:50:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:02.564 01:50:15 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:29:02.564 01:50:15 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:29:02.564 [2024-07-23 01:50:15.657219] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:29:02.564 [2024-07-23 01:50:15.657274] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:29:02.564 [2024-07-23 01:50:15.657318] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:29:02.564 [2024-07-23 01:50:15.657347] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:29:02.564 [2024-07-23 01:50:15.657364] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:29:02.822 [2024-07-23 01:50:15.665819] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x212f2d0 was disconnected and freed. delete nvme_qpair. 
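Editor's note: the teardown below goes through a `killprocess` helper (invoked with pids 3893411 and 3893254): confirm the pid is alive with `kill -0`, read its comm name via `ps --no-headers -o comm=` so a `sudo` wrapper is never signalled directly, then kill and reap. A minimal stand-in, exercised against a throwaway `sleep` child; the real helper has more branches (uname check, killing sudo's child) that are omitted here.

```shell
# Condensed sketch of the killprocess pattern from the log.
killprocess() {
  local pid=$1 name
  kill -0 "$pid" || return 1                 # is the pid alive at all?
  name=$(ps --no-headers -o comm= "$pid")
  [ "$name" = sudo ] && return 1             # real helper targets sudo's child
  kill "$pid"                                # SIGTERM
  wait "$pid" 2>/dev/null                    # reap; nonzero status is expected
  killed="$name"                             # record comm for the caller
  echo "killed pid $pid (comm: $name)"
}

sleep 30 &
killprocess "$!"
```

In the log the comm names come back as `reactor_0` and `reactor_1`, the SPDK reactor threads of the two targets.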
00:29:03.759 01:50:16 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:03.759 01:50:16 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:03.759 01:50:16 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:03.759 01:50:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:03.759 01:50:16 -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:03.759 01:50:16 -- common/autotest_common.sh@10 -- # set +x 00:29:03.759 01:50:16 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:03.759 01:50:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:03.759 01:50:16 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:29:03.759 01:50:16 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:29:03.759 01:50:16 -- host/discovery_remove_ifc.sh@90 -- # killprocess 3893411 00:29:03.759 01:50:16 -- common/autotest_common.sh@926 -- # '[' -z 3893411 ']' 00:29:03.759 01:50:16 -- common/autotest_common.sh@930 -- # kill -0 3893411 00:29:03.759 01:50:16 -- common/autotest_common.sh@931 -- # uname 00:29:03.759 01:50:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:03.759 01:50:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3893411 00:29:03.759 01:50:16 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:03.759 01:50:16 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:03.759 01:50:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3893411' 00:29:03.759 killing process with pid 3893411 00:29:03.759 01:50:16 -- common/autotest_common.sh@945 -- # kill 3893411 00:29:03.759 01:50:16 -- common/autotest_common.sh@950 -- # wait 3893411 00:29:04.017 01:50:16 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:29:04.017 01:50:16 -- nvmf/common.sh@476 -- # nvmfcleanup 00:29:04.017 01:50:16 -- nvmf/common.sh@116 -- # sync 00:29:04.017 01:50:16 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:29:04.017 
01:50:16 -- nvmf/common.sh@119 -- # set +e 00:29:04.017 01:50:16 -- nvmf/common.sh@120 -- # for i in {1..20} 00:29:04.017 01:50:16 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:29:04.017 rmmod nvme_tcp 00:29:04.017 rmmod nvme_fabrics 00:29:04.017 rmmod nvme_keyring 00:29:04.017 01:50:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:29:04.017 01:50:16 -- nvmf/common.sh@123 -- # set -e 00:29:04.017 01:50:16 -- nvmf/common.sh@124 -- # return 0 00:29:04.017 01:50:16 -- nvmf/common.sh@477 -- # '[' -n 3893254 ']' 00:29:04.017 01:50:16 -- nvmf/common.sh@478 -- # killprocess 3893254 00:29:04.017 01:50:16 -- common/autotest_common.sh@926 -- # '[' -z 3893254 ']' 00:29:04.017 01:50:16 -- common/autotest_common.sh@930 -- # kill -0 3893254 00:29:04.017 01:50:16 -- common/autotest_common.sh@931 -- # uname 00:29:04.017 01:50:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:04.017 01:50:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3893254 00:29:04.017 01:50:16 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:29:04.017 01:50:16 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:29:04.017 01:50:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3893254' 00:29:04.017 killing process with pid 3893254 00:29:04.017 01:50:16 -- common/autotest_common.sh@945 -- # kill 3893254 00:29:04.017 01:50:16 -- common/autotest_common.sh@950 -- # wait 3893254 00:29:04.017 [2024-07-23 01:50:16.972770] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334230 is same with the state(5) to be set 00:29:04.017 [2024-07-23 01:50:16.972812] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334230 is same with the state(5) to be set 00:29:04.277 01:50:17 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:29:04.277 01:50:17 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:29:04.277 01:50:17 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:29:04.277 
01:50:17 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:04.277 01:50:17 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:29:04.277 01:50:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:04.277 01:50:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:04.277 01:50:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:06.183 01:50:19 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:29:06.183 00:29:06.183 real 0m18.344s 00:29:06.183 user 0m25.133s 00:29:06.183 sys 0m3.253s 00:29:06.183 01:50:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:06.183 01:50:19 -- common/autotest_common.sh@10 -- # set +x 00:29:06.183 ************************************ 00:29:06.183 END TEST nvmf_discovery_remove_ifc 00:29:06.183 ************************************ 00:29:06.183 01:50:19 -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:29:06.183 01:50:19 -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:29:06.183 01:50:19 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:29:06.183 01:50:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:06.183 01:50:19 -- common/autotest_common.sh@10 -- # set +x 00:29:06.183 ************************************ 00:29:06.183 START TEST nvmf_digest 00:29:06.183 ************************************ 00:29:06.183 01:50:19 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:29:06.442 * Looking for test storage... 
00:29:06.442 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:06.442 01:50:19 -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:06.442 01:50:19 -- nvmf/common.sh@7 -- # uname -s 00:29:06.442 01:50:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:06.442 01:50:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:06.442 01:50:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:06.442 01:50:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:06.442 01:50:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:06.442 01:50:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:06.442 01:50:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:06.442 01:50:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:06.442 01:50:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:06.442 01:50:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:06.442 01:50:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:06.442 01:50:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:06.442 01:50:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:06.442 01:50:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:06.442 01:50:19 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:06.442 01:50:19 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:06.442 01:50:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:06.442 01:50:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:06.442 01:50:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:06.443 01:50:19 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:06.443 01:50:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:06.443 01:50:19 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:06.443 01:50:19 -- paths/export.sh@5 -- # export PATH 00:29:06.443 01:50:19 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:06.443 01:50:19 -- nvmf/common.sh@46 -- # : 0 00:29:06.443 01:50:19 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:29:06.443 01:50:19 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:29:06.443 01:50:19 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:29:06.443 01:50:19 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:06.443 01:50:19 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:06.443 01:50:19 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:29:06.443 01:50:19 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:29:06.443 01:50:19 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:29:06.443 01:50:19 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:29:06.443 01:50:19 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:29:06.443 01:50:19 -- host/digest.sh@16 -- # runtime=2 00:29:06.443 01:50:19 -- host/digest.sh@130 -- # [[ tcp != \t\c\p ]] 00:29:06.443 01:50:19 -- host/digest.sh@132 -- # nvmftestinit 00:29:06.443 01:50:19 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:29:06.443 01:50:19 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:06.443 01:50:19 -- nvmf/common.sh@436 -- # prepare_net_devs 00:29:06.443 01:50:19 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:29:06.443 01:50:19 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:29:06.443 01:50:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:06.443 01:50:19 -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:29:06.443 01:50:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:06.443 01:50:19 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:29:06.443 01:50:19 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:29:06.443 01:50:19 -- nvmf/common.sh@284 -- # xtrace_disable 00:29:06.443 01:50:19 -- common/autotest_common.sh@10 -- # set +x 00:29:08.349 01:50:21 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:29:08.349 01:50:21 -- nvmf/common.sh@290 -- # pci_devs=() 00:29:08.349 01:50:21 -- nvmf/common.sh@290 -- # local -a pci_devs 00:29:08.349 01:50:21 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:29:08.349 01:50:21 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:29:08.349 01:50:21 -- nvmf/common.sh@292 -- # pci_drivers=() 00:29:08.349 01:50:21 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:29:08.349 01:50:21 -- nvmf/common.sh@294 -- # net_devs=() 00:29:08.349 01:50:21 -- nvmf/common.sh@294 -- # local -ga net_devs 00:29:08.349 01:50:21 -- nvmf/common.sh@295 -- # e810=() 00:29:08.349 01:50:21 -- nvmf/common.sh@295 -- # local -ga e810 00:29:08.349 01:50:21 -- nvmf/common.sh@296 -- # x722=() 00:29:08.349 01:50:21 -- nvmf/common.sh@296 -- # local -ga x722 00:29:08.349 01:50:21 -- nvmf/common.sh@297 -- # mlx=() 00:29:08.349 01:50:21 -- nvmf/common.sh@297 -- # local -ga mlx 00:29:08.349 01:50:21 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:08.349 01:50:21 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:08.349 01:50:21 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:08.349 01:50:21 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:08.349 01:50:21 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:08.349 01:50:21 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:08.349 01:50:21 -- nvmf/common.sh@311 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:08.349 01:50:21 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:08.349 01:50:21 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:08.349 01:50:21 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:08.349 01:50:21 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:08.349 01:50:21 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:29:08.349 01:50:21 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:29:08.349 01:50:21 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:29:08.349 01:50:21 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:29:08.349 01:50:21 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:29:08.349 01:50:21 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:29:08.349 01:50:21 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:08.349 01:50:21 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:08.349 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:08.349 01:50:21 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:08.349 01:50:21 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:08.349 01:50:21 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:08.349 01:50:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:08.349 01:50:21 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:08.349 01:50:21 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:08.349 01:50:21 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:08.349 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:08.349 01:50:21 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:08.349 01:50:21 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:08.349 01:50:21 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:08.349 01:50:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:08.349 01:50:21 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 
00:29:08.349 01:50:21 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:29:08.349 01:50:21 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:29:08.349 01:50:21 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:29:08.349 01:50:21 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:08.349 01:50:21 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:08.349 01:50:21 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:08.349 01:50:21 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:08.349 01:50:21 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:08.349 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:08.349 01:50:21 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:08.349 01:50:21 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:08.349 01:50:21 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:08.349 01:50:21 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:08.350 01:50:21 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:08.350 01:50:21 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:08.350 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:08.350 01:50:21 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:08.350 01:50:21 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:29:08.350 01:50:21 -- nvmf/common.sh@402 -- # is_hw=yes 00:29:08.350 01:50:21 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:29:08.350 01:50:21 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:29:08.350 01:50:21 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:29:08.350 01:50:21 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:08.350 01:50:21 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:08.350 01:50:21 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:08.350 01:50:21 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:29:08.350 01:50:21 -- nvmf/common.sh@235 -- 
# NVMF_TARGET_INTERFACE=cvl_0_0 00:29:08.350 01:50:21 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:08.350 01:50:21 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:29:08.350 01:50:21 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:08.350 01:50:21 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:08.350 01:50:21 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:29:08.350 01:50:21 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:29:08.350 01:50:21 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:29:08.350 01:50:21 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:08.350 01:50:21 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:08.350 01:50:21 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:08.350 01:50:21 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:29:08.350 01:50:21 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:08.608 01:50:21 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:08.608 01:50:21 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:08.608 01:50:21 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:29:08.608 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:08.608 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms 00:29:08.608 00:29:08.608 --- 10.0.0.2 ping statistics --- 00:29:08.608 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:08.608 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:29:08.608 01:50:21 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:08.608 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:08.608 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:29:08.608 00:29:08.608 --- 10.0.0.1 ping statistics --- 00:29:08.608 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:08.608 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:29:08.608 01:50:21 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:08.608 01:50:21 -- nvmf/common.sh@410 -- # return 0 00:29:08.608 01:50:21 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:29:08.608 01:50:21 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:08.608 01:50:21 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:29:08.608 01:50:21 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:29:08.608 01:50:21 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:08.608 01:50:21 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:29:08.608 01:50:21 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:29:08.608 01:50:21 -- host/digest.sh@134 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:08.608 01:50:21 -- host/digest.sh@135 -- # run_test nvmf_digest_clean run_digest 00:29:08.608 01:50:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:08.608 01:50:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:08.608 01:50:21 -- common/autotest_common.sh@10 -- # set +x 00:29:08.608 ************************************ 00:29:08.608 START TEST nvmf_digest_clean 00:29:08.608 ************************************ 00:29:08.608 01:50:21 -- common/autotest_common.sh@1104 -- # run_digest 00:29:08.608 01:50:21 -- host/digest.sh@119 -- # nvmfappstart --wait-for-rpc 00:29:08.608 01:50:21 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:29:08.609 01:50:21 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:08.609 01:50:21 -- common/autotest_common.sh@10 -- # set +x 00:29:08.609 01:50:21 -- nvmf/common.sh@469 -- # nvmfpid=3896929 00:29:08.609 01:50:21 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:29:08.609 01:50:21 -- nvmf/common.sh@470 -- # waitforlisten 3896929 00:29:08.609 01:50:21 -- common/autotest_common.sh@819 -- # '[' -z 3896929 ']' 00:29:08.609 01:50:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:08.609 01:50:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:08.609 01:50:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:08.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:08.609 01:50:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:08.609 01:50:21 -- common/autotest_common.sh@10 -- # set +x 00:29:08.609 [2024-07-23 01:50:21.548994] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:29:08.609 [2024-07-23 01:50:21.549092] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:08.609 EAL: No free 2048 kB hugepages reported on node 1 00:29:08.609 [2024-07-23 01:50:21.618335] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:08.867 [2024-07-23 01:50:21.709839] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:08.867 [2024-07-23 01:50:21.710040] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:08.867 [2024-07-23 01:50:21.710068] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:08.867 [2024-07-23 01:50:21.710080] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:08.867 [2024-07-23 01:50:21.710109] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:08.867 01:50:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:08.867 01:50:21 -- common/autotest_common.sh@852 -- # return 0 00:29:08.867 01:50:21 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:29:08.867 01:50:21 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:08.867 01:50:21 -- common/autotest_common.sh@10 -- # set +x 00:29:08.867 01:50:21 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:08.867 01:50:21 -- host/digest.sh@120 -- # common_target_config 00:29:08.867 01:50:21 -- host/digest.sh@43 -- # rpc_cmd 00:29:08.867 01:50:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:08.867 01:50:21 -- common/autotest_common.sh@10 -- # set +x 00:29:08.867 null0 00:29:08.867 [2024-07-23 01:50:21.889118] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:08.867 [2024-07-23 01:50:21.913330] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:08.867 01:50:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:08.867 01:50:21 -- host/digest.sh@122 -- # run_bperf randread 4096 128 00:29:08.867 01:50:21 -- host/digest.sh@77 -- # local rw bs qd 00:29:08.867 01:50:21 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:08.867 01:50:21 -- host/digest.sh@80 -- # rw=randread 00:29:08.867 01:50:21 -- host/digest.sh@80 -- # bs=4096 00:29:08.867 01:50:21 -- host/digest.sh@80 -- # qd=128 00:29:08.867 01:50:21 -- host/digest.sh@82 -- # bperfpid=3897075 00:29:08.867 01:50:21 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:29:08.867 01:50:21 -- host/digest.sh@83 -- # waitforlisten 3897075 /var/tmp/bperf.sock 00:29:08.867 01:50:21 -- 
common/autotest_common.sh@819 -- # '[' -z 3897075 ']' 00:29:08.867 01:50:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:08.867 01:50:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:08.867 01:50:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:08.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:08.867 01:50:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:08.867 01:50:21 -- common/autotest_common.sh@10 -- # set +x 00:29:08.867 [2024-07-23 01:50:21.957957] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:29:08.867 [2024-07-23 01:50:21.958017] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3897075 ] 00:29:09.126 EAL: No free 2048 kB hugepages reported on node 1 00:29:09.126 [2024-07-23 01:50:22.017500] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:09.126 [2024-07-23 01:50:22.102294] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:09.126 01:50:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:09.126 01:50:22 -- common/autotest_common.sh@852 -- # return 0 00:29:09.126 01:50:22 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:29:09.126 01:50:22 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:29:09.126 01:50:22 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:09.695 01:50:22 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:09.695 01:50:22 -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:09.953 nvme0n1 00:29:09.953 01:50:23 -- host/digest.sh@91 -- # bperf_py perform_tests 00:29:09.953 01:50:23 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:10.212 Running I/O for 2 seconds... 00:29:12.113 00:29:12.113 Latency(us) 00:29:12.113 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:12.113 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:29:12.113 nvme0n1 : 2.04 16437.79 64.21 0.00 0.00 7627.69 2548.62 47185.92 00:29:12.113 =================================================================================================================== 00:29:12.113 Total : 16437.79 64.21 0.00 0.00 7627.69 2548.62 47185.92 00:29:12.113 0 00:29:12.113 01:50:25 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:29:12.113 01:50:25 -- host/digest.sh@92 -- # get_accel_stats 00:29:12.113 01:50:25 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:12.113 01:50:25 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:12.113 | select(.opcode=="crc32c") 00:29:12.113 | "\(.module_name) \(.executed)"' 00:29:12.113 01:50:25 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:12.372 01:50:25 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:29:12.372 01:50:25 -- host/digest.sh@93 -- # exp_module=software 00:29:12.372 01:50:25 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:29:12.372 01:50:25 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:12.372 01:50:25 -- host/digest.sh@97 -- # killprocess 3897075 00:29:12.372 01:50:25 -- common/autotest_common.sh@926 -- # '[' -z 3897075 ']' 00:29:12.372 01:50:25 -- 
common/autotest_common.sh@930 -- # kill -0 3897075 00:29:12.372 01:50:25 -- common/autotest_common.sh@931 -- # uname 00:29:12.372 01:50:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:12.372 01:50:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3897075 00:29:12.372 01:50:25 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:29:12.372 01:50:25 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:29:12.372 01:50:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3897075' 00:29:12.372 killing process with pid 3897075 00:29:12.372 01:50:25 -- common/autotest_common.sh@945 -- # kill 3897075 00:29:12.372 Received shutdown signal, test time was about 2.000000 seconds 00:29:12.372 00:29:12.372 Latency(us) 00:29:12.372 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:12.372 =================================================================================================================== 00:29:12.372 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:12.372 01:50:25 -- common/autotest_common.sh@950 -- # wait 3897075 00:29:12.631 01:50:25 -- host/digest.sh@123 -- # run_bperf randread 131072 16 00:29:12.631 01:50:25 -- host/digest.sh@77 -- # local rw bs qd 00:29:12.631 01:50:25 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:12.631 01:50:25 -- host/digest.sh@80 -- # rw=randread 00:29:12.631 01:50:25 -- host/digest.sh@80 -- # bs=131072 00:29:12.631 01:50:25 -- host/digest.sh@80 -- # qd=16 00:29:12.631 01:50:25 -- host/digest.sh@82 -- # bperfpid=3897494 00:29:12.631 01:50:25 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:29:12.631 01:50:25 -- host/digest.sh@83 -- # waitforlisten 3897494 /var/tmp/bperf.sock 00:29:12.631 01:50:25 -- common/autotest_common.sh@819 -- # '[' -z 3897494 ']' 00:29:12.631 01:50:25 -- 
common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:12.631 01:50:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:12.631 01:50:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:12.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:12.631 01:50:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:12.631 01:50:25 -- common/autotest_common.sh@10 -- # set +x 00:29:12.631 [2024-07-23 01:50:25.714527] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:29:12.631 [2024-07-23 01:50:25.714626] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3897494 ] 00:29:12.631 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:12.631 Zero copy mechanism will not be used. 
00:29:12.890 EAL: No free 2048 kB hugepages reported on node 1 00:29:12.890 [2024-07-23 01:50:25.783764] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:12.890 [2024-07-23 01:50:25.872602] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:12.890 01:50:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:12.890 01:50:25 -- common/autotest_common.sh@852 -- # return 0 00:29:12.890 01:50:25 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:29:12.890 01:50:25 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:29:12.890 01:50:25 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:13.457 01:50:26 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:13.457 01:50:26 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:13.457 nvme0n1 00:29:13.715 01:50:26 -- host/digest.sh@91 -- # bperf_py perform_tests 00:29:13.715 01:50:26 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:13.715 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:13.715 Zero copy mechanism will not be used. 00:29:13.715 Running I/O for 2 seconds... 
00:29:15.616 00:29:15.616 Latency(us) 00:29:15.616 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:15.616 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:29:15.616 nvme0n1 : 2.00 2270.88 283.86 0.00 0.00 7041.66 6650.69 17767.54 00:29:15.616 =================================================================================================================== 00:29:15.616 Total : 2270.88 283.86 0.00 0.00 7041.66 6650.69 17767.54 00:29:15.616 0 00:29:15.616 01:50:28 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:29:15.616 01:50:28 -- host/digest.sh@92 -- # get_accel_stats 00:29:15.616 01:50:28 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:15.616 01:50:28 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:15.616 01:50:28 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:15.616 | select(.opcode=="crc32c") 00:29:15.616 | "\(.module_name) \(.executed)"' 00:29:15.875 01:50:28 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:29:15.875 01:50:28 -- host/digest.sh@93 -- # exp_module=software 00:29:15.875 01:50:28 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:29:15.875 01:50:28 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:15.875 01:50:28 -- host/digest.sh@97 -- # killprocess 3897494 00:29:15.875 01:50:28 -- common/autotest_common.sh@926 -- # '[' -z 3897494 ']' 00:29:15.875 01:50:28 -- common/autotest_common.sh@930 -- # kill -0 3897494 00:29:15.875 01:50:28 -- common/autotest_common.sh@931 -- # uname 00:29:15.875 01:50:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:15.875 01:50:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3897494 00:29:15.875 01:50:28 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:29:15.875 01:50:28 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:29:15.875 01:50:28 -- 
common/autotest_common.sh@944 -- # echo 'killing process with pid 3897494' 00:29:15.875 killing process with pid 3897494 00:29:15.875 01:50:28 -- common/autotest_common.sh@945 -- # kill 3897494 00:29:15.875 Received shutdown signal, test time was about 2.000000 seconds 00:29:15.875 00:29:15.875 Latency(us) 00:29:15.875 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:15.875 =================================================================================================================== 00:29:15.875 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:15.875 01:50:28 -- common/autotest_common.sh@950 -- # wait 3897494 00:29:16.134 01:50:29 -- host/digest.sh@124 -- # run_bperf randwrite 4096 128 00:29:16.134 01:50:29 -- host/digest.sh@77 -- # local rw bs qd 00:29:16.134 01:50:29 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:16.134 01:50:29 -- host/digest.sh@80 -- # rw=randwrite 00:29:16.134 01:50:29 -- host/digest.sh@80 -- # bs=4096 00:29:16.134 01:50:29 -- host/digest.sh@80 -- # qd=128 00:29:16.134 01:50:29 -- host/digest.sh@82 -- # bperfpid=3897915 00:29:16.134 01:50:29 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:29:16.134 01:50:29 -- host/digest.sh@83 -- # waitforlisten 3897915 /var/tmp/bperf.sock 00:29:16.134 01:50:29 -- common/autotest_common.sh@819 -- # '[' -z 3897915 ']' 00:29:16.134 01:50:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:16.134 01:50:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:16.134 01:50:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:16.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:29:16.134 01:50:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:16.134 01:50:29 -- common/autotest_common.sh@10 -- # set +x 00:29:16.134 [2024-07-23 01:50:29.205064] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:29:16.134 [2024-07-23 01:50:29.205158] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3897915 ] 00:29:16.392 EAL: No free 2048 kB hugepages reported on node 1 00:29:16.392 [2024-07-23 01:50:29.264697] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:16.392 [2024-07-23 01:50:29.345498] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:16.392 01:50:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:16.392 01:50:29 -- common/autotest_common.sh@852 -- # return 0 00:29:16.392 01:50:29 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:29:16.392 01:50:29 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:29:16.392 01:50:29 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:16.650 01:50:29 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:16.650 01:50:29 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:17.220 nvme0n1 00:29:17.220 01:50:30 -- host/digest.sh@91 -- # bperf_py perform_tests 00:29:17.220 01:50:30 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:17.220 Running I/O for 2 seconds... 
00:29:19.159 00:29:19.159 Latency(us) 00:29:19.159 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:19.159 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:19.159 nvme0n1 : 2.01 20179.34 78.83 0.00 0.00 6329.41 3009.80 15534.46 00:29:19.159 =================================================================================================================== 00:29:19.159 Total : 20179.34 78.83 0.00 0.00 6329.41 3009.80 15534.46 00:29:19.417 0 00:29:19.417 01:50:32 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:29:19.417 01:50:32 -- host/digest.sh@92 -- # get_accel_stats 00:29:19.417 01:50:32 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:19.417 01:50:32 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:19.417 | select(.opcode=="crc32c") 00:29:19.417 | "\(.module_name) \(.executed)"' 00:29:19.417 01:50:32 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:19.417 01:50:32 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:29:19.417 01:50:32 -- host/digest.sh@93 -- # exp_module=software 00:29:19.417 01:50:32 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:29:19.417 01:50:32 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:19.417 01:50:32 -- host/digest.sh@97 -- # killprocess 3897915 00:29:19.417 01:50:32 -- common/autotest_common.sh@926 -- # '[' -z 3897915 ']' 00:29:19.417 01:50:32 -- common/autotest_common.sh@930 -- # kill -0 3897915 00:29:19.417 01:50:32 -- common/autotest_common.sh@931 -- # uname 00:29:19.417 01:50:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:19.417 01:50:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3897915 00:29:19.677 01:50:32 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:29:19.677 01:50:32 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:29:19.677 01:50:32 -- 
common/autotest_common.sh@944 -- # echo 'killing process with pid 3897915' 00:29:19.677 killing process with pid 3897915 00:29:19.677 01:50:32 -- common/autotest_common.sh@945 -- # kill 3897915 00:29:19.677 Received shutdown signal, test time was about 2.000000 seconds 00:29:19.677 00:29:19.677 Latency(us) 00:29:19.677 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:19.677 =================================================================================================================== 00:29:19.677 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:19.677 01:50:32 -- common/autotest_common.sh@950 -- # wait 3897915 00:29:19.677 01:50:32 -- host/digest.sh@125 -- # run_bperf randwrite 131072 16 00:29:19.677 01:50:32 -- host/digest.sh@77 -- # local rw bs qd 00:29:19.677 01:50:32 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:19.677 01:50:32 -- host/digest.sh@80 -- # rw=randwrite 00:29:19.677 01:50:32 -- host/digest.sh@80 -- # bs=131072 00:29:19.677 01:50:32 -- host/digest.sh@80 -- # qd=16 00:29:19.677 01:50:32 -- host/digest.sh@82 -- # bperfpid=3898344 00:29:19.677 01:50:32 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:29:19.677 01:50:32 -- host/digest.sh@83 -- # waitforlisten 3898344 /var/tmp/bperf.sock 00:29:19.677 01:50:32 -- common/autotest_common.sh@819 -- # '[' -z 3898344 ']' 00:29:19.677 01:50:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:19.677 01:50:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:19.677 01:50:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:19.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
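The `waitforlisten` helper pauses here until the new bdevperf instance is up and its RPC socket `/var/tmp/bperf.sock` accepts connections. The helper's internals are not shown in this trace; below is a minimal sketch of such a wait loop, where the poll interval, retry budget, and the fact that it only checks for the socket node (rather than probing with a real RPC) are all assumptions:

```shell
# Sketch of a wait-for-UNIX-socket loop in the spirit of waitforlisten.
# The 0.1 s poll interval and default retry budget are assumptions, not
# the harness's actual values.
wait_for_sock() {
    local sock=$1 retries=${2:-100}
    while ((retries-- > 0)); do
        # -S is true only when the path exists and is a socket
        [[ -S $sock ]] && return 0
        sleep 0.1
    done
    echo "timed out waiting for $sock" >&2
    return 1
}
```

The real helper additionally verifies the server responds (e.g. via an rpc.py call) before declaring the process ready; this sketch stops at the socket node existing.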
00:29:19.677 01:50:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:19.677 01:50:32 -- common/autotest_common.sh@10 -- # set +x 00:29:19.935 [2024-07-23 01:50:32.804344] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:29:19.935 [2024-07-23 01:50:32.804420] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3898344 ] 00:29:19.935 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:19.935 Zero copy mechanism will not be used. 00:29:19.935 EAL: No free 2048 kB hugepages reported on node 1 00:29:19.935 [2024-07-23 01:50:32.867770] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:19.935 [2024-07-23 01:50:32.954381] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:19.935 01:50:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:19.935 01:50:33 -- common/autotest_common.sh@852 -- # return 0 00:29:19.935 01:50:33 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:29:19.935 01:50:33 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:29:19.935 01:50:33 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:20.502 01:50:33 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:20.502 01:50:33 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:20.760 nvme0n1 00:29:20.760 01:50:33 -- host/digest.sh@91 -- # bperf_py perform_tests 00:29:20.760 01:50:33 -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:20.760 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:20.760 Zero copy mechanism will not be used. 00:29:20.760 Running I/O for 2 seconds... 00:29:23.295 00:29:23.295 Latency(us) 00:29:23.295 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:23.295 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:29:23.295 nvme0n1 : 2.01 2001.02 250.13 0.00 0.00 7975.77 2779.21 11311.03 00:29:23.295 =================================================================================================================== 00:29:23.295 Total : 2001.02 250.13 0.00 0.00 7975.77 2779.21 11311.03 00:29:23.295 0 00:29:23.295 01:50:35 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:29:23.295 01:50:35 -- host/digest.sh@92 -- # get_accel_stats 00:29:23.296 01:50:35 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:23.296 01:50:35 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:23.296 01:50:35 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:23.296 | select(.opcode=="crc32c") 00:29:23.296 | "\(.module_name) \(.executed)"' 00:29:23.296 01:50:36 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:29:23.296 01:50:36 -- host/digest.sh@93 -- # exp_module=software 00:29:23.296 01:50:36 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:29:23.296 01:50:36 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:23.296 01:50:36 -- host/digest.sh@97 -- # killprocess 3898344 00:29:23.296 01:50:36 -- common/autotest_common.sh@926 -- # '[' -z 3898344 ']' 00:29:23.296 01:50:36 -- common/autotest_common.sh@930 -- # kill -0 3898344 00:29:23.296 01:50:36 -- common/autotest_common.sh@931 -- # uname 00:29:23.296 01:50:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:23.296 
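The `get_accel_stats` step above pipes `accel_get_stats` output through a jq filter to extract which accel module executed the crc32c operations and how many times. The filter below is the one the trace shows at host/digest.sh@37; the JSON payload is a hand-written illustration of the response shape, not output captured from this run:

```shell
# The digest.sh accel-stats filter, run against an illustrative
# accel_get_stats-shaped payload (the numbers are made up).
stats='{"operations":[
  {"opcode":"crc32c","module_name":"software","executed":20179},
  {"opcode":"copy","module_name":"software","executed":3}]}'
echo "$stats" | jq -rc '.operations[]
    | select(.opcode=="crc32c")
    | "\(.module_name) \(.executed)"'
```

digest.sh then `read`s the two emitted fields into `acc_module` and `acc_executed` and asserts, as seen at host/digest.sh@93-95, that the expected module handled a nonzero number of operations.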
01:50:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3898344 00:29:23.296 01:50:36 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:29:23.296 01:50:36 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:29:23.296 01:50:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3898344' 00:29:23.296 killing process with pid 3898344 00:29:23.296 01:50:36 -- common/autotest_common.sh@945 -- # kill 3898344 00:29:23.296 Received shutdown signal, test time was about 2.000000 seconds 00:29:23.296 00:29:23.296 Latency(us) 00:29:23.296 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:23.296 =================================================================================================================== 00:29:23.296 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:23.296 01:50:36 -- common/autotest_common.sh@950 -- # wait 3898344 00:29:23.296 01:50:36 -- host/digest.sh@126 -- # killprocess 3896929 00:29:23.296 01:50:36 -- common/autotest_common.sh@926 -- # '[' -z 3896929 ']' 00:29:23.296 01:50:36 -- common/autotest_common.sh@930 -- # kill -0 3896929 00:29:23.296 01:50:36 -- common/autotest_common.sh@931 -- # uname 00:29:23.296 01:50:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:23.296 01:50:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3896929 00:29:23.296 01:50:36 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:23.296 01:50:36 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:23.296 01:50:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3896929' 00:29:23.296 killing process with pid 3896929 00:29:23.296 01:50:36 -- common/autotest_common.sh@945 -- # kill 3896929 00:29:23.296 01:50:36 -- common/autotest_common.sh@950 -- # wait 3896929 00:29:23.554 00:29:23.554 real 0m15.025s 00:29:23.554 user 0m29.853s 00:29:23.554 sys 0m4.131s 00:29:23.554 01:50:36 -- common/autotest_common.sh@1105 
-- # xtrace_disable 00:29:23.554 01:50:36 -- common/autotest_common.sh@10 -- # set +x 00:29:23.554 ************************************ 00:29:23.554 END TEST nvmf_digest_clean 00:29:23.554 ************************************ 00:29:23.554 01:50:36 -- host/digest.sh@136 -- # run_test nvmf_digest_error run_digest_error 00:29:23.554 01:50:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:23.554 01:50:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:23.554 01:50:36 -- common/autotest_common.sh@10 -- # set +x 00:29:23.554 ************************************ 00:29:23.554 START TEST nvmf_digest_error 00:29:23.554 ************************************ 00:29:23.554 01:50:36 -- common/autotest_common.sh@1104 -- # run_digest_error 00:29:23.554 01:50:36 -- host/digest.sh@101 -- # nvmfappstart --wait-for-rpc 00:29:23.554 01:50:36 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:29:23.554 01:50:36 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:23.554 01:50:36 -- common/autotest_common.sh@10 -- # set +x 00:29:23.554 01:50:36 -- nvmf/common.sh@469 -- # nvmfpid=3898902 00:29:23.554 01:50:36 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:29:23.554 01:50:36 -- nvmf/common.sh@470 -- # waitforlisten 3898902 00:29:23.554 01:50:36 -- common/autotest_common.sh@819 -- # '[' -z 3898902 ']' 00:29:23.554 01:50:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:23.554 01:50:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:23.554 01:50:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:23.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
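`killprocess`, invoked above for each bdevperf instance and for the nvmf target, probes the pid, reports the process's comm name, and then signals it. A condensed sketch of that flow as visible in the trace (the real helper in autotest_common.sh adds sudo handling and retry logic omitted here):

```shell
# Condensed killprocess flow: verify the pid is alive, log its comm
# name, then terminate it and reap the child.
killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 1   # -0 probes without signaling
    local name
    name=$(ps --no-headers -o comm= "$pid")
    echo "killing process with pid $pid ($name)"
    kill "$pid"
    wait "$pid" 2>/dev/null || true
}
```

The comm-name check is what lets the trace distinguish `reactor_0`/`reactor_1` targets from processes it must not kill (the `'[' reactor_1 = sudo ']'` test above).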
00:29:23.554 01:50:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:23.554 01:50:36 -- common/autotest_common.sh@10 -- # set +x 00:29:23.554 [2024-07-23 01:50:36.607247] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:29:23.554 [2024-07-23 01:50:36.607349] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:23.554 EAL: No free 2048 kB hugepages reported on node 1 00:29:23.814 [2024-07-23 01:50:36.675886] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:23.814 [2024-07-23 01:50:36.760550] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:23.814 [2024-07-23 01:50:36.760732] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:23.814 [2024-07-23 01:50:36.760755] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:23.814 [2024-07-23 01:50:36.760770] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
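Every RPC in this test flows through thin wrappers: `rpc_cmd` targets the nvmf target on `/var/tmp/spdk.sock`, while `bperf_rpc` (host/digest.sh@18, visible throughout the trace) targets bdevperf on `/var/tmp/bperf.sock`. A sketch of the `bperf_rpc` shape; the `RPC_PY` override is an assumption added purely so the wrapper can be demonstrated without an SPDK tree:

```shell
# bperf_rpc as seen in the trace: prepend the bperf UNIX socket to every
# rpc.py invocation. RPC_PY defaults to the in-tree script path and is
# made overridable here only for standalone demonstration.
rootdir=${rootdir:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
RPC_PY=${RPC_PY:-$rootdir/scripts/rpc.py}
bperf_rpc() {
    "$RPC_PY" -s /var/tmp/bperf.sock "$@"
}
```

Against a live bdevperf, `bperf_rpc framework_start_init` issues exactly the call the log shows at host/digest.sh@86.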
00:29:23.814 [2024-07-23 01:50:36.760802] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:23.814 01:50:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:23.814 01:50:36 -- common/autotest_common.sh@852 -- # return 0 00:29:23.814 01:50:36 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:29:23.814 01:50:36 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:23.814 01:50:36 -- common/autotest_common.sh@10 -- # set +x 00:29:23.814 01:50:36 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:23.814 01:50:36 -- host/digest.sh@103 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:29:23.814 01:50:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:23.814 01:50:36 -- common/autotest_common.sh@10 -- # set +x 00:29:23.814 [2024-07-23 01:50:36.825361] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:29:23.814 01:50:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:23.814 01:50:36 -- host/digest.sh@104 -- # common_target_config 00:29:23.814 01:50:36 -- host/digest.sh@43 -- # rpc_cmd 00:29:23.814 01:50:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:23.814 01:50:36 -- common/autotest_common.sh@10 -- # set +x 00:29:24.073 null0 00:29:24.073 [2024-07-23 01:50:36.945231] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:24.073 [2024-07-23 01:50:36.969410] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:24.073 01:50:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:24.073 01:50:36 -- host/digest.sh@107 -- # run_bperf_err randread 4096 128 00:29:24.073 01:50:36 -- host/digest.sh@54 -- # local rw bs qd 00:29:24.073 01:50:36 -- host/digest.sh@56 -- # rw=randread 00:29:24.073 01:50:36 -- host/digest.sh@56 -- # bs=4096 00:29:24.073 01:50:36 -- host/digest.sh@56 -- # qd=128 00:29:24.073 01:50:36 -- 
host/digest.sh@58 -- # bperfpid=3898929 00:29:24.073 01:50:36 -- host/digest.sh@60 -- # waitforlisten 3898929 /var/tmp/bperf.sock 00:29:24.073 01:50:36 -- common/autotest_common.sh@819 -- # '[' -z 3898929 ']' 00:29:24.073 01:50:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:24.073 01:50:36 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:29:24.073 01:50:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:24.073 01:50:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:24.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:24.073 01:50:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:24.073 01:50:36 -- common/autotest_common.sh@10 -- # set +x 00:29:24.073 [2024-07-23 01:50:37.015152] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:29:24.073 [2024-07-23 01:50:37.015226] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3898929 ] 00:29:24.073 EAL: No free 2048 kB hugepages reported on node 1 00:29:24.073 [2024-07-23 01:50:37.072799] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:24.073 [2024-07-23 01:50:37.156376] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:25.009 01:50:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:25.009 01:50:37 -- common/autotest_common.sh@852 -- # return 0 00:29:25.009 01:50:37 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:25.009 01:50:37 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:25.267 01:50:38 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:25.267 01:50:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:25.267 01:50:38 -- common/autotest_common.sh@10 -- # set +x 00:29:25.267 01:50:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:25.267 01:50:38 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:25.267 01:50:38 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:25.526 nvme0n1 00:29:25.786 01:50:38 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:29:25.786 01:50:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:25.786 01:50:38 -- common/autotest_common.sh@10 -- # 
set +x 00:29:25.786 01:50:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:25.786 01:50:38 -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:25.786 01:50:38 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:25.786 Running I/O for 2 seconds... 00:29:25.786 [2024-07-23 01:50:38.759137] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a8f10) 00:29:25.786 [2024-07-23 01:50:38.759195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:19908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.786 [2024-07-23 01:50:38.759217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.786 [2024-07-23 01:50:38.775506] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a8f10) 00:29:25.786 [2024-07-23 01:50:38.775543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8651 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.786 [2024-07-23 01:50:38.775564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.786 [2024-07-23 01:50:38.792491] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a8f10) 00:29:25.786 [2024-07-23 01:50:38.792528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:612 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.786 [2024-07-23 01:50:38.792560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.786 [2024-07-23 01:50:38.807769] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a8f10) 00:29:25.786 [2024-07-23 01:50:38.807800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:14905 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.786 [2024-07-23 01:50:38.807817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.786 [2024-07-23 01:50:38.822793] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a8f10) 00:29:25.786 [2024-07-23 01:50:38.822822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:13597 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.786 [2024-07-23 01:50:38.822838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.786 [2024-07-23 01:50:38.838389] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a8f10) 00:29:25.786 [2024-07-23 01:50:38.838426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:2895 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.786 [2024-07-23 01:50:38.838446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:25.786 [2024-07-23 01:50:38.854921] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a8f10) 00:29:25.786 [2024-07-23 01:50:38.854977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:16491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.786 [2024-07-23 01:50:38.854998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:29:25.786 [2024-07-23 01:50:38.869903] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a8f10) 00:29:25.786 [2024-07-23 01:50:38.869933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:18175 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:25.786 [2024-07-23 01:50:38.869951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.046 [2024-07-23 01:50:38.885788] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a8f10) 00:29:26.046 [2024-07-23 01:50:38.885821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:24730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.046 [2024-07-23 01:50:38.885838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.046 [2024-07-23 01:50:38.902765] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a8f10) 00:29:26.046 [2024-07-23 01:50:38.902796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:24892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.046 [2024-07-23 01:50:38.902814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.046 [2024-07-23 01:50:38.913915] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a8f10) 00:29:26.046 [2024-07-23 01:50:38.913962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:4087 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.046 [2024-07-23 01:50:38.913982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.046 [2024-07-23 01:50:38.930229] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a8f10) 00:29:26.046 [2024-07-23 01:50:38.930268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:7603 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.046 [2024-07-23 01:50:38.930285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.046 [2024-07-23 01:50:38.947529] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a8f10) 00:29:26.046 [2024-07-23 01:50:38.947565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:25079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.046 [2024-07-23 01:50:38.947585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.046 [2024-07-23 01:50:38.964511] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a8f10) 00:29:26.046 [2024-07-23 01:50:38.964547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8311 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.046 [2024-07-23 01:50:38.964566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.046 [2024-07-23 01:50:38.981388] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a8f10) 00:29:26.046 [2024-07-23 01:50:38.981423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:18806 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.046 [2024-07-23 
01:50:38.981443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.046 [2024-07-23 01:50:38.997843] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a8f10) 00:29:26.046 [2024-07-23 01:50:38.997872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:19891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.046 [2024-07-23 01:50:38.997887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.046 [2024-07-23 01:50:39.014181] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a8f10) 00:29:26.046 [2024-07-23 01:50:39.014216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18906 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.046 [2024-07-23 01:50:39.014235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.046 [2024-07-23 01:50:39.030905] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a8f10) 00:29:26.046 [2024-07-23 01:50:39.030934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:9302 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.046 [2024-07-23 01:50:39.030965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.046 [2024-07-23 01:50:39.046874] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a8f10) 00:29:26.046 [2024-07-23 01:50:39.046905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:19275 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.046 [2024-07-23 01:50:39.046922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.046 [2024-07-23 01:50:39.057607] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a8f10) 00:29:26.046 [2024-07-23 01:50:39.057648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:22473 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.046 [2024-07-23 01:50:39.057682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.046 [2024-07-23 01:50:39.073681] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a8f10) 00:29:26.046 [2024-07-23 01:50:39.073711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:6623 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.046 [2024-07-23 01:50:39.073743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.046 [2024-07-23 01:50:39.089504] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a8f10) 00:29:26.046 [2024-07-23 01:50:39.089539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:2205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.046 [2024-07-23 01:50:39.089558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.046 [2024-07-23 01:50:39.106674] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a8f10) 00:29:26.046 [2024-07-23 01:50:39.106702] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:24521 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.046 [2024-07-23 01:50:39.106718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.046 [2024-07-23 01:50:39.123725] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a8f10) 00:29:26.046 [2024-07-23 01:50:39.123754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:1079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.046 [2024-07-23 01:50:39.123770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.046 [2024-07-23 01:50:39.141010] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a8f10) 00:29:26.046 [2024-07-23 01:50:39.141044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:24207 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.046 [2024-07-23 01:50:39.141064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.305 [2024-07-23 01:50:39.157391] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a8f10) 00:29:26.305 [2024-07-23 01:50:39.157426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:16259 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.305 [2024-07-23 01:50:39.157445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.305 [2024-07-23 01:50:39.174162] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x18a8f10) 00:29:26.305 [2024-07-23 01:50:39.174196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:3888 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.305 [2024-07-23 01:50:39.174215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.305 [2024-07-23 01:50:39.190687] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a8f10) 00:29:26.305 [2024-07-23 01:50:39.190716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:13479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.305 [2024-07-23 01:50:39.190732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.305 [2024-07-23 01:50:39.206592] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a8f10) 00:29:26.305 [2024-07-23 01:50:39.206634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:18786 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.305 [2024-07-23 01:50:39.206674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.305 [2024-07-23 01:50:39.222564] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a8f10) 00:29:26.305 [2024-07-23 01:50:39.222598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:22445 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.305 [2024-07-23 01:50:39.222624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.305 [2024-07-23 01:50:39.239158] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a8f10)
00:29:26.305 [2024-07-23 01:50:39.239197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:5555 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.305 [2024-07-23 01:50:39.239217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:26.305 [2024-07-23 01:50:39.254387] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a8f10)
00:29:26.305 [2024-07-23 01:50:39.254421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:1591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.305 [2024-07-23 01:50:39.254440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... same "data digest error" / READ command / TRANSIENT TRANSPORT ERROR triplet repeated for roughly 60 further I/Os on tqpair=(0x18a8f10), 01:50:39.270 through 01:50:40.460, differing only in cid and lba ...]
00:29:27.607 [2024-07-23 01:50:40.470768] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a8f10)
00:29:27.607 [2024-07-23 01:50:40.470797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:9540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:27.607 [2024-07-23 01:50:40.470813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:27.607 [2024-07-23 01:50:40.487517] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a8f10)
00:29:27.607 [2024-07-23 01:50:40.487555] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:22742 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.607 [2024-07-23 01:50:40.487575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.607 [2024-07-23 01:50:40.503219] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a8f10) 00:29:27.607 [2024-07-23 01:50:40.503255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:12283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.607 [2024-07-23 01:50:40.503275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.607 [2024-07-23 01:50:40.520382] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a8f10) 00:29:27.607 [2024-07-23 01:50:40.520418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:19035 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.607 [2024-07-23 01:50:40.520438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.607 [2024-07-23 01:50:40.536485] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a8f10) 00:29:27.607 [2024-07-23 01:50:40.536521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.607 [2024-07-23 01:50:40.536541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.607 [2024-07-23 01:50:40.553857] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x18a8f10) 00:29:27.607 [2024-07-23 01:50:40.553887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:5864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.607 [2024-07-23 01:50:40.553903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.607 [2024-07-23 01:50:40.570680] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a8f10) 00:29:27.607 [2024-07-23 01:50:40.570709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21687 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.607 [2024-07-23 01:50:40.570726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.607 [2024-07-23 01:50:40.586954] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a8f10) 00:29:27.607 [2024-07-23 01:50:40.586989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:13119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.607 [2024-07-23 01:50:40.587007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.607 [2024-07-23 01:50:40.603779] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a8f10) 00:29:27.607 [2024-07-23 01:50:40.603810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:24042 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.607 [2024-07-23 01:50:40.603834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.607 [2024-07-23 01:50:40.621239] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a8f10) 00:29:27.607 [2024-07-23 01:50:40.621275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:9624 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.607 [2024-07-23 01:50:40.621295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.607 [2024-07-23 01:50:40.632253] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a8f10) 00:29:27.607 [2024-07-23 01:50:40.632288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22841 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.607 [2024-07-23 01:50:40.632307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.607 [2024-07-23 01:50:40.648475] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a8f10) 00:29:27.607 [2024-07-23 01:50:40.648512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:21072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.607 [2024-07-23 01:50:40.648532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.607 [2024-07-23 01:50:40.665377] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a8f10) 00:29:27.607 [2024-07-23 01:50:40.665413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:15143 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.607 [2024-07-23 01:50:40.665432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:29:27.607 [2024-07-23 01:50:40.681969] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a8f10) 00:29:27.607 [2024-07-23 01:50:40.682005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:24479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.607 [2024-07-23 01:50:40.682025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.607 [2024-07-23 01:50:40.698846] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a8f10) 00:29:27.607 [2024-07-23 01:50:40.698878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:6281 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.607 [2024-07-23 01:50:40.698912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.865 [2024-07-23 01:50:40.715545] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a8f10) 00:29:27.865 [2024-07-23 01:50:40.715582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:24643 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.865 [2024-07-23 01:50:40.715601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.865 [2024-07-23 01:50:40.732033] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a8f10) 00:29:27.865 [2024-07-23 01:50:40.732070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:6832 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.865 [2024-07-23 01:50:40.732091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:27.865 [2024-07-23 01:50:40.748832] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18a8f10)
00:29:27.865 [2024-07-23 01:50:40.748868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22832 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:27.865 [2024-07-23 01:50:40.748901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:27.865
00:29:27.865 Latency(us)
00:29:27.865 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:27.865 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:29:27.865 nvme0n1 : 2.01 15806.13 61.74 0.00 0.00 8089.36 2657.85 22330.79
00:29:27.865 ===================================================================================================================
00:29:27.865 Total : 15806.13 61.74 0.00 0.00 8089.36 2657.85 22330.79
00:29:27.865 0
00:29:27.865 01:50:40 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:27.865 01:50:40 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:27.865 01:50:40 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:27.865 01:50:40 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:27.865 | .driver_specific
00:29:27.865 | .nvme_error
00:29:27.866 | .status_code
00:29:28.124 | .command_transient_transport_error'
00:29:28.124 01:50:41 -- host/digest.sh@71 -- # (( 124 > 0 ))
00:29:28.124 01:50:41 -- host/digest.sh@73 -- # killprocess 3898929
00:29:28.124 01:50:41 -- common/autotest_common.sh@926 -- # '[' -z 3898929 ']'
00:29:28.124 01:50:41 -- common/autotest_common.sh@930 -- # kill -0 3898929
00:29:28.124 01:50:41 -- common/autotest_common.sh@931 -- # uname
01:50:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:29:28.124 01:50:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3898929
00:29:28.124 01:50:41 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:29:28.124 01:50:41 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:29:28.124 01:50:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3898929'
00:29:28.124 killing process with pid 3898929
00:29:28.124 01:50:41 -- common/autotest_common.sh@945 -- # kill 3898929
00:29:28.124 Received shutdown signal, test time was about 2.000000 seconds
00:29:28.124
00:29:28.124 Latency(us)
00:29:28.124 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:28.124 ===================================================================================================================
00:29:28.124 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:28.124 01:50:41 -- common/autotest_common.sh@950 -- # wait 3898929
00:29:28.381 01:50:41 -- host/digest.sh@108 -- # run_bperf_err randread 131072 16
00:29:28.381 01:50:41 -- host/digest.sh@54 -- # local rw bs qd
00:29:28.381 01:50:41 -- host/digest.sh@56 -- # rw=randread
00:29:28.381 01:50:41 -- host/digest.sh@56 -- # bs=131072
00:29:28.381 01:50:41 -- host/digest.sh@56 -- # qd=16
00:29:28.381 01:50:41 -- host/digest.sh@58 -- # bperfpid=3899480
00:29:28.381 01:50:41 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:29:28.381 01:50:41 -- host/digest.sh@60 -- # waitforlisten 3899480 /var/tmp/bperf.sock
00:29:28.381 01:50:41 -- common/autotest_common.sh@819 -- # '[' -z 3899480 ']'
00:29:28.381 01:50:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:28.381 01:50:41 -- common/autotest_common.sh@824 -- # local max_retries=100
00:29:28.381 01:50:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for
process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:29:28.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:29:28.381 01:50:41 -- common/autotest_common.sh@828 -- # xtrace_disable
00:29:28.381 01:50:41 -- common/autotest_common.sh@10 -- # set +x
00:29:28.381 [2024-07-23 01:50:41.328164] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
00:29:28.381 [2024-07-23 01:50:41.328261] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3899480 ]
00:29:28.381 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:28.381 Zero copy mechanism will not be used.
00:29:28.381 EAL: No free 2048 kB hugepages reported on node 1
00:29:28.381 [2024-07-23 01:50:41.386988] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:28.381 [2024-07-23 01:50:41.469920] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:29:29.316 01:50:42 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:29:29.316 01:50:42 -- common/autotest_common.sh@852 -- # return 0
00:29:29.316 01:50:42 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:29.316 01:50:42 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:29.596 01:50:42 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:29:29.596 01:50:42 -- common/autotest_common.sh@551 -- # xtrace_disable
00:29:29.596 01:50:42 -- common/autotest_common.sh@10 -- # set +x
00:29:29.596 01:50:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:29:29.596 01:50:42 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a
10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:29.596 01:50:42 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:30.162 nvme0n1
00:29:30.162 01:50:43 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:29:30.162 01:50:43 -- common/autotest_common.sh@551 -- # xtrace_disable
00:29:30.162 01:50:43 -- common/autotest_common.sh@10 -- # set +x
00:29:30.162 01:50:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:29:30.162 01:50:43 -- host/digest.sh@69 -- # bperf_py perform_tests
00:29:30.162 01:50:43 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:29:30.162 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:30.162 Zero copy mechanism will not be used.
00:29:30.162 Running I/O for 2 seconds...
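Editor's note: the xtrace above shows how the digest test counts transient transport errors: it calls `bdev_get_iostat -b nvme0n1` over the `/var/tmp/bperf.sock` RPC socket and extracts `.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error` with jq, then asserts the count is positive. A minimal Python sketch of that extraction follows; the sample JSON is hypothetical, reproducing only the fields the jq filter touches, with the count 124 taken from the trace.

```python
import json

# Hypothetical excerpt of `bdev_get_iostat -b nvme0n1` output; only the
# fields named by the jq filter in the trace are reproduced here.
iostat_json = json.dumps({
    "bdevs": [{
        "name": "nvme0n1",
        "driver_specific": {
            "nvme_error": {
                "status_code": {
                    "command_transient_transport_error": 124
                }
            }
        }
    }]
})

def get_transient_errcount(raw: str) -> int:
    """Mirrors: jq -r '.bdevs[0] | .driver_specific | .nvme_error
    | .status_code | .command_transient_transport_error'"""
    stat = json.loads(raw)
    return (stat["bdevs"][0]["driver_specific"]
                ["nvme_error"]["status_code"]
                ["command_transient_transport_error"])

count = get_transient_errcount(iostat_json)
assert count > 0  # the test's check, `(( 124 > 0 ))` in the trace
print(count)      # -> 124
```

The positive count confirms the injected crc32c corruption surfaced as data digest errors that the host retried as transient transport errors.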
00:29:30.162 [2024-07-23 01:50:43.237228] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef3de0) 00:29:30.162 [2024-07-23 01:50:43.237299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.162 [2024-07-23 01:50:43.237319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:30.162 [2024-07-23 01:50:43.247839] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef3de0) 00:29:30.162 [2024-07-23 01:50:43.247894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.162 [2024-07-23 01:50:43.247913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:30.162 [2024-07-23 01:50:43.258948] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef3de0) 00:29:30.162 [2024-07-23 01:50:43.258996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.162 [2024-07-23 01:50:43.259014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:30.420 [2024-07-23 01:50:43.269158] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef3de0) 00:29:30.420 [2024-07-23 01:50:43.269189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.420 [2024-07-23 01:50:43.269207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.420 [2024-07-23 01:50:43.279666] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef3de0) 00:29:30.420 [2024-07-23 01:50:43.279698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.420 [2024-07-23 01:50:43.279716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:30.420 [2024-07-23 01:50:43.290056] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef3de0) 00:29:30.420 [2024-07-23 01:50:43.290089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.420 [2024-07-23 01:50:43.290106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:30.420 [2024-07-23 01:50:43.300349] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef3de0) 00:29:30.420 [2024-07-23 01:50:43.300381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.420 [2024-07-23 01:50:43.300399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:30.420 [2024-07-23 01:50:43.310575] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef3de0) 00:29:30.420 [2024-07-23 01:50:43.310629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.420 [2024-07-23 01:50:43.310656] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.420 [2024-07-23 01:50:43.320970] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef3de0) 00:29:30.420 [2024-07-23 01:50:43.321001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.420 [2024-07-23 01:50:43.321019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:30.420 [2024-07-23 01:50:43.331699] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef3de0) 00:29:30.420 [2024-07-23 01:50:43.331731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.420 [2024-07-23 01:50:43.331749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:30.420 [2024-07-23 01:50:43.342791] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef3de0) 00:29:30.420 [2024-07-23 01:50:43.342823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.420 [2024-07-23 01:50:43.342841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:30.420 [2024-07-23 01:50:43.354944] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef3de0) 00:29:30.420 [2024-07-23 01:50:43.354977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:29:30.420 [2024-07-23 01:50:43.354995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.420 [2024-07-23 01:50:43.366180] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef3de0) 00:29:30.420 [2024-07-23 01:50:43.366232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.420 [2024-07-23 01:50:43.366251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:30.420 [2024-07-23 01:50:43.377331] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef3de0) 00:29:30.420 [2024-07-23 01:50:43.377363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.421 [2024-07-23 01:50:43.377380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:30.421 [2024-07-23 01:50:43.388225] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef3de0) 00:29:30.421 [2024-07-23 01:50:43.388257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.421 [2024-07-23 01:50:43.388274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:30.421 [2024-07-23 01:50:43.398447] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef3de0) 00:29:30.421 [2024-07-23 01:50:43.398478] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.421 [2024-07-23 01:50:43.398496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.421 [2024-07-23 01:50:43.409326] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef3de0) 00:29:30.421 [2024-07-23 01:50:43.409357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.421 [2024-07-23 01:50:43.409374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:30.421 [2024-07-23 01:50:43.420521] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef3de0) 00:29:30.421 [2024-07-23 01:50:43.420553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.421 [2024-07-23 01:50:43.420571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:30.421 [2024-07-23 01:50:43.431088] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef3de0) 00:29:30.421 [2024-07-23 01:50:43.431119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.421 [2024-07-23 01:50:43.431137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:30.421 [2024-07-23 01:50:43.441738] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef3de0) 00:29:30.421 [2024-07-23 
01:50:43.441770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.421 [2024-07-23 01:50:43.441788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:30.421 [2024-07-23 01:50:43.451872] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef3de0)
00:29:30.421 [2024-07-23 01:50:43.451903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.421 [2024-07-23 01:50:43.451937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:30.421 [2024-07-23 01:50:43.462540] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef3de0)
00:29:30.421 [2024-07-23 01:50:43.462572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.421 [2024-07-23 01:50:43.462589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... same three-line cycle repeats roughly every 10 ms from 01:50:43.472 through 01:50:44.287: an nvme_tcp.c:1391 "data digest error on tqpair=(0x1ef3de0)" *ERROR*, a READ command print (sqid:1 cid:15 nsid:1, len:32, varying lba), and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion (qid:1 cid:15, sqhd cycling 0001/0021/0041/0061); elapsed-time prefix advances 00:29:30.421 -> 00:29:30.681 -> 00:29:30.942 -> 00:29:31.202; log is truncated mid-entry at 01:50:44.287689 (READ lba:9760 len:32) ...]
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.203 [2024-07-23 01:50:44.287706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:31.203 [2024-07-23 01:50:44.299131] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef3de0) 00:29:31.203 [2024-07-23 01:50:44.299167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.203 [2024-07-23 01:50:44.299187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:31.463 [2024-07-23 01:50:44.310349] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef3de0) 00:29:31.463 [2024-07-23 01:50:44.310401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.463 [2024-07-23 01:50:44.310421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:31.463 [2024-07-23 01:50:44.321873] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef3de0) 00:29:31.463 [2024-07-23 01:50:44.321922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.463 [2024-07-23 01:50:44.321939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.463 [2024-07-23 01:50:44.332803] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef3de0) 00:29:31.463 [2024-07-23 01:50:44.332834] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.463 [2024-07-23 01:50:44.332851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:31.463 [2024-07-23 01:50:44.345297] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef3de0) 00:29:31.463 [2024-07-23 01:50:44.345333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.463 [2024-07-23 01:50:44.345352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:31.463 [2024-07-23 01:50:44.356560] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef3de0) 00:29:31.463 [2024-07-23 01:50:44.356595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.463 [2024-07-23 01:50:44.356625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:31.463 [2024-07-23 01:50:44.367891] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef3de0) 00:29:31.463 [2024-07-23 01:50:44.367923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.463 [2024-07-23 01:50:44.367957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.463 [2024-07-23 01:50:44.379173] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1ef3de0) 00:29:31.463 [2024-07-23 01:50:44.379207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.463 [2024-07-23 01:50:44.379227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:31.463 [2024-07-23 01:50:44.390272] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef3de0) 00:29:31.463 [2024-07-23 01:50:44.390306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.463 [2024-07-23 01:50:44.390326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:31.463 [2024-07-23 01:50:44.401575] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef3de0) 00:29:31.463 [2024-07-23 01:50:44.401608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.463 [2024-07-23 01:50:44.401641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:31.463 [2024-07-23 01:50:44.412886] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef3de0) 00:29:31.463 [2024-07-23 01:50:44.412931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.463 [2024-07-23 01:50:44.412948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.463 [2024-07-23 01:50:44.424235] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef3de0) 00:29:31.463 [2024-07-23 01:50:44.424269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.463 [2024-07-23 01:50:44.424289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:31.463 [2024-07-23 01:50:44.435601] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef3de0) 00:29:31.463 [2024-07-23 01:50:44.435644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.463 [2024-07-23 01:50:44.435665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:31.463 [2024-07-23 01:50:44.446838] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef3de0) 00:29:31.463 [2024-07-23 01:50:44.446883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.463 [2024-07-23 01:50:44.446900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:31.463 [2024-07-23 01:50:44.458702] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef3de0) 00:29:31.463 [2024-07-23 01:50:44.458748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.463 [2024-07-23 01:50:44.458765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:29:31.463 [2024-07-23 01:50:44.469809] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef3de0) 00:29:31.463 [2024-07-23 01:50:44.469848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.464 [2024-07-23 01:50:44.469865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:31.464 [2024-07-23 01:50:44.481107] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef3de0) 00:29:31.464 [2024-07-23 01:50:44.481142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.464 [2024-07-23 01:50:44.481161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:31.464 [2024-07-23 01:50:44.492256] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef3de0) 00:29:31.464 [2024-07-23 01:50:44.492291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.464 [2024-07-23 01:50:44.492310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:31.464 [2024-07-23 01:50:44.503757] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef3de0) 00:29:31.464 [2024-07-23 01:50:44.503787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.464 [2024-07-23 01:50:44.503809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.464 [2024-07-23 01:50:44.515178] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef3de0) 00:29:31.464 [2024-07-23 01:50:44.515213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.464 [2024-07-23 01:50:44.515232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:31.464 [2024-07-23 01:50:44.526423] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef3de0) 00:29:31.464 [2024-07-23 01:50:44.526457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.464 [2024-07-23 01:50:44.526477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:31.464 [2024-07-23 01:50:44.537662] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef3de0) 00:29:31.464 [2024-07-23 01:50:44.537692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.464 [2024-07-23 01:50:44.537709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:31.464 [2024-07-23 01:50:44.548762] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef3de0) 00:29:31.464 [2024-07-23 01:50:44.548807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.464 [2024-07-23 
01:50:44.548824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.464 [2024-07-23 01:50:44.560094] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef3de0) 00:29:31.464 [2024-07-23 01:50:44.560129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.464 [2024-07-23 01:50:44.560149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:31.724 [2024-07-23 01:50:44.571340] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef3de0) 00:29:31.724 [2024-07-23 01:50:44.571376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.724 [2024-07-23 01:50:44.571395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:31.724 [2024-07-23 01:50:44.582808] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef3de0) 00:29:31.724 [2024-07-23 01:50:44.582838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.724 [2024-07-23 01:50:44.582856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:31.724 [2024-07-23 01:50:44.594085] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef3de0) 00:29:31.724 [2024-07-23 01:50:44.594119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6592 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.724 [2024-07-23 01:50:44.594139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.724 [2024-07-23 01:50:44.605164] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef3de0) 00:29:31.724 [2024-07-23 01:50:44.605203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.724 [2024-07-23 01:50:44.605224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:31.724 [2024-07-23 01:50:44.616237] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef3de0) 00:29:31.724 [2024-07-23 01:50:44.616271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.724 [2024-07-23 01:50:44.616291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:31.724 [2024-07-23 01:50:44.627545] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef3de0) 00:29:31.724 [2024-07-23 01:50:44.627579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.724 [2024-07-23 01:50:44.627598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:31.724 [2024-07-23 01:50:44.639425] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef3de0) 00:29:31.724 [2024-07-23 01:50:44.639459] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.724 [2024-07-23 01:50:44.639479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.724 [2024-07-23 01:50:44.650675] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef3de0) 00:29:31.724 [2024-07-23 01:50:44.650721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.724 [2024-07-23 01:50:44.650738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:31.724 [2024-07-23 01:50:44.661875] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef3de0) 00:29:31.724 [2024-07-23 01:50:44.661906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.724 [2024-07-23 01:50:44.661923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:31.724 [2024-07-23 01:50:44.673101] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef3de0) 00:29:31.724 [2024-07-23 01:50:44.673135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.724 [2024-07-23 01:50:44.673155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:31.724 [2024-07-23 01:50:44.684235] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1ef3de0) 00:29:31.724 [2024-07-23 01:50:44.684269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.724 [2024-07-23 01:50:44.684289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.724 [2024-07-23 01:50:44.696213] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef3de0) 00:29:31.724 [2024-07-23 01:50:44.696249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.724 [2024-07-23 01:50:44.696279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:31.724 [2024-07-23 01:50:44.707480] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef3de0) 00:29:31.724 [2024-07-23 01:50:44.707515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.724 [2024-07-23 01:50:44.707535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:31.724 [2024-07-23 01:50:44.718820] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef3de0) 00:29:31.724 [2024-07-23 01:50:44.718850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.724 [2024-07-23 01:50:44.718867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:31.724 [2024-07-23 01:50:44.730077] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef3de0) 00:29:31.724 [2024-07-23 01:50:44.730112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.724 [2024-07-23 01:50:44.730132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.724 [2024-07-23 01:50:44.741575] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef3de0) 00:29:31.724 [2024-07-23 01:50:44.741609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.724 [2024-07-23 01:50:44.741647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:31.724 [2024-07-23 01:50:44.752725] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef3de0) 00:29:31.724 [2024-07-23 01:50:44.752755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.724 [2024-07-23 01:50:44.752773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:31.724 [2024-07-23 01:50:44.764106] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef3de0) 00:29:31.724 [2024-07-23 01:50:44.764139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.724 [2024-07-23 01:50:44.764159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0061 p:0 m:0 dnr:0 00:29:31.724 [2024-07-23 01:50:44.775565] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef3de0) 00:29:31.724 [2024-07-23 01:50:44.775599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.724 [2024-07-23 01:50:44.775627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.724 [2024-07-23 01:50:44.786835] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef3de0) 00:29:31.724 [2024-07-23 01:50:44.786866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.724 [2024-07-23 01:50:44.786884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:31.724 [2024-07-23 01:50:44.798190] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef3de0) 00:29:31.724 [2024-07-23 01:50:44.798229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.724 [2024-07-23 01:50:44.798250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:31.725 [2024-07-23 01:50:44.810003] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef3de0) 00:29:31.725 [2024-07-23 01:50:44.810038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.725 [2024-07-23 01:50:44.810058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:31.725 [2024-07-23 01:50:44.821155] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef3de0) 00:29:31.725 [2024-07-23 01:50:44.821189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.725 [2024-07-23 01:50:44.821209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.983 [2024-07-23 01:50:44.832609] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef3de0) 00:29:31.983 [2024-07-23 01:50:44.832666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.983 [2024-07-23 01:50:44.832684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:31.983 [2024-07-23 01:50:44.843895] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef3de0) 00:29:31.983 [2024-07-23 01:50:44.843943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.983 [2024-07-23 01:50:44.843963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:31.983 [2024-07-23 01:50:44.855798] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef3de0) 00:29:31.983 [2024-07-23 01:50:44.855829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.983 [2024-07-23 
01:50:44.855846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:31.983 [2024-07-23 01:50:44.867018] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef3de0) 00:29:31.983 [2024-07-23 01:50:44.867053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.983 [2024-07-23 01:50:44.867073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.983 [2024-07-23 01:50:44.877887] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef3de0) 00:29:31.983 [2024-07-23 01:50:44.877919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.983 [2024-07-23 01:50:44.877951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:31.983 [2024-07-23 01:50:44.888926] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef3de0) 00:29:31.983 [2024-07-23 01:50:44.888956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.983 [2024-07-23 01:50:44.888973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:31.983 [2024-07-23 01:50:44.900104] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef3de0) 00:29:31.984 [2024-07-23 01:50:44.900140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16512 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.984 [2024-07-23 01:50:44.900159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:31.984 [2024-07-23 01:50:44.911490] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef3de0) 00:29:31.984 [2024-07-23 01:50:44.911525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.984 [2024-07-23 01:50:44.911545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.984 [2024-07-23 01:50:44.922729] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef3de0) 00:29:31.984 [2024-07-23 01:50:44.922760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.984 [2024-07-23 01:50:44.922777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:31.984 [2024-07-23 01:50:44.933908] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef3de0) 00:29:31.984 [2024-07-23 01:50:44.933938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.984 [2024-07-23 01:50:44.933970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:31.984 [2024-07-23 01:50:44.945862] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef3de0) 00:29:31.984 [2024-07-23 01:50:44.945892] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.984 [2024-07-23 01:50:44.945924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:31.984 [2024-07-23 01:50:44.957149] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef3de0) 00:29:31.984 [2024-07-23 01:50:44.957183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.984 [2024-07-23 01:50:44.957203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.984 [2024-07-23 01:50:44.968283] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef3de0) 00:29:31.984 [2024-07-23 01:50:44.968317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.984 [2024-07-23 01:50:44.968336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:31.984 [2024-07-23 01:50:44.979524] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef3de0) 00:29:31.984 [2024-07-23 01:50:44.979557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.984 [2024-07-23 01:50:44.979576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:31.984 [2024-07-23 01:50:44.990744] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1ef3de0) 00:29:31.984 [2024-07-23 01:50:44.990774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.984 [2024-07-23 01:50:44.990798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:31.984 [2024-07-23 01:50:45.001907] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef3de0) 00:29:31.984 [2024-07-23 01:50:45.001936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.984 [2024-07-23 01:50:45.001969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.984 [2024-07-23 01:50:45.013887] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef3de0) 00:29:31.984 [2024-07-23 01:50:45.013918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.984 [2024-07-23 01:50:45.013951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:31.984 [2024-07-23 01:50:45.025154] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef3de0) 00:29:31.984 [2024-07-23 01:50:45.025188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.984 [2024-07-23 01:50:45.025208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:31.984 [2024-07-23 01:50:45.036811] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef3de0) 00:29:31.984 [2024-07-23 01:50:45.036842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.984 [2024-07-23 01:50:45.036859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:31.984 [2024-07-23 01:50:45.048288] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef3de0) 00:29:31.984 [2024-07-23 01:50:45.048321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.984 [2024-07-23 01:50:45.048341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.984 [2024-07-23 01:50:45.059628] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef3de0) 00:29:31.984 [2024-07-23 01:50:45.059676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.984 [2024-07-23 01:50:45.059693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:31.984 [2024-07-23 01:50:45.070922] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef3de0) 00:29:31.984 [2024-07-23 01:50:45.070969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.984 [2024-07-23 01:50:45.070989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0041 p:0 m:0 dnr:0 00:29:31.984 [2024-07-23 01:50:45.082277] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef3de0) 00:29:32.242 [2024-07-23 01:50:45.082313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.242 [2024-07-23 01:50:45.082334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:32.242 [2024-07-23 01:50:45.094384] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef3de0) 00:29:32.242 [2024-07-23 01:50:45.094419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.242 [2024-07-23 01:50:45.094439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:32.242 [2024-07-23 01:50:45.105623] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef3de0) 00:29:32.242 [2024-07-23 01:50:45.105671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.242 [2024-07-23 01:50:45.105688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:32.242 [2024-07-23 01:50:45.116744] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef3de0) 00:29:32.242 [2024-07-23 01:50:45.116773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.242 [2024-07-23 01:50:45.116790] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:32.242 [2024-07-23 01:50:45.127856] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef3de0) 00:29:32.242 [2024-07-23 01:50:45.127886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.242 [2024-07-23 01:50:45.127903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:32.242 [2024-07-23 01:50:45.139736] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef3de0) 00:29:32.242 [2024-07-23 01:50:45.139766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.242 [2024-07-23 01:50:45.139783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:32.242 [2024-07-23 01:50:45.151155] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef3de0) 00:29:32.243 [2024-07-23 01:50:45.151189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.243 [2024-07-23 01:50:45.151208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:32.243 [2024-07-23 01:50:45.162346] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef3de0) 00:29:32.243 [2024-07-23 01:50:45.162380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.243 
[2024-07-23 01:50:45.162399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:32.243 [2024-07-23 01:50:45.173625] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef3de0) 00:29:32.243 [2024-07-23 01:50:45.173674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.243 [2024-07-23 01:50:45.173691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:32.243 [2024-07-23 01:50:45.184763] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef3de0) 00:29:32.243 [2024-07-23 01:50:45.184807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.243 [2024-07-23 01:50:45.184829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:32.243 [2024-07-23 01:50:45.196393] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef3de0) 00:29:32.243 [2024-07-23 01:50:45.196427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.243 [2024-07-23 01:50:45.196447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:32.243 [2024-07-23 01:50:45.207609] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef3de0) 00:29:32.243 [2024-07-23 01:50:45.207650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:32.243 [2024-07-23 01:50:45.207670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:32.243 [2024-07-23 01:50:45.218762] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ef3de0)
00:29:32.243 [2024-07-23 01:50:45.218791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:32.243 [2024-07-23 01:50:45.218808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:32.243
00:29:32.243 Latency(us)
00:29:32.243 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:32.243 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:29:32.243 nvme0n1 : 2.00 2780.54 347.57 0.00 0.00 5750.28 4975.88 14078.10
00:29:32.243 ===================================================================================================================
00:29:32.243 Total : 2780.54 347.57 0.00 0.00 5750.28 4975.88 14078.10
00:29:32.243 0
00:29:32.243 01:50:45 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:32.243 01:50:45 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:32.243 01:50:45 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:32.243 01:50:45 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:32.243 | .driver_specific
00:29:32.243 | .nvme_error
00:29:32.243 | .status_code
00:29:32.243 | .command_transient_transport_error'
00:29:32.501 01:50:45 -- host/digest.sh@71 -- # (( 179 > 0 ))
00:29:32.501 01:50:45 -- host/digest.sh@73 -- # killprocess 3899480
00:29:32.501 01:50:45 -- common/autotest_common.sh@926 -- # '[' -z 3899480 ']'
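The `get_transient_errcount` step above pipes `bdev_get_iostat` output through the jq filter `.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error` and then asserts the count is nonzero. As a rough sketch of the same extraction in Python (the JSON payload below is fabricated for illustration; only the key path and the value 179 come from this run):

```python
import json

# Hypothetical bdev_get_iostat-style payload; shape mirrors the jq path
# used by host/digest.sh, values are illustrative only.
iostat = json.loads("""
{
  "bdevs": [
    {
      "name": "nvme0n1",
      "driver_specific": {
        "nvme_error": {
          "status_code": {
            "command_transient_transport_error": 179
          }
        }
      }
    }
  ]
}
""")

def get_transient_errcount(stats: dict) -> int:
    """Walk the same key path as the test script's jq filter."""
    return (stats["bdevs"][0]
            ["driver_specific"]
            ["nvme_error"]
            ["status_code"]
            ["command_transient_transport_error"])

count = get_transient_errcount(iostat)
assert count > 0  # the digest test passes when transient errors were recorded
```

The nonzero counter is the success criterion here: every deliberately corrupted digest should surface as a COMMAND TRANSIENT TRANSPORT ERROR completion, so a zero count would mean the error injection never reached the wire.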
00:29:32.501 01:50:45 -- common/autotest_common.sh@930 -- # kill -0 3899480
00:29:32.501 01:50:45 -- common/autotest_common.sh@931 -- # uname
00:29:32.501 01:50:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:29:32.501 01:50:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3899480
00:29:32.501 01:50:45 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:29:32.501 01:50:45 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:29:32.501 01:50:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3899480'
00:29:32.501 killing process with pid 3899480
00:29:32.501 01:50:45 -- common/autotest_common.sh@945 -- # kill 3899480
00:29:32.501 Received shutdown signal, test time was about 2.000000 seconds
00:29:32.501
00:29:32.501 Latency(us)
00:29:32.501 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:32.501 ===================================================================================================================
00:29:32.501 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:32.501 01:50:45 -- common/autotest_common.sh@950 -- # wait 3899480
00:29:32.760 01:50:45 -- host/digest.sh@113 -- # run_bperf_err randwrite 4096 128
00:29:32.760 01:50:45 -- host/digest.sh@54 -- # local rw bs qd
00:29:32.760 01:50:45 -- host/digest.sh@56 -- # rw=randwrite
00:29:32.760 01:50:45 -- host/digest.sh@56 -- # bs=4096
00:29:32.760 01:50:45 -- host/digest.sh@56 -- # qd=128
00:29:32.760 01:50:45 -- host/digest.sh@58 -- # bperfpid=3900031
00:29:32.760 01:50:45 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:29:32.761 01:50:45 -- host/digest.sh@60 -- # waitforlisten 3900031 /var/tmp/bperf.sock
00:29:32.761 01:50:45 -- common/autotest_common.sh@819 -- # '[' -z 3900031 ']'
00:29:32.761 01:50:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock
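The "data digest error" entries throughout this run come from the NVMe/TCP data digest, a CRC-32C (Castagnoli polynomial) computed over each PDU's payload, which this test corrupts on purpose via `accel_error_inject_error -o crc32c`. A minimal, unoptimized bit-at-a-time sketch of that checksum (not SPDK's accelerated implementation) is:

```python
def crc32c(data: bytes, crc: int = 0) -> int:
    """Bit-at-a-time CRC-32C: reflected form, polynomial 0x82F63B78."""
    crc ^= 0xFFFFFFFF  # initial value
    for byte in data:
        crc ^= byte
        for _ in range(8):
            # Shift right; fold in the reflected polynomial when the low bit is set.
            crc = (crc >> 1) ^ (0x82F63B78 * (crc & 1))
    return crc ^ 0xFFFFFFFF  # final XOR

# Standard CRC-32C check value for the ASCII string "123456789".
assert crc32c(b"123456789") == 0xE3069283

# A single flipped payload bit changes the digest; mismatch between the
# received digest and the recomputed one is what the log reports as a
# data digest error.
good = crc32c(b"some pdu payload")
bad = crc32c(b"some pdu paylobd")
assert good != bad
```

When the injected error corrupts the digest on one side, the receiver's recomputed CRC-32C no longer matches, and the I/O completes with the TRANSIENT TRANSPORT ERROR status counted by the test.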
00:29:32.761 01:50:45 -- common/autotest_common.sh@824 -- # local max_retries=100
00:29:32.761 01:50:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:29:32.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:29:32.761 01:50:45 -- common/autotest_common.sh@828 -- # xtrace_disable
00:29:32.761 01:50:45 -- common/autotest_common.sh@10 -- # set +x
00:29:32.761 [2024-07-23 01:50:45.762565] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
00:29:32.761 [2024-07-23 01:50:45.762671] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3900031 ]
00:29:32.761 EAL: No free 2048 kB hugepages reported on node 1
00:29:33.019 [2024-07-23 01:50:45.835545] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:33.019 [2024-07-23 01:50:45.933219] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:29:33.019 01:50:46 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:29:33.019 01:50:46 -- common/autotest_common.sh@852 -- # return 0
00:29:33.019 01:50:46 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:33.019 01:50:46 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:33.277 01:50:46 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:29:33.277 01:50:46 -- common/autotest_common.sh@551 -- # xtrace_disable
00:29:33.277 01:50:46 -- common/autotest_common.sh@10 -- # set +x
00:29:33.277 01:50:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:29:33.277 01:50:46 -- host/digest.sh@64 -- # bperf_rpc
bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:33.277 01:50:46 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:33.535 nvme0n1 00:29:33.793 01:50:46 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:29:33.793 01:50:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:33.793 01:50:46 -- common/autotest_common.sh@10 -- # set +x 00:29:33.793 01:50:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:33.793 01:50:46 -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:33.793 01:50:46 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:33.793 Running I/O for 2 seconds... 00:29:33.793 [2024-07-23 01:50:46.764399] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5260) with pdu=0x2000190ee5c8 00:29:33.793 [2024-07-23 01:50:46.765778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:22227 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.793 [2024-07-23 01:50:46.765819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:29:33.793 [2024-07-23 01:50:46.777351] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5260) with pdu=0x2000190e6738 00:29:33.793 [2024-07-23 01:50:46.778394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:4689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.793 [2024-07-23 01:50:46.778434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:005a p:0 m:0 
dnr:0 00:29:33.793 [2024-07-23 01:50:46.789362] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5260) with pdu=0x2000190f46d0 00:29:33.793 [2024-07-23 01:50:46.790629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:3326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.793 [2024-07-23 01:50:46.790661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:29:33.793 [2024-07-23 01:50:46.801502] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5260) with pdu=0x2000190ecc78 00:29:33.793 [2024-07-23 01:50:46.802757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:1232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.793 [2024-07-23 01:50:46.802788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:29:33.793 [2024-07-23 01:50:46.814013] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5260) with pdu=0x2000190f6020 00:29:33.793 [2024-07-23 01:50:46.815086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.793 [2024-07-23 01:50:46.815120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:29:33.793 [2024-07-23 01:50:46.826540] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5260) with pdu=0x2000190e3498 00:29:33.793 [2024-07-23 01:50:46.828079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:1543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.793 [2024-07-23 01:50:46.828108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:102 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:33.793 [2024-07-23 01:50:46.838611] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5260) with pdu=0x2000190e7818 00:29:33.793 [2024-07-23 01:50:46.840173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:16785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.793 [2024-07-23 01:50:46.840201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:33.793 [2024-07-23 01:50:46.850667] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5260) with pdu=0x2000190f3e60 00:29:33.793 [2024-07-23 01:50:46.852219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:17429 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.793 [2024-07-23 01:50:46.852247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:33.793 [2024-07-23 01:50:46.862689] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5260) with pdu=0x2000190f46d0 00:29:33.793 [2024-07-23 01:50:46.864280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:15137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.793 [2024-07-23 01:50:46.864308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:33.793 [2024-07-23 01:50:46.874550] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5260) with pdu=0x2000190ee190 00:29:33.793 [2024-07-23 01:50:46.876230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:25120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.793 [2024-07-23 01:50:46.876265] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:33.793 [2024-07-23 01:50:46.886562] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5260) with pdu=0x2000190e88f8 00:29:33.793 [2024-07-23 01:50:46.888200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:20100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.794 [2024-07-23 01:50:46.888234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:34.052 [2024-07-23 01:50:46.898529] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5260) with pdu=0x2000190e6300 00:29:34.052 [2024-07-23 01:50:46.899861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:7683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.052 [2024-07-23 01:50:46.899892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:34.052 [2024-07-23 01:50:46.911237] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5260) with pdu=0x2000190e6300 00:29:34.052 [2024-07-23 01:50:46.912757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:24290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.052 [2024-07-23 01:50:46.912803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:34.052 [2024-07-23 01:50:46.923882] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5260) with pdu=0x2000190f7100 00:29:34.052 [2024-07-23 01:50:46.925079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.052 
[2024-07-23 01:50:46.925118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:34.052 [2024-07-23 01:50:46.936473] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5260) with pdu=0x2000190f7da8 00:29:34.052 [2024-07-23 01:50:46.937939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:6598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.052 [2024-07-23 01:50:46.937974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:34.052 [2024-07-23 01:50:46.948919] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5260) with pdu=0x2000190f8618 00:29:34.052 [2024-07-23 01:50:46.950381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:8057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.052 [2024-07-23 01:50:46.950415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:34.052 [2024-07-23 01:50:46.961320] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5260) with pdu=0x2000190f8a50 00:29:34.052 [2024-07-23 01:50:46.962790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:4167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.052 [2024-07-23 01:50:46.962820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:34.052 [2024-07-23 01:50:46.973626] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5260) with pdu=0x2000190f8a50 00:29:34.052 [2024-07-23 01:50:46.975171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:12166 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:29:34.052 [2024-07-23 01:50:46.975205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:34.052 [2024-07-23 01:50:46.986158] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5260) with pdu=0x2000190f8618 00:29:34.052 [2024-07-23 01:50:46.987686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:2875 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.052 [2024-07-23 01:50:46.987717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.052 [2024-07-23 01:50:46.998495] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5260) with pdu=0x2000190f4b08 00:29:34.052 [2024-07-23 01:50:47.000049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:10947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.052 [2024-07-23 01:50:47.000082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:34.052 [2024-07-23 01:50:47.009170] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5260) with pdu=0x2000190f35f0 00:29:34.052 [2024-07-23 01:50:47.010125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:19049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.052 [2024-07-23 01:50:47.010159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:34.052 [2024-07-23 01:50:47.021590] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5260) with pdu=0x2000190eea00 00:29:34.052 [2024-07-23 01:50:47.022528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:5 nsid:1 lba:10989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.052 [2024-07-23 01:50:47.022562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:34.052 [2024-07-23 01:50:47.034140] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5260) with pdu=0x2000190f35f0 00:29:34.052 [2024-07-23 01:50:47.035123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.052 [2024-07-23 01:50:47.035157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:34.052 [2024-07-23 01:50:47.046565] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5260) with pdu=0x2000190f8618 00:29:34.052 [2024-07-23 01:50:47.047557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:18846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.052 [2024-07-23 01:50:47.047590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:34.052 [2024-07-23 01:50:47.058893] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5260) with pdu=0x2000190f2510 00:29:34.052 [2024-07-23 01:50:47.059932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:19282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.052 [2024-07-23 01:50:47.059961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:34.052 [2024-07-23 01:50:47.071351] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5260) with pdu=0x2000190e3498 00:29:34.052 [2024-07-23 01:50:47.072359] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:3824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.052 [2024-07-23 01:50:47.072392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:34.052 [2024-07-23 01:50:47.083707] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5260) with pdu=0x2000190e0ea0 00:29:34.052 [2024-07-23 01:50:47.084700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.052 [2024-07-23 01:50:47.084744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:34.052 [2024-07-23 01:50:47.096130] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5260) with pdu=0x2000190f5378 00:29:34.052 [2024-07-23 01:50:47.097262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:15160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.052 [2024-07-23 01:50:47.097296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:34.052 [2024-07-23 01:50:47.108605] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5260) with pdu=0x2000190e0ea0 00:29:34.052 [2024-07-23 01:50:47.109677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:13819 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.052 [2024-07-23 01:50:47.109705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:34.052 [2024-07-23 01:50:47.120954] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5260) with pdu=0x2000190e3498 00:29:34.052 
[2024-07-23 01:50:47.121995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.052 [2024-07-23 01:50:47.122029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:29:34.052 [2024-07-23 01:50:47.133349] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5260) with pdu=0x2000190e1710 00:29:34.052 [2024-07-23 01:50:47.134509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.052 [2024-07-23 01:50:47.134542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:34.052 [2024-07-23 01:50:47.147342] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5260) with pdu=0x2000190f92c0 00:29:34.053 [2024-07-23 01:50:47.148723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:21040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.053 [2024-07-23 01:50:47.148758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.311 [2024-07-23 01:50:47.159811] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5260) with pdu=0x2000190e5ec8 00:29:34.311 [2024-07-23 01:50:47.161211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:21581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.311 [2024-07-23 01:50:47.161244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:34.311 [2024-07-23 01:50:47.170512] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x12c5260) with pdu=0x2000190f5378 00:29:34.311 [2024-07-23 01:50:47.171754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:23498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.311 [2024-07-23 01:50:47.171783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:34.311 [2024-07-23 01:50:47.182946] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5260) with pdu=0x2000190f20d8 00:29:34.311 [2024-07-23 01:50:47.184259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:21623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.311 [2024-07-23 01:50:47.184293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:34.311 [2024-07-23 01:50:47.195467] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5260) with pdu=0x2000190f20d8 00:29:34.311 [2024-07-23 01:50:47.196707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.311 [2024-07-23 01:50:47.196736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:29:34.311 [2024-07-23 01:50:47.209458] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5260) with pdu=0x2000190f20d8 00:29:34.311 [2024-07-23 01:50:47.210835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:6434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.311 [2024-07-23 01:50:47.210870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.311 [2024-07-23 01:50:47.221859] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5260) with pdu=0x2000190e8d30 00:29:34.311 [2024-07-23 01:50:47.223192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:25149 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.311 [2024-07-23 01:50:47.223226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:34.311 [2024-07-23 01:50:47.232553] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5260) with pdu=0x2000190f8e88 00:29:34.311 [2024-07-23 01:50:47.233718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:25326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.311 [2024-07-23 01:50:47.233746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:34.311 [2024-07-23 01:50:47.246470] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5260) with pdu=0x2000190f8e88 00:29:34.311 [2024-07-23 01:50:47.247836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:21466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.311 [2024-07-23 01:50:47.247867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.311 [2024-07-23 01:50:47.258847] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5260) with pdu=0x2000190eaab8 00:29:34.311 [2024-07-23 01:50:47.260170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.311 [2024-07-23 01:50:47.260203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:29:34.311 [2024-07-23 01:50:47.271259] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5260) with pdu=0x2000190eaab8 00:29:34.311 [2024-07-23 01:50:47.272582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.311 [2024-07-23 01:50:47.272625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:34.311 [2024-07-23 01:50:47.282005] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5260) with pdu=0x2000190e01f8 00:29:34.311 [2024-07-23 01:50:47.283189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:12177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.311 [2024-07-23 01:50:47.283223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:34.311 [2024-07-23 01:50:47.296091] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5260) with pdu=0x2000190e01f8 00:29:34.311 [2024-07-23 01:50:47.297405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:5881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.311 [2024-07-23 01:50:47.297438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.311 [2024-07-23 01:50:47.308625] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5260) with pdu=0x2000190e6fa8 00:29:34.311 [2024-07-23 01:50:47.309948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:15981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.311 [2024-07-23 01:50:47.309992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:89 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:34.311 [2024-07-23 01:50:47.321000] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5260) with pdu=0x2000190e6b70 00:29:34.311 [2024-07-23 01:50:47.322242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:11384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.311 [2024-07-23 01:50:47.322276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:34.311 [2024-07-23 01:50:47.333304] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5260) with pdu=0x2000190e6300 00:29:34.311 [2024-07-23 01:50:47.334545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:2001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.311 [2024-07-23 01:50:47.334579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:34.311 [2024-07-23 01:50:47.345546] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5260) with pdu=0x2000190f46d0 00:29:34.311 [2024-07-23 01:50:47.346799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:8266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.311 [2024-07-23 01:50:47.346845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:34.311 [2024-07-23 01:50:47.357847] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5260) with pdu=0x2000190ed4e8 00:29:34.311 [2024-07-23 01:50:47.359038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:25413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.311 [2024-07-23 01:50:47.359072] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:34.311 [2024-07-23 01:50:47.370150] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5260) with pdu=0x2000190f57b0 00:29:34.311 [2024-07-23 01:50:47.371288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:12926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.311 [2024-07-23 01:50:47.371323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:34.311 [2024-07-23 01:50:47.381755] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5260) with pdu=0x2000190ef270 00:29:34.311 [2024-07-23 01:50:47.383855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:6927 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.311 [2024-07-23 01:50:47.383888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:34.311 [2024-07-23 01:50:47.393381] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5260) with pdu=0x2000190efae0 00:29:34.311 [2024-07-23 01:50:47.394121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:7580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.311 [2024-07-23 01:50:47.394154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:29:34.311 [2024-07-23 01:50:47.405960] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5260) with pdu=0x2000190ef6a8 00:29:34.311 [2024-07-23 01:50:47.406983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:14182 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.311 [2024-07-23 
01:50:47.407017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:34.570 [2024-07-23 01:50:47.418444] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5260) with pdu=0x2000190eee38 00:29:34.570 [2024-07-23 01:50:47.419453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.570 [2024-07-23 01:50:47.419486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:29:34.570 [2024-07-23 01:50:47.430817] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5260) with pdu=0x2000190f1868 00:29:34.570 [2024-07-23 01:50:47.431853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:17565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.570 [2024-07-23 01:50:47.431883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:29:34.570 [2024-07-23 01:50:47.443270] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5260) with pdu=0x2000190f20d8 00:29:34.570 [2024-07-23 01:50:47.444307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:20522 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.570 [2024-07-23 01:50:47.444340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:34.570 [2024-07-23 01:50:47.455690] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5260) with pdu=0x2000190f4298 00:29:34.570 [2024-07-23 01:50:47.456718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:7257 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:29:34.570 [2024-07-23 01:50:47.456748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:34.570 [2024-07-23 01:50:47.468027] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5260) with pdu=0x2000190f0788 00:29:34.570 [2024-07-23 01:50:47.469130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:4655 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.570 [2024-07-23 01:50:47.469164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:34.570 [2024-07-23 01:50:47.480560] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5260) with pdu=0x2000190f0350 00:29:34.570 [2024-07-23 01:50:47.481641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:8636 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.570 [2024-07-23 01:50:47.481695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:29:34.570 [2024-07-23 01:50:47.493070] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5260) with pdu=0x2000190ea248 00:29:34.570 [2024-07-23 01:50:47.494212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:13989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.570 [2024-07-23 01:50:47.494245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:34.570 [2024-07-23 01:50:47.505627] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5260) with pdu=0x2000190ef270 00:29:34.570 [2024-07-23 01:50:47.506731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:100 nsid:1 lba:21657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.570 [2024-07-23 01:50:47.506775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:34.570 [2024-07-23 01:50:47.518009] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5260) with pdu=0x2000190ef270 00:29:34.570 [2024-07-23 01:50:47.519119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:10935 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.570 [2024-07-23 01:50:47.519152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:29:34.570 [2024-07-23 01:50:47.530419] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5260) with pdu=0x2000190ef270 00:29:34.570 [2024-07-23 01:50:47.531543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:21407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.570 [2024-07-23 01:50:47.531583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:34.570 [2024-07-23 01:50:47.542871] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5260) with pdu=0x2000190ef270 00:29:34.570 [2024-07-23 01:50:47.544057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:18216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.570 [2024-07-23 01:50:47.544092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:34.570 [2024-07-23 01:50:47.555407] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5260) with pdu=0x2000190ef270 00:29:34.570 [2024-07-23 01:50:47.556542] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:6542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.570 [2024-07-23 01:50:47.556577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:34.570 [2024-07-23 01:50:47.567935] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5260) with pdu=0x2000190ef270 00:29:34.570 [2024-07-23 01:50:47.569140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:24321 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.570 [2024-07-23 01:50:47.569174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:34.570 [2024-07-23 01:50:47.580508] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5260) with pdu=0x2000190ef270 00:29:34.570 [2024-07-23 01:50:47.581740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:15717 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.570 [2024-07-23 01:50:47.581768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:34.570 [2024-07-23 01:50:47.592872] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5260) with pdu=0x2000190ef270 00:29:34.570 [2024-07-23 01:50:47.594072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:321 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.570 [2024-07-23 01:50:47.594105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:34.570 [2024-07-23 01:50:47.605403] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5260) with pdu=0x2000190ef270 00:29:34.570 
[2024-07-23 01:50:47.606629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.570 [2024-07-23 01:50:47.606689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:34.570 [2024-07-23 01:50:47.617856] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5260) with pdu=0x2000190ef270 00:29:34.570 [2024-07-23 01:50:47.619109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:17809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.570 [2024-07-23 01:50:47.619143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:34.570 [2024-07-23 01:50:47.630356] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5260) with pdu=0x2000190ef270 00:29:34.570 [2024-07-23 01:50:47.631591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:8738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.570 [2024-07-23 01:50:47.631638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:34.570 [2024-07-23 01:50:47.642979] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5260) with pdu=0x2000190ef270 00:29:34.570 [2024-07-23 01:50:47.644288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:15342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.570 [2024-07-23 01:50:47.644322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:34.570 [2024-07-23 01:50:47.655513] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x12c5260) with pdu=0x2000190ef270 00:29:34.570 [2024-07-23 01:50:47.656860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:5615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.570 [2024-07-23 01:50:47.656888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:34.570 [2024-07-23 01:50:47.667961] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5260) with pdu=0x2000190ef270 00:29:34.830 [2024-07-23 01:50:47.669285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:14173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.830 [2024-07-23 01:50:47.669319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:29:34.830 [2024-07-23 01:50:47.680455] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5260) with pdu=0x2000190ef270 00:29:34.830 [2024-07-23 01:50:47.681799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:7894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.830 [2024-07-23 01:50:47.681827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:34.830 [2024-07-23 01:50:47.692869] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5260) with pdu=0x2000190ef270 00:29:34.830 [2024-07-23 01:50:47.694217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.830 [2024-07-23 01:50:47.694251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:34.830 [2024-07-23 01:50:47.705433] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5260) with pdu=0x2000190ef270 00:29:34.830 [2024-07-23 01:50:47.706754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:17630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.830 [2024-07-23 01:50:47.706797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:34.830 [2024-07-23 01:50:47.717940] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5260) with pdu=0x2000190ef270 00:29:34.830 [2024-07-23 01:50:47.719294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:3533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.830 [2024-07-23 01:50:47.719331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:29:34.830 [2024-07-23 01:50:47.730375] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5260) with pdu=0x2000190ea680 00:29:34.830 [2024-07-23 01:50:47.731751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:23666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.830 [2024-07-23 01:50:47.731780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:34.830 [2024-07-23 01:50:47.742795] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5260) with pdu=0x2000190e8d30 00:29:34.830 [2024-07-23 01:50:47.744154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.830 [2024-07-23 01:50:47.744188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 
00:29:34.830 [2024-07-23 01:50:47.755201] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5260) with pdu=0x2000190e8d30 00:29:34.830 [2024-07-23 01:50:47.756543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:19337 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.830 [2024-07-23 01:50:47.756578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:34.830 [2024-07-23 01:50:47.767490] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5260) with pdu=0x2000190ef270 00:29:34.830 [2024-07-23 01:50:47.768792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:13492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.830 [2024-07-23 01:50:47.768821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:34.830 [2024-07-23 01:50:47.779833] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5260) with pdu=0x2000190e5a90 00:29:34.830 [2024-07-23 01:50:47.781099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:20954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.830 [2024-07-23 01:50:47.781134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:29:34.830 [2024-07-23 01:50:47.792230] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5260) with pdu=0x2000190f0bc0 00:29:34.830 [2024-07-23 01:50:47.793391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:14051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.830 [2024-07-23 01:50:47.793425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:5 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:34.830 [2024-07-23 01:50:47.804665] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5260) with pdu=0x2000190f6cc8 00:29:34.830 [2024-07-23 01:50:47.806184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:9225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.830 [2024-07-23 01:50:47.806220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:34.830 [2024-07-23 01:50:47.818776] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5260) with pdu=0x2000190e9168 00:29:34.830 [2024-07-23 01:50:47.819209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:12375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.830 [2024-07-23 01:50:47.819243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:34.830 [2024-07-23 01:50:47.832245] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5260) with pdu=0x2000190e9168 00:29:34.830 [2024-07-23 01:50:47.832564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:10717 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.830 [2024-07-23 01:50:47.832597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:34.830 [2024-07-23 01:50:47.845848] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5260) with pdu=0x2000190e9168 00:29:34.830 [2024-07-23 01:50:47.846192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.830 [2024-07-23 01:50:47.846226] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:34.830 [2024-07-23 01:50:47.859262] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5260) with pdu=0x2000190e9168 00:29:34.830 [2024-07-23 01:50:47.859581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:22376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.830 [2024-07-23 01:50:47.859629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:34.830 [2024-07-23 01:50:47.872799] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5260) with pdu=0x2000190e9168 00:29:34.830 [2024-07-23 01:50:47.873179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:8045 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.830 [2024-07-23 01:50:47.873213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:34.830 [2024-07-23 01:50:47.886343] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5260) with pdu=0x2000190e9168 00:29:34.830 [2024-07-23 01:50:47.886707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.830 [2024-07-23 01:50:47.886752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:34.830 [2024-07-23 01:50:47.899800] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5260) with pdu=0x2000190e9168 00:29:34.830 [2024-07-23 01:50:47.900181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:22336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.830 [2024-07-23 01:50:47.900215] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:29:34.830 [2024-07-23 01:50:47.913382] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5260) with pdu=0x2000190e9168
00:29:34.830 [2024-07-23 01:50:47.913755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:9118 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:34.830 [2024-07-23 01:50:47.913785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0
[... repeated "Data digest error" / "COMMAND TRANSIENT TRANSPORT ERROR (00/22)" entries omitted ...]
00:29:35.906
00:29:35.906 Latency(us)
00:29:35.906 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:35.906 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:29:35.906 nvme0n1 : 2.01 19804.07 77.36 0.00 0.00 6449.17 3131.16 13689.74
00:29:35.906 ===================================================================================================================
00:29:35.906 Total : 19804.07 77.36 0.00 0.00 6449.17 3131.16 13689.74
00:29:35.906 0
00:29:35.906 01:50:48 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:35.906 01:50:48 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:35.906 01:50:48 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:35.906 01:50:48 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:35.906 | .driver_specific
00:29:35.906 | .nvme_error
00:29:35.906 | .status_code
00:29:35.906 | .command_transient_transport_error'
00:29:36.164 01:50:49 -- host/digest.sh@71 -- # (( 155 > 0 ))
00:29:36.164 01:50:49 -- host/digest.sh@73 -- # killprocess 3900031
00:29:36.164 01:50:49 -- common/autotest_common.sh@926 -- # '[' -z 3900031 ']'
00:29:36.164 01:50:49 -- common/autotest_common.sh@930 -- # kill -0 3900031
00:29:36.164 01:50:49 -- common/autotest_common.sh@931 -- # uname
00:29:36.164 01:50:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:29:36.164 01:50:49 -- common/autotest_common.sh@932 -- #
ps --no-headers -o comm= 3900031
00:29:36.164 01:50:49 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:29:36.164 01:50:49 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:29:36.164 01:50:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3900031'
killing process with pid 3900031
01:50:49 -- common/autotest_common.sh@945 -- # kill 3900031
Received shutdown signal, test time was about 2.000000 seconds
00:29:36.164
00:29:36.164 Latency(us)
00:29:36.164 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:36.164 ===================================================================================================================
00:29:36.164 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:36.164 01:50:49 -- common/autotest_common.sh@950 -- # wait 3900031
00:29:36.164 01:50:49 -- host/digest.sh@114 -- # run_bperf_err randwrite 131072 16
00:29:36.164 01:50:49 -- host/digest.sh@54 -- # local rw bs qd
00:29:36.164 01:50:49 -- host/digest.sh@56 -- # rw=randwrite
00:29:36.424 01:50:49 -- host/digest.sh@56 -- # bs=131072
00:29:36.424 01:50:49 -- host/digest.sh@56 -- # qd=16
00:29:36.424 01:50:49 -- host/digest.sh@58 -- # bperfpid=3900455
00:29:36.424 01:50:49 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:29:36.424 01:50:49 -- host/digest.sh@60 -- # waitforlisten 3900455 /var/tmp/bperf.sock
00:29:36.424 01:50:49 -- common/autotest_common.sh@819 -- # '[' -z 3900455 ']'
00:29:36.424 01:50:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:36.424 01:50:49 -- common/autotest_common.sh@824 -- # local max_retries=100
00:29:36.424 01:50:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:29:36.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:29:36.424 01:50:49 -- common/autotest_common.sh@828 -- # xtrace_disable
00:29:36.424 01:50:49 -- common/autotest_common.sh@10 -- # set +x
00:29:36.424 [2024-07-23 01:50:49.299727] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
00:29:36.424 [2024-07-23 01:50:49.299804] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3900455 ]
00:29:36.424 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:36.424 Zero copy mechanism will not be used.
00:29:36.424 EAL: No free 2048 kB hugepages reported on node 1
00:29:36.424 [2024-07-23 01:50:49.358656] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:36.424 [2024-07-23 01:50:49.443304] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:29:37.360 01:50:50 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:29:37.360 01:50:50 -- common/autotest_common.sh@852 -- # return 0
00:29:37.360 01:50:50 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:37.360 01:50:50 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:37.617 01:50:50 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:29:37.617 01:50:50 -- common/autotest_common.sh@551 -- # xtrace_disable
00:29:37.617 01:50:50 -- common/autotest_common.sh@10 -- # set +x
00:29:37.617 01:50:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:29:37.617 01:50:50 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:37.617
01:50:50 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:37.875 nvme0n1 00:29:38.133 01:50:50 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:29:38.133 01:50:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:38.133 01:50:50 -- common/autotest_common.sh@10 -- # set +x 00:29:38.133 01:50:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:38.133 01:50:50 -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:38.133 01:50:50 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:38.133 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:38.133 Zero copy mechanism will not be used. 00:29:38.133 Running I/O for 2 seconds... 00:29:38.133 [2024-07-23 01:50:51.116579] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5400) with pdu=0x2000190fef90 00:29:38.133 [2024-07-23 01:50:51.117083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.133 [2024-07-23 01:50:51.117128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:38.133 [2024-07-23 01:50:51.131984] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5400) with pdu=0x2000190fef90 00:29:38.133 [2024-07-23 01:50:51.132244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.133 [2024-07-23 01:50:51.132290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0041 p:0 m:0 dnr:0 00:29:38.133 [2024-07-23 01:50:51.147662] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5400) with pdu=0x2000190fef90 00:29:38.133 [2024-07-23 01:50:51.147889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.133 [2024-07-23 01:50:51.147937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:38.133 [2024-07-23 01:50:51.163159] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5400) with pdu=0x2000190fef90 00:29:38.133 [2024-07-23 01:50:51.163576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.133 [2024-07-23 01:50:51.163607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.133 [2024-07-23 01:50:51.179272] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5400) with pdu=0x2000190fef90 00:29:38.133 [2024-07-23 01:50:51.179746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.133 [2024-07-23 01:50:51.179781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:38.133 [2024-07-23 01:50:51.195330] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5400) with pdu=0x2000190fef90 00:29:38.133 [2024-07-23 01:50:51.195772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.133 [2024-07-23 01:50:51.195806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:38.133 [2024-07-23 01:50:51.210310] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5400) with pdu=0x2000190fef90 00:29:38.133 [2024-07-23 01:50:51.210760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.133 [2024-07-23 01:50:51.210796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:38.133 [2024-07-23 01:50:51.225997] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5400) with pdu=0x2000190fef90 00:29:38.133 [2024-07-23 01:50:51.226478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.133 [2024-07-23 01:50:51.226517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.392 [2024-07-23 01:50:51.242378] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5400) with pdu=0x2000190fef90 00:29:38.392 [2024-07-23 01:50:51.242799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.392 [2024-07-23 01:50:51.242846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:38.392 [2024-07-23 01:50:51.257947] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5400) with pdu=0x2000190fef90 00:29:38.392 [2024-07-23 01:50:51.258517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.392 [2024-07-23 01:50:51.258551] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:38.392 [2024-07-23 01:50:51.275105] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5400) with pdu=0x2000190fef90 00:29:38.392 [2024-07-23 01:50:51.275522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.392 [2024-07-23 01:50:51.275553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:38.392 [2024-07-23 01:50:51.290696] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5400) with pdu=0x2000190fef90 00:29:38.392 [2024-07-23 01:50:51.291101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.392 [2024-07-23 01:50:51.291136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.392 [2024-07-23 01:50:51.306719] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5400) with pdu=0x2000190fef90 00:29:38.392 [2024-07-23 01:50:51.307039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.392 [2024-07-23 01:50:51.307073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:38.392 [2024-07-23 01:50:51.321471] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5400) with pdu=0x2000190fef90 00:29:38.392 [2024-07-23 01:50:51.321869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:38.392 [2024-07-23 01:50:51.321956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:38.392 [2024-07-23 01:50:51.335501] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5400) with pdu=0x2000190fef90 00:29:38.392 [2024-07-23 01:50:51.336068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.392 [2024-07-23 01:50:51.336118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:38.392 [2024-07-23 01:50:51.352053] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5400) with pdu=0x2000190fef90 00:29:38.392 [2024-07-23 01:50:51.352505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.392 [2024-07-23 01:50:51.352537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.392 [2024-07-23 01:50:51.368291] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5400) with pdu=0x2000190fef90 00:29:38.392 [2024-07-23 01:50:51.368604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.392 [2024-07-23 01:50:51.368666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:38.392 [2024-07-23 01:50:51.383354] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5400) with pdu=0x2000190fef90 00:29:38.392 [2024-07-23 01:50:51.383713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.392 [2024-07-23 01:50:51.383745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:38.392 [2024-07-23 01:50:51.397963] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5400) with pdu=0x2000190fef90 00:29:38.392 [2024-07-23 01:50:51.398393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.392 [2024-07-23 01:50:51.398423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:38.392 [2024-07-23 01:50:51.415305] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5400) with pdu=0x2000190fef90 00:29:38.392 [2024-07-23 01:50:51.415779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.392 [2024-07-23 01:50:51.415809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.392 [2024-07-23 01:50:51.430578] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5400) with pdu=0x2000190fef90 00:29:38.392 [2024-07-23 01:50:51.431038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.392 [2024-07-23 01:50:51.431092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:38.392 [2024-07-23 01:50:51.447725] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5400) with pdu=0x2000190fef90 00:29:38.392 [2024-07-23 01:50:51.448126] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.392 [2024-07-23 01:50:51.448156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:38.392 [2024-07-23 01:50:51.463031] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5400) with pdu=0x2000190fef90 00:29:38.392 [2024-07-23 01:50:51.463407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.392 [2024-07-23 01:50:51.463460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:38.392 [2024-07-23 01:50:51.479089] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5400) with pdu=0x2000190fef90 00:29:38.392 [2024-07-23 01:50:51.479490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.392 [2024-07-23 01:50:51.479520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.651 [2024-07-23 01:50:51.495500] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5400) with pdu=0x2000190fef90 00:29:38.651 [2024-07-23 01:50:51.496149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.651 [2024-07-23 01:50:51.496202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:38.651 [2024-07-23 01:50:51.512604] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5400) with pdu=0x2000190fef90 
00:29:38.651 [2024-07-23 01:50:51.513070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.651 [2024-07-23 01:50:51.513100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:38.651 [2024-07-23 01:50:51.527583] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5400) with pdu=0x2000190fef90 00:29:38.651 [2024-07-23 01:50:51.528093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.651 [2024-07-23 01:50:51.528125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:38.651 [2024-07-23 01:50:51.543709] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5400) with pdu=0x2000190fef90 00:29:38.651 [2024-07-23 01:50:51.544219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.651 [2024-07-23 01:50:51.544249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.651 [2024-07-23 01:50:51.559736] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5400) with pdu=0x2000190fef90 00:29:38.651 [2024-07-23 01:50:51.560054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.651 [2024-07-23 01:50:51.560103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:38.651 [2024-07-23 01:50:51.575718] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x12c5400) with pdu=0x2000190fef90 00:29:38.651 [2024-07-23 01:50:51.576136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.651 [2024-07-23 01:50:51.576188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:38.651 [2024-07-23 01:50:51.592843] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5400) with pdu=0x2000190fef90 00:29:38.651 [2024-07-23 01:50:51.593262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.651 [2024-07-23 01:50:51.593310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:38.651 [2024-07-23 01:50:51.609772] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5400) with pdu=0x2000190fef90 00:29:38.651 [2024-07-23 01:50:51.610471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.651 [2024-07-23 01:50:51.610501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.651 [2024-07-23 01:50:51.625237] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5400) with pdu=0x2000190fef90 00:29:38.651 [2024-07-23 01:50:51.625797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.651 [2024-07-23 01:50:51.625830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:38.651 [2024-07-23 
01:50:51.641943] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5400) with pdu=0x2000190fef90 00:29:38.651 [2024-07-23 01:50:51.642208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.651 [2024-07-23 01:50:51.642243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:38.651 [2024-07-23 01:50:51.658035] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5400) with pdu=0x2000190fef90 00:29:38.651 [2024-07-23 01:50:51.658502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.651 [2024-07-23 01:50:51.658540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:38.651 [2024-07-23 01:50:51.674729] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5400) with pdu=0x2000190fef90 00:29:38.651 [2024-07-23 01:50:51.675140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.651 [2024-07-23 01:50:51.675174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.651 [2024-07-23 01:50:51.691422] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5400) with pdu=0x2000190fef90 00:29:38.651 [2024-07-23 01:50:51.691885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.651 [2024-07-23 01:50:51.691938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:38.651 [2024-07-23 01:50:51.708595] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5400) with pdu=0x2000190fef90 00:29:38.651 [2024-07-23 01:50:51.709025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.651 [2024-07-23 01:50:51.709058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:38.651 [2024-07-23 01:50:51.724470] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5400) with pdu=0x2000190fef90 00:29:38.651 [2024-07-23 01:50:51.724916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.651 [2024-07-23 01:50:51.724946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:38.651 [2024-07-23 01:50:51.741215] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5400) with pdu=0x2000190fef90 00:29:38.651 [2024-07-23 01:50:51.741678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.651 [2024-07-23 01:50:51.741708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.910 [2024-07-23 01:50:51.757608] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5400) with pdu=0x2000190fef90 00:29:38.910 [2024-07-23 01:50:51.758203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.910 [2024-07-23 01:50:51.758240] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:38.910 [2024-07-23 01:50:51.773887] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5400) with pdu=0x2000190fef90 00:29:38.910 [2024-07-23 01:50:51.774363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.910 [2024-07-23 01:50:51.774413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:38.910 [2024-07-23 01:50:51.789398] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5400) with pdu=0x2000190fef90 00:29:38.910 [2024-07-23 01:50:51.789870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.910 [2024-07-23 01:50:51.789903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:38.910 [2024-07-23 01:50:51.806620] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5400) with pdu=0x2000190fef90 00:29:38.910 [2024-07-23 01:50:51.807127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.910 [2024-07-23 01:50:51.807156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.910 [2024-07-23 01:50:51.823653] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5400) with pdu=0x2000190fef90 00:29:38.910 [2024-07-23 01:50:51.824016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.910 [2024-07-23 
01:50:51.824048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:38.910 [2024-07-23 01:50:51.841032] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5400) with pdu=0x2000190fef90 00:29:38.910 [2024-07-23 01:50:51.841592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.910 [2024-07-23 01:50:51.841631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:38.910 [2024-07-23 01:50:51.858668] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5400) with pdu=0x2000190fef90 00:29:38.910 [2024-07-23 01:50:51.859049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.910 [2024-07-23 01:50:51.859081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:38.910 [2024-07-23 01:50:51.875855] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5400) with pdu=0x2000190fef90 00:29:38.910 [2024-07-23 01:50:51.876381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.910 [2024-07-23 01:50:51.876447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.910 [2024-07-23 01:50:51.892802] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5400) with pdu=0x2000190fef90 00:29:38.910 [2024-07-23 01:50:51.893442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.910 [2024-07-23 01:50:51.893489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:38.910 [2024-07-23 01:50:51.908655] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5400) with pdu=0x2000190fef90 00:29:38.910 [2024-07-23 01:50:51.909133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.910 [2024-07-23 01:50:51.909168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:38.910 [2024-07-23 01:50:51.923638] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5400) with pdu=0x2000190fef90 00:29:38.910 [2024-07-23 01:50:51.924068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.910 [2024-07-23 01:50:51.924103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:38.910 [2024-07-23 01:50:51.939293] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5400) with pdu=0x2000190fef90 00:29:38.910 [2024-07-23 01:50:51.939805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.910 [2024-07-23 01:50:51.939838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.910 [2024-07-23 01:50:51.956710] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5400) with pdu=0x2000190fef90 00:29:38.910 [2024-07-23 01:50:51.957079] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.910 [2024-07-23 01:50:51.957113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:38.910 [2024-07-23 01:50:51.973668] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5400) with pdu=0x2000190fef90 00:29:38.910 [2024-07-23 01:50:51.974058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.910 [2024-07-23 01:50:51.974088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:38.910 [2024-07-23 01:50:51.989223] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5400) with pdu=0x2000190fef90 00:29:38.910 [2024-07-23 01:50:51.989558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.910 [2024-07-23 01:50:51.989595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:38.910 [2024-07-23 01:50:52.006330] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5400) with pdu=0x2000190fef90 00:29:38.910 [2024-07-23 01:50:52.006857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.910 [2024-07-23 01:50:52.006924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.169 [2024-07-23 01:50:52.023041] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5400) with pdu=0x2000190fef90 00:29:39.169 [2024-07-23 
01:50:52.023445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.169 [2024-07-23 01:50:52.023516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... many similar record pairs elided: tcp.c:2034:data_crc32_calc_done *ERROR*: Data digest error on tqpair=(0x12c5400) followed by nvme_qpair.c *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22), repeating with varying LBA and sqhd values from 01:50:52.038 through 01:50:53.082 ...]
00:29:40.205 [2024-07-23 01:50:53.098171] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x12c5400) with pdu=0x2000190fef90 00:29:40.205 [2024-07-23 01:50:53.098582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.205 [2024-07-23 01:50:53.098629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:40.205
00:29:40.205 Latency(us)
00:29:40.205 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:40.205 Job: nvme0n1 (Core Mask 0x2, workload:
randwrite, depth: 16, IO size: 131072) 00:29:40.205 nvme0n1 : 2.01 1891.77 236.47 0.00 0.00 8435.71 6068.15 19126.80 00:29:40.205 =================================================================================================================== 00:29:40.205 Total : 1891.77 236.47 0.00 0.00 8435.71 6068.15 19126.80 00:29:40.205 0 00:29:40.205 01:50:53 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:40.205 01:50:53 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:40.205 01:50:53 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:40.205 01:50:53 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:40.205 | .driver_specific 00:29:40.205 | .nvme_error 00:29:40.205 | .status_code 00:29:40.205 | .command_transient_transport_error' 00:29:40.462 01:50:53 -- host/digest.sh@71 -- # (( 122 > 0 )) 00:29:40.462 01:50:53 -- host/digest.sh@73 -- # killprocess 3900455 00:29:40.462 01:50:53 -- common/autotest_common.sh@926 -- # '[' -z 3900455 ']' 00:29:40.462 01:50:53 -- common/autotest_common.sh@930 -- # kill -0 3900455 00:29:40.462 01:50:53 -- common/autotest_common.sh@931 -- # uname 00:29:40.462 01:50:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:40.462 01:50:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3900455 00:29:40.462 01:50:53 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:29:40.462 01:50:53 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:29:40.462 01:50:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3900455' 00:29:40.462 killing process with pid 3900455 00:29:40.462 01:50:53 -- common/autotest_common.sh@945 -- # kill 3900455 00:29:40.463 Received shutdown signal, test time was about 2.000000 seconds 00:29:40.463 00:29:40.463 Latency(us) 00:29:40.463 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:40.463 
=================================================================================================================== 00:29:40.463 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:40.463 01:50:53 -- common/autotest_common.sh@950 -- # wait 3900455 00:29:40.720 01:50:53 -- host/digest.sh@115 -- # killprocess 3898902 00:29:40.720 01:50:53 -- common/autotest_common.sh@926 -- # '[' -z 3898902 ']' 00:29:40.720 01:50:53 -- common/autotest_common.sh@930 -- # kill -0 3898902 00:29:40.720 01:50:53 -- common/autotest_common.sh@931 -- # uname 00:29:40.720 01:50:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:40.720 01:50:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3898902 00:29:40.720 01:50:53 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:40.720 01:50:53 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:40.720 01:50:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3898902' 00:29:40.720 killing process with pid 3898902 00:29:40.720 01:50:53 -- common/autotest_common.sh@945 -- # kill 3898902 00:29:40.720 01:50:53 -- common/autotest_common.sh@950 -- # wait 3898902 00:29:40.980 00:29:40.980 real 0m17.320s 00:29:40.980 user 0m34.919s 00:29:40.980 sys 0m4.317s 00:29:40.980 01:50:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:40.980 01:50:53 -- common/autotest_common.sh@10 -- # set +x 00:29:40.980 ************************************ 00:29:40.980 END TEST nvmf_digest_error 00:29:40.980 ************************************ 00:29:40.980 01:50:53 -- host/digest.sh@138 -- # trap - SIGINT SIGTERM EXIT 00:29:40.980 01:50:53 -- host/digest.sh@139 -- # nvmftestfini 00:29:40.980 01:50:53 -- nvmf/common.sh@476 -- # nvmfcleanup 00:29:40.980 01:50:53 -- nvmf/common.sh@116 -- # sync 00:29:40.980 01:50:53 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:29:40.980 01:50:53 -- nvmf/common.sh@119 -- # set +e 00:29:40.980 01:50:53 -- nvmf/common.sh@120 -- # for i in {1..20} 
00:29:40.980 01:50:53 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:29:40.980 rmmod nvme_tcp 00:29:40.980 rmmod nvme_fabrics 00:29:40.980 rmmod nvme_keyring 00:29:40.980 01:50:53 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:29:40.980 01:50:53 -- nvmf/common.sh@123 -- # set -e 00:29:40.980 01:50:53 -- nvmf/common.sh@124 -- # return 0 00:29:40.980 01:50:53 -- nvmf/common.sh@477 -- # '[' -n 3898902 ']' 00:29:40.980 01:50:53 -- nvmf/common.sh@478 -- # killprocess 3898902 00:29:40.980 01:50:53 -- common/autotest_common.sh@926 -- # '[' -z 3898902 ']' 00:29:40.980 01:50:53 -- common/autotest_common.sh@930 -- # kill -0 3898902 00:29:40.980 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (3898902) - No such process 00:29:40.980 01:50:53 -- common/autotest_common.sh@953 -- # echo 'Process with pid 3898902 is not found' 00:29:40.980 Process with pid 3898902 is not found 00:29:40.980 01:50:53 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:29:40.980 01:50:53 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:29:40.980 01:50:53 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:29:40.980 01:50:53 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:40.980 01:50:53 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:29:40.980 01:50:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:40.980 01:50:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:40.980 01:50:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:43.515 01:50:55 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:29:43.515 00:29:43.515 real 0m36.743s 00:29:43.515 user 1m5.567s 00:29:43.515 sys 0m10.044s 00:29:43.515 01:50:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:43.515 01:50:56 -- common/autotest_common.sh@10 -- # set +x 00:29:43.515 ************************************ 00:29:43.515 END TEST nvmf_digest 00:29:43.515 
************************************ 00:29:43.515 01:50:56 -- nvmf/nvmf.sh@110 -- # [[ 0 -eq 1 ]] 00:29:43.515 01:50:56 -- nvmf/nvmf.sh@115 -- # [[ 0 -eq 1 ]] 00:29:43.515 01:50:56 -- nvmf/nvmf.sh@120 -- # [[ phy == phy ]] 00:29:43.515 01:50:56 -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:29:43.515 01:50:56 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:29:43.515 01:50:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:43.515 01:50:56 -- common/autotest_common.sh@10 -- # set +x 00:29:43.515 ************************************ 00:29:43.515 START TEST nvmf_bdevperf 00:29:43.515 ************************************ 00:29:43.515 01:50:56 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:29:43.515 * Looking for test storage... 00:29:43.515 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:43.515 01:50:56 -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:43.515 01:50:56 -- nvmf/common.sh@7 -- # uname -s 00:29:43.515 01:50:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:43.515 01:50:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:43.515 01:50:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:43.515 01:50:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:43.515 01:50:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:43.515 01:50:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:43.515 01:50:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:43.515 01:50:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:43.515 01:50:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:43.515 01:50:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:43.515 01:50:56 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:43.515 01:50:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:43.515 01:50:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:43.515 01:50:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:43.515 01:50:56 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:43.515 01:50:56 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:43.515 01:50:56 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:43.515 01:50:56 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:43.515 01:50:56 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:43.516 01:50:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:43.516 01:50:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:43.516 01:50:56 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:43.516 01:50:56 -- paths/export.sh@5 -- # export PATH 00:29:43.516 01:50:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:43.516 01:50:56 -- nvmf/common.sh@46 -- # : 0 00:29:43.516 01:50:56 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:29:43.516 01:50:56 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:29:43.516 01:50:56 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:29:43.516 01:50:56 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:43.516 01:50:56 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:43.516 01:50:56 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:29:43.516 01:50:56 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:29:43.516 01:50:56 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:29:43.516 01:50:56 -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:43.516 01:50:56 -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:43.516 01:50:56 -- host/bdevperf.sh@24 -- # 
nvmftestinit 00:29:43.516 01:50:56 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:29:43.516 01:50:56 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:43.516 01:50:56 -- nvmf/common.sh@436 -- # prepare_net_devs 00:29:43.516 01:50:56 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:29:43.516 01:50:56 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:29:43.516 01:50:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:43.516 01:50:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:43.516 01:50:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:43.516 01:50:56 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:29:43.516 01:50:56 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:29:43.516 01:50:56 -- nvmf/common.sh@284 -- # xtrace_disable 00:29:43.516 01:50:56 -- common/autotest_common.sh@10 -- # set +x 00:29:45.420 01:50:58 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:29:45.420 01:50:58 -- nvmf/common.sh@290 -- # pci_devs=() 00:29:45.420 01:50:58 -- nvmf/common.sh@290 -- # local -a pci_devs 00:29:45.420 01:50:58 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:29:45.420 01:50:58 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:29:45.420 01:50:58 -- nvmf/common.sh@292 -- # pci_drivers=() 00:29:45.420 01:50:58 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:29:45.420 01:50:58 -- nvmf/common.sh@294 -- # net_devs=() 00:29:45.420 01:50:58 -- nvmf/common.sh@294 -- # local -ga net_devs 00:29:45.420 01:50:58 -- nvmf/common.sh@295 -- # e810=() 00:29:45.420 01:50:58 -- nvmf/common.sh@295 -- # local -ga e810 00:29:45.420 01:50:58 -- nvmf/common.sh@296 -- # x722=() 00:29:45.420 01:50:58 -- nvmf/common.sh@296 -- # local -ga x722 00:29:45.420 01:50:58 -- nvmf/common.sh@297 -- # mlx=() 00:29:45.420 01:50:58 -- nvmf/common.sh@297 -- # local -ga mlx 00:29:45.420 01:50:58 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:45.420 01:50:58 -- 
nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:45.420 01:50:58 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:45.420 01:50:58 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:45.420 01:50:58 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:45.420 01:50:58 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:45.421 01:50:58 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:45.421 01:50:58 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:45.421 01:50:58 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:45.421 01:50:58 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:45.421 01:50:58 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:45.421 01:50:58 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:29:45.421 01:50:58 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:29:45.421 01:50:58 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:29:45.421 01:50:58 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:29:45.421 01:50:58 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:29:45.421 01:50:58 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:29:45.421 01:50:58 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:45.421 01:50:58 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:45.421 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:45.421 01:50:58 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:45.421 01:50:58 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:45.421 01:50:58 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:45.421 01:50:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:45.421 01:50:58 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:45.421 01:50:58 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:45.421 01:50:58 -- 
nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:45.421 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:45.421 01:50:58 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:45.421 01:50:58 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:45.421 01:50:58 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:45.421 01:50:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:45.421 01:50:58 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:45.421 01:50:58 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:29:45.421 01:50:58 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:29:45.421 01:50:58 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:29:45.421 01:50:58 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:45.421 01:50:58 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:45.421 01:50:58 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:45.421 01:50:58 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:45.421 01:50:58 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:45.421 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:45.421 01:50:58 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:45.421 01:50:58 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:45.421 01:50:58 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:45.421 01:50:58 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:45.421 01:50:58 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:45.421 01:50:58 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:45.421 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:45.421 01:50:58 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:45.421 01:50:58 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:29:45.421 01:50:58 -- nvmf/common.sh@402 -- # is_hw=yes 00:29:45.421 01:50:58 -- nvmf/common.sh@404 -- # [[ yes == yes 
]] 00:29:45.421 01:50:58 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:29:45.421 01:50:58 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:29:45.421 01:50:58 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:45.421 01:50:58 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:45.421 01:50:58 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:45.421 01:50:58 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:29:45.421 01:50:58 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:45.421 01:50:58 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:45.421 01:50:58 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:29:45.421 01:50:58 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:45.421 01:50:58 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:45.421 01:50:58 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:29:45.421 01:50:58 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:29:45.421 01:50:58 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:29:45.421 01:50:58 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:45.421 01:50:58 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:45.421 01:50:58 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:45.421 01:50:58 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:29:45.421 01:50:58 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:45.421 01:50:58 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:45.421 01:50:58 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:45.421 01:50:58 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:29:45.421 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:45.421 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.242 ms 00:29:45.421 00:29:45.421 --- 10.0.0.2 ping statistics --- 00:29:45.421 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:45.421 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:29:45.421 01:50:58 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:45.421 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:45.421 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:29:45.421 00:29:45.421 --- 10.0.0.1 ping statistics --- 00:29:45.421 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:45.421 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:29:45.421 01:50:58 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:45.421 01:50:58 -- nvmf/common.sh@410 -- # return 0 00:29:45.421 01:50:58 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:29:45.421 01:50:58 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:45.421 01:50:58 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:29:45.421 01:50:58 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:29:45.421 01:50:58 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:45.421 01:50:58 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:29:45.421 01:50:58 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:29:45.421 01:50:58 -- host/bdevperf.sh@25 -- # tgt_init 00:29:45.421 01:50:58 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:29:45.421 01:50:58 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:29:45.421 01:50:58 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:45.421 01:50:58 -- common/autotest_common.sh@10 -- # set +x 00:29:45.421 01:50:58 -- nvmf/common.sh@469 -- # nvmfpid=3902963 00:29:45.421 01:50:58 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:45.421 01:50:58 -- nvmf/common.sh@470 -- # waitforlisten 
3902963 00:29:45.421 01:50:58 -- common/autotest_common.sh@819 -- # '[' -z 3902963 ']' 00:29:45.421 01:50:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:45.421 01:50:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:45.421 01:50:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:45.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:45.421 01:50:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:45.421 01:50:58 -- common/autotest_common.sh@10 -- # set +x 00:29:45.421 [2024-07-23 01:50:58.341440] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:29:45.421 [2024-07-23 01:50:58.341507] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:45.421 EAL: No free 2048 kB hugepages reported on node 1 00:29:45.421 [2024-07-23 01:50:58.405228] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:45.421 [2024-07-23 01:50:58.488979] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:45.421 [2024-07-23 01:50:58.489140] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:45.421 [2024-07-23 01:50:58.489157] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:45.421 [2024-07-23 01:50:58.489170] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:45.421 [2024-07-23 01:50:58.489253] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:45.421 [2024-07-23 01:50:58.489319] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:45.421 [2024-07-23 01:50:58.489321] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:46.356 01:50:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:46.356 01:50:59 -- common/autotest_common.sh@852 -- # return 0 00:29:46.356 01:50:59 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:29:46.356 01:50:59 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:46.356 01:50:59 -- common/autotest_common.sh@10 -- # set +x 00:29:46.356 01:50:59 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:46.356 01:50:59 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:46.356 01:50:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:46.356 01:50:59 -- common/autotest_common.sh@10 -- # set +x 00:29:46.356 [2024-07-23 01:50:59.317763] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:46.356 01:50:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:46.356 01:50:59 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:46.356 01:50:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:46.356 01:50:59 -- common/autotest_common.sh@10 -- # set +x 00:29:46.356 Malloc0 00:29:46.356 01:50:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:46.356 01:50:59 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:46.356 01:50:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:46.356 01:50:59 -- common/autotest_common.sh@10 -- # set +x 00:29:46.356 01:50:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:46.356 01:50:59 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:46.356 01:50:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:46.356 01:50:59 -- common/autotest_common.sh@10 -- # set +x 00:29:46.356 01:50:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:46.356 01:50:59 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:46.356 01:50:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:46.356 01:50:59 -- common/autotest_common.sh@10 -- # set +x 00:29:46.356 [2024-07-23 01:50:59.379477] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:46.356 01:50:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:46.356 01:50:59 -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:29:46.356 01:50:59 -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:29:46.356 01:50:59 -- nvmf/common.sh@520 -- # config=() 00:29:46.356 01:50:59 -- nvmf/common.sh@520 -- # local subsystem config 00:29:46.356 01:50:59 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:29:46.356 01:50:59 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:29:46.356 { 00:29:46.356 "params": { 00:29:46.356 "name": "Nvme$subsystem", 00:29:46.356 "trtype": "$TEST_TRANSPORT", 00:29:46.356 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:46.356 "adrfam": "ipv4", 00:29:46.356 "trsvcid": "$NVMF_PORT", 00:29:46.356 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:46.356 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:46.356 "hdgst": ${hdgst:-false}, 00:29:46.356 "ddgst": ${ddgst:-false} 00:29:46.356 }, 00:29:46.356 "method": "bdev_nvme_attach_controller" 00:29:46.356 } 00:29:46.356 EOF 00:29:46.356 )") 00:29:46.356 01:50:59 -- nvmf/common.sh@542 -- # cat 00:29:46.356 01:50:59 -- nvmf/common.sh@544 -- # jq . 
00:29:46.356 01:50:59 -- nvmf/common.sh@545 -- # IFS=, 00:29:46.356 01:50:59 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:29:46.356 "params": { 00:29:46.356 "name": "Nvme1", 00:29:46.356 "trtype": "tcp", 00:29:46.356 "traddr": "10.0.0.2", 00:29:46.356 "adrfam": "ipv4", 00:29:46.356 "trsvcid": "4420", 00:29:46.356 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:46.356 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:46.356 "hdgst": false, 00:29:46.356 "ddgst": false 00:29:46.356 }, 00:29:46.356 "method": "bdev_nvme_attach_controller" 00:29:46.356 }' 00:29:46.356 [2024-07-23 01:50:59.420164] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:29:46.357 [2024-07-23 01:50:59.420245] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3903121 ] 00:29:46.357 EAL: No free 2048 kB hugepages reported on node 1 00:29:46.615 [2024-07-23 01:50:59.480530] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:46.615 [2024-07-23 01:50:59.569291] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:46.873 Running I/O for 1 seconds... 
00:29:48.247 00:29:48.247 Latency(us) 00:29:48.247 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:48.247 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:48.247 Verification LBA range: start 0x0 length 0x4000 00:29:48.247 Nvme1n1 : 1.01 12756.04 49.83 0.00 0.00 9989.26 1353.20 17379.18 00:29:48.247 =================================================================================================================== 00:29:48.247 Total : 12756.04 49.83 0.00 0.00 9989.26 1353.20 17379.18 00:29:48.247 01:51:01 -- host/bdevperf.sh@30 -- # bdevperfpid=3903342 00:29:48.247 01:51:01 -- host/bdevperf.sh@32 -- # sleep 3 00:29:48.247 01:51:01 -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:29:48.247 01:51:01 -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:29:48.247 01:51:01 -- nvmf/common.sh@520 -- # config=() 00:29:48.247 01:51:01 -- nvmf/common.sh@520 -- # local subsystem config 00:29:48.247 01:51:01 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:29:48.247 01:51:01 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:29:48.247 { 00:29:48.247 "params": { 00:29:48.247 "name": "Nvme$subsystem", 00:29:48.247 "trtype": "$TEST_TRANSPORT", 00:29:48.247 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:48.247 "adrfam": "ipv4", 00:29:48.247 "trsvcid": "$NVMF_PORT", 00:29:48.247 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:48.247 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:48.247 "hdgst": ${hdgst:-false}, 00:29:48.247 "ddgst": ${ddgst:-false} 00:29:48.247 }, 00:29:48.247 "method": "bdev_nvme_attach_controller" 00:29:48.247 } 00:29:48.247 EOF 00:29:48.247 )") 00:29:48.247 01:51:01 -- nvmf/common.sh@542 -- # cat 00:29:48.247 01:51:01 -- nvmf/common.sh@544 -- # jq . 
00:29:48.247 01:51:01 -- nvmf/common.sh@545 -- # IFS=, 00:29:48.247 01:51:01 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:29:48.247 "params": { 00:29:48.247 "name": "Nvme1", 00:29:48.247 "trtype": "tcp", 00:29:48.247 "traddr": "10.0.0.2", 00:29:48.247 "adrfam": "ipv4", 00:29:48.247 "trsvcid": "4420", 00:29:48.247 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:48.247 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:48.247 "hdgst": false, 00:29:48.247 "ddgst": false 00:29:48.247 }, 00:29:48.247 "method": "bdev_nvme_attach_controller" 00:29:48.247 }' 00:29:48.247 [2024-07-23 01:51:01.180191] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:29:48.247 [2024-07-23 01:51:01.180288] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3903342 ] 00:29:48.247 EAL: No free 2048 kB hugepages reported on node 1 00:29:48.247 [2024-07-23 01:51:01.241659] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:48.247 [2024-07-23 01:51:01.325699] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:48.813 Running I/O for 15 seconds... 
00:29:51.346 01:51:04 -- host/bdevperf.sh@33 -- # kill -9 3902963 00:29:51.346 01:51:04 -- host/bdevperf.sh@35 -- # sleep 3 00:29:51.346 [2024-07-23 01:51:04.149842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:124120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.346 [2024-07-23 01:51:04.149902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.346 [2024-07-23 01:51:04.149947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:124128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.346 [2024-07-23 01:51:04.149972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.346 [2024-07-23 01:51:04.149989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:124136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.346 [2024-07-23 01:51:04.150005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.346 [2024-07-23 01:51:04.150021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:124144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.346 [2024-07-23 01:51:04.150053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.346 [2024-07-23 01:51:04.150072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:123568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.346 [2024-07-23 01:51:04.150090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.346 [2024-07-23 01:51:04.150108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:69 nsid:1 lba:123584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.346 [2024-07-23 01:51:04.150125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[log condensed: nvme_qpair.c repeated the same NOTICE pair for every outstanding I/O on qid:1 — a nvme_io_qpair_print_command line for a READ or WRITE (nsid:1, lba between 123584 and 124864, len:8) followed by a spdk_nvme_print_completion line reporting ABORTED - SQ DELETION (00/08) — from [2024-07-23 01:51:04.150125] through [2024-07-23 01:51:04.154318]]
00:29:51.349 [2024-07-23 01:51:04.154336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1
lba:124344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.349 [2024-07-23 01:51:04.154351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.349 [2024-07-23 01:51:04.154372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:124392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.349 [2024-07-23 01:51:04.154389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.349 [2024-07-23 01:51:04.154406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:124416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.349 [2024-07-23 01:51:04.154422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.349 [2024-07-23 01:51:04.154438] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1554500 is same with the state(5) to be set 00:29:51.349 [2024-07-23 01:51:04.154457] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:51.349 [2024-07-23 01:51:04.154470] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:51.349 [2024-07-23 01:51:04.154482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:124432 len:8 PRP1 0x0 PRP2 0x0 00:29:51.349 [2024-07-23 01:51:04.154496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.349 [2024-07-23 01:51:04.154571] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1554500 was disconnected and freed. reset controller. 
00:29:51.349 [2024-07-23 01:51:04.154679] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:51.349 [2024-07-23 01:51:04.154702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.349 [2024-07-23 01:51:04.154718] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:51.349 [2024-07-23 01:51:04.154731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.349 [2024-07-23 01:51:04.154745] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:51.349 [2024-07-23 01:51:04.154758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.349 [2024-07-23 01:51:04.154772] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:51.349 [2024-07-23 01:51:04.154785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:51.349 [2024-07-23 01:51:04.154798] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:51.349 [2024-07-23 01:51:04.157114] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.349 [2024-07-23 01:51:04.157155] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:51.349 [2024-07-23 01:51:04.157756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.349 [2024-07-23 
01:51:04.157941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.349 [2024-07-23 01:51:04.157985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:51.349 [2024-07-23 01:51:04.158003] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:51.349 [2024-07-23 01:51:04.158172] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:51.349 [2024-07-23 01:51:04.158351] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.349 [2024-07-23 01:51:04.158374] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.349 [2024-07-23 01:51:04.158397] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.349 [2024-07-23 01:51:04.160859] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:51.349 [2024-07-23 01:51:04.170064] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.349 [2024-07-23 01:51:04.170488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.349 [2024-07-23 01:51:04.170726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.349 [2024-07-23 01:51:04.170753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:51.349 [2024-07-23 01:51:04.170770] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:51.349 [2024-07-23 01:51:04.170937] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:51.349 [2024-07-23 01:51:04.171126] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.349 [2024-07-23 01:51:04.171150] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.349 [2024-07-23 01:51:04.171166] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.349 [2024-07-23 01:51:04.173436] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:51.349 [2024-07-23 01:51:04.182464] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.349 [2024-07-23 01:51:04.182885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.349 [2024-07-23 01:51:04.183049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.349 [2024-07-23 01:51:04.183079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:51.349 [2024-07-23 01:51:04.183096] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:51.349 [2024-07-23 01:51:04.183280] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:51.349 [2024-07-23 01:51:04.183414] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.349 [2024-07-23 01:51:04.183439] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.349 [2024-07-23 01:51:04.183455] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.349 [2024-07-23 01:51:04.185856] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:51.349 [2024-07-23 01:51:04.195104] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.349 [2024-07-23 01:51:04.195485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.350 [2024-07-23 01:51:04.195694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.350 [2024-07-23 01:51:04.195731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:51.350 [2024-07-23 01:51:04.195750] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:51.350 [2024-07-23 01:51:04.195912] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:51.350 [2024-07-23 01:51:04.196127] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.350 [2024-07-23 01:51:04.196152] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.350 [2024-07-23 01:51:04.196168] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.350 [2024-07-23 01:51:04.198363] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:51.350 [2024-07-23 01:51:04.207790] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.350 [2024-07-23 01:51:04.208160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.350 [2024-07-23 01:51:04.208377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.350 [2024-07-23 01:51:04.208403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:51.350 [2024-07-23 01:51:04.208419] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:51.350 [2024-07-23 01:51:04.208594] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:51.350 [2024-07-23 01:51:04.208742] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.350 [2024-07-23 01:51:04.208763] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.350 [2024-07-23 01:51:04.208777] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.350 [2024-07-23 01:51:04.211015] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:51.350 [2024-07-23 01:51:04.220160] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.350 [2024-07-23 01:51:04.220492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.350 [2024-07-23 01:51:04.220717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.350 [2024-07-23 01:51:04.220746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:51.350 [2024-07-23 01:51:04.220763] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:51.350 [2024-07-23 01:51:04.220974] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:51.350 [2024-07-23 01:51:04.221146] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.350 [2024-07-23 01:51:04.221170] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.350 [2024-07-23 01:51:04.221187] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.350 [2024-07-23 01:51:04.223544] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:51.350 [2024-07-23 01:51:04.232554] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.350 [2024-07-23 01:51:04.232899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.350 [2024-07-23 01:51:04.233292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.350 [2024-07-23 01:51:04.233354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:51.350 [2024-07-23 01:51:04.233372] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:51.350 [2024-07-23 01:51:04.233538] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:51.350 [2024-07-23 01:51:04.233727] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.350 [2024-07-23 01:51:04.233748] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.350 [2024-07-23 01:51:04.233762] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.350 [2024-07-23 01:51:04.236062] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:51.350 [2024-07-23 01:51:04.245136] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.350 [2024-07-23 01:51:04.245564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.350 [2024-07-23 01:51:04.245829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.350 [2024-07-23 01:51:04.245856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:51.350 [2024-07-23 01:51:04.245873] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:51.350 [2024-07-23 01:51:04.246094] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:51.350 [2024-07-23 01:51:04.246247] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.350 [2024-07-23 01:51:04.246271] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.350 [2024-07-23 01:51:04.246287] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.350 [2024-07-23 01:51:04.248862] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:51.350 [2024-07-23 01:51:04.257840] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.350 [2024-07-23 01:51:04.258397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.350 [2024-07-23 01:51:04.258667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.350 [2024-07-23 01:51:04.258695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:51.350 [2024-07-23 01:51:04.258711] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:51.350 [2024-07-23 01:51:04.258874] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:51.350 [2024-07-23 01:51:04.259056] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.350 [2024-07-23 01:51:04.259081] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.350 [2024-07-23 01:51:04.259097] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.350 [2024-07-23 01:51:04.261447] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:51.350 [2024-07-23 01:51:04.270357] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.350 [2024-07-23 01:51:04.270763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.350 [2024-07-23 01:51:04.270987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.350 [2024-07-23 01:51:04.271047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:51.350 [2024-07-23 01:51:04.271065] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:51.350 [2024-07-23 01:51:04.271231] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:51.350 [2024-07-23 01:51:04.271454] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.350 [2024-07-23 01:51:04.271478] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.350 [2024-07-23 01:51:04.271494] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.350 [2024-07-23 01:51:04.273702] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:51.350 [2024-07-23 01:51:04.282894] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.351 [2024-07-23 01:51:04.283291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.351 [2024-07-23 01:51:04.283609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.351 [2024-07-23 01:51:04.283673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:51.351 [2024-07-23 01:51:04.283691] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:51.351 [2024-07-23 01:51:04.283847] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:51.351 [2024-07-23 01:51:04.284048] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.351 [2024-07-23 01:51:04.284073] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.351 [2024-07-23 01:51:04.284089] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.351 [2024-07-23 01:51:04.286583] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:51.351 [2024-07-23 01:51:04.295425] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.351 [2024-07-23 01:51:04.295869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.351 [2024-07-23 01:51:04.296217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.351 [2024-07-23 01:51:04.296268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:51.351 [2024-07-23 01:51:04.296286] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:51.351 [2024-07-23 01:51:04.296434] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:51.351 [2024-07-23 01:51:04.296604] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.351 [2024-07-23 01:51:04.296640] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.351 [2024-07-23 01:51:04.296657] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.351 [2024-07-23 01:51:04.298995] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:51.351 [2024-07-23 01:51:04.307822] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.351 [2024-07-23 01:51:04.308150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.351 [2024-07-23 01:51:04.308384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.351 [2024-07-23 01:51:04.308417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:51.351 [2024-07-23 01:51:04.308433] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:51.351 [2024-07-23 01:51:04.308665] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:51.351 [2024-07-23 01:51:04.308836] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.351 [2024-07-23 01:51:04.308860] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.351 [2024-07-23 01:51:04.308876] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.351 [2024-07-23 01:51:04.311080] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:51.351 [2024-07-23 01:51:04.320432] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.351 [2024-07-23 01:51:04.320801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.351 [2024-07-23 01:51:04.321051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.351 [2024-07-23 01:51:04.321102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:51.351 [2024-07-23 01:51:04.321120] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:51.351 [2024-07-23 01:51:04.321268] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:51.351 [2024-07-23 01:51:04.321438] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.351 [2024-07-23 01:51:04.321462] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.351 [2024-07-23 01:51:04.321478] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.351 [2024-07-23 01:51:04.323713] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:51.351 [2024-07-23 01:51:04.333211] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.351 [2024-07-23 01:51:04.333583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.351 [2024-07-23 01:51:04.333825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.351 [2024-07-23 01:51:04.333855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:51.351 [2024-07-23 01:51:04.333873] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:51.351 [2024-07-23 01:51:04.334020] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:51.351 [2024-07-23 01:51:04.334247] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.351 [2024-07-23 01:51:04.334271] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.351 [2024-07-23 01:51:04.334286] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.351 [2024-07-23 01:51:04.336750] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:51.351 [2024-07-23 01:51:04.345600] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.351 [2024-07-23 01:51:04.345957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.351 [2024-07-23 01:51:04.346156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.351 [2024-07-23 01:51:04.346186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:51.351 [2024-07-23 01:51:04.346204] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:51.351 [2024-07-23 01:51:04.346371] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:51.351 [2024-07-23 01:51:04.346559] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.351 [2024-07-23 01:51:04.346583] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.351 [2024-07-23 01:51:04.346599] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.351 [2024-07-23 01:51:04.348965] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:51.351 [2024-07-23 01:51:04.358253] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.351 [2024-07-23 01:51:04.358582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.351 [2024-07-23 01:51:04.358755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.351 [2024-07-23 01:51:04.358795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:51.351 [2024-07-23 01:51:04.358816] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:51.351 [2024-07-23 01:51:04.358972] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:51.351 [2024-07-23 01:51:04.359142] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.351 [2024-07-23 01:51:04.359166] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.351 [2024-07-23 01:51:04.359182] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.351 [2024-07-23 01:51:04.361632] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:51.351 [2024-07-23 01:51:04.370697] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.351 [2024-07-23 01:51:04.371040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.351 [2024-07-23 01:51:04.371257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.351 [2024-07-23 01:51:04.371282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:51.351 [2024-07-23 01:51:04.371298] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:51.351 [2024-07-23 01:51:04.371497] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:51.351 [2024-07-23 01:51:04.371693] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.351 [2024-07-23 01:51:04.371714] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.351 [2024-07-23 01:51:04.371727] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.351 [2024-07-23 01:51:04.374055] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:51.351 [2024-07-23 01:51:04.383125] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.351 [2024-07-23 01:51:04.383493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.351 [2024-07-23 01:51:04.383719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.351 [2024-07-23 01:51:04.383748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:51.351 [2024-07-23 01:51:04.383765] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:51.351 [2024-07-23 01:51:04.383940] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:51.351 [2024-07-23 01:51:04.384123] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.351 [2024-07-23 01:51:04.384147] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.351 [2024-07-23 01:51:04.384163] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.351 [2024-07-23 01:51:04.386684] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:51.352 [2024-07-23 01:51:04.395853] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.352 [2024-07-23 01:51:04.396229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.352 [2024-07-23 01:51:04.396458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.352 [2024-07-23 01:51:04.396484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:51.352 [2024-07-23 01:51:04.396516] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:51.352 [2024-07-23 01:51:04.396731] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:51.352 [2024-07-23 01:51:04.396917] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.352 [2024-07-23 01:51:04.396936] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.352 [2024-07-23 01:51:04.396949] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.352 [2024-07-23 01:51:04.399442] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:51.352 [2024-07-23 01:51:04.408322] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.352 [2024-07-23 01:51:04.408667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.352 [2024-07-23 01:51:04.408830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.352 [2024-07-23 01:51:04.408857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:51.352 [2024-07-23 01:51:04.408873] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:51.352 [2024-07-23 01:51:04.409063] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:51.352 [2024-07-23 01:51:04.409214] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.352 [2024-07-23 01:51:04.409238] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.352 [2024-07-23 01:51:04.409254] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.352 [2024-07-23 01:51:04.411441] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:51.352 [2024-07-23 01:51:04.420840] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.352 [2024-07-23 01:51:04.421209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.352 [2024-07-23 01:51:04.421476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.352 [2024-07-23 01:51:04.421505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:51.352 [2024-07-23 01:51:04.421523] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:51.352 [2024-07-23 01:51:04.421718] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:51.352 [2024-07-23 01:51:04.421888] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.352 [2024-07-23 01:51:04.421919] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.352 [2024-07-23 01:51:04.421935] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.352 [2024-07-23 01:51:04.424321] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:51.352 [2024-07-23 01:51:04.433415] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.352 [2024-07-23 01:51:04.433770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.352 [2024-07-23 01:51:04.433965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.352 [2024-07-23 01:51:04.433991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:51.352 [2024-07-23 01:51:04.434008] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:51.352 [2024-07-23 01:51:04.434228] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:51.352 [2024-07-23 01:51:04.434410] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.352 [2024-07-23 01:51:04.434434] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.352 [2024-07-23 01:51:04.434450] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.352 [2024-07-23 01:51:04.436603] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:51.612 [2024-07-23 01:51:04.445886] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.612 [2024-07-23 01:51:04.446314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.612 [2024-07-23 01:51:04.446524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.612 [2024-07-23 01:51:04.446555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:51.612 [2024-07-23 01:51:04.446573] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:51.612 [2024-07-23 01:51:04.446785] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:51.612 [2024-07-23 01:51:04.446896] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.612 [2024-07-23 01:51:04.446929] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.612 [2024-07-23 01:51:04.446943] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.612 [2024-07-23 01:51:04.449228] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:51.612 [2024-07-23 01:51:04.458410] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.612 [2024-07-23 01:51:04.458803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.612 [2024-07-23 01:51:04.459029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.612 [2024-07-23 01:51:04.459055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:51.612 [2024-07-23 01:51:04.459071] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:51.612 [2024-07-23 01:51:04.459250] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:51.612 [2024-07-23 01:51:04.459456] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.612 [2024-07-23 01:51:04.459481] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.612 [2024-07-23 01:51:04.459497] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.612 [2024-07-23 01:51:04.461827] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:51.612 [2024-07-23 01:51:04.471050] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.612 [2024-07-23 01:51:04.471368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.612 [2024-07-23 01:51:04.471557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.612 [2024-07-23 01:51:04.471583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:51.612 [2024-07-23 01:51:04.471599] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:51.612 [2024-07-23 01:51:04.471800] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:51.612 [2024-07-23 01:51:04.471964] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.612 [2024-07-23 01:51:04.471994] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.612 [2024-07-23 01:51:04.472011] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.612 [2024-07-23 01:51:04.474449] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:51.612 [2024-07-23 01:51:04.483505] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.612 [2024-07-23 01:51:04.483883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.612 [2024-07-23 01:51:04.484142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.612 [2024-07-23 01:51:04.484168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:51.612 [2024-07-23 01:51:04.484184] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:51.612 [2024-07-23 01:51:04.484350] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:51.612 [2024-07-23 01:51:04.484522] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.612 [2024-07-23 01:51:04.484546] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.612 [2024-07-23 01:51:04.484562] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.612 [2024-07-23 01:51:04.486944] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:51.612 [2024-07-23 01:51:04.495916] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.612 [2024-07-23 01:51:04.496267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.612 [2024-07-23 01:51:04.496456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.612 [2024-07-23 01:51:04.496485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:51.612 [2024-07-23 01:51:04.496503] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:51.612 [2024-07-23 01:51:04.496709] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:51.612 [2024-07-23 01:51:04.496879] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.612 [2024-07-23 01:51:04.496913] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.612 [2024-07-23 01:51:04.496926] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.612 [2024-07-23 01:51:04.499285] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:51.612 [2024-07-23 01:51:04.508628] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.612 [2024-07-23 01:51:04.508974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.612 [2024-07-23 01:51:04.509242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.612 [2024-07-23 01:51:04.509294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:51.612 [2024-07-23 01:51:04.509311] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:51.612 [2024-07-23 01:51:04.509477] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:51.612 [2024-07-23 01:51:04.509708] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.612 [2024-07-23 01:51:04.509729] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.612 [2024-07-23 01:51:04.509747] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.612 [2024-07-23 01:51:04.511912] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:51.612 [2024-07-23 01:51:04.521144] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.612 [2024-07-23 01:51:04.521480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.612 [2024-07-23 01:51:04.521662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.612 [2024-07-23 01:51:04.521706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:51.612 [2024-07-23 01:51:04.521723] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:51.612 [2024-07-23 01:51:04.521884] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:51.612 [2024-07-23 01:51:04.522035] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.612 [2024-07-23 01:51:04.522059] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.613 [2024-07-23 01:51:04.522075] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.613 [2024-07-23 01:51:04.524408] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:51.613 [2024-07-23 01:51:04.533810] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.613 [2024-07-23 01:51:04.534175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.613 [2024-07-23 01:51:04.534407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.613 [2024-07-23 01:51:04.534436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:51.613 [2024-07-23 01:51:04.534454] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:51.613 [2024-07-23 01:51:04.534649] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:51.613 [2024-07-23 01:51:04.534793] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.613 [2024-07-23 01:51:04.534814] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.613 [2024-07-23 01:51:04.534827] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.613 [2024-07-23 01:51:04.537109] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:51.613 [2024-07-23 01:51:04.546204] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.613 [2024-07-23 01:51:04.546632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.613 [2024-07-23 01:51:04.546828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.613 [2024-07-23 01:51:04.546858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:51.613 [2024-07-23 01:51:04.546876] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:51.613 [2024-07-23 01:51:04.547096] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:51.613 [2024-07-23 01:51:04.547302] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.613 [2024-07-23 01:51:04.547326] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.613 [2024-07-23 01:51:04.547342] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.613 [2024-07-23 01:51:04.549785] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:51.613 [2024-07-23 01:51:04.558799] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.613 [2024-07-23 01:51:04.559168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.613 [2024-07-23 01:51:04.559506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.613 [2024-07-23 01:51:04.559566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:51.613 [2024-07-23 01:51:04.559584] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:51.613 [2024-07-23 01:51:04.559764] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:51.613 [2024-07-23 01:51:04.559920] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.613 [2024-07-23 01:51:04.559957] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.613 [2024-07-23 01:51:04.559973] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.613 [2024-07-23 01:51:04.562326] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:51.613 [2024-07-23 01:51:04.571533] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.613 [2024-07-23 01:51:04.571890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.613 [2024-07-23 01:51:04.572202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.613 [2024-07-23 01:51:04.572268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:51.613 [2024-07-23 01:51:04.572286] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:51.613 [2024-07-23 01:51:04.572433] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:51.613 [2024-07-23 01:51:04.572631] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.613 [2024-07-23 01:51:04.572655] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.613 [2024-07-23 01:51:04.572671] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.613 [2024-07-23 01:51:04.574938] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:51.613 [2024-07-23 01:51:04.584211] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.613 [2024-07-23 01:51:04.584586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.613 [2024-07-23 01:51:04.584754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.613 [2024-07-23 01:51:04.584780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:51.613 [2024-07-23 01:51:04.584797] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:51.613 [2024-07-23 01:51:04.584973] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:51.613 [2024-07-23 01:51:04.585180] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.613 [2024-07-23 01:51:04.585206] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.613 [2024-07-23 01:51:04.585222] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.613 [2024-07-23 01:51:04.587586] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:51.613 [2024-07-23 01:51:04.596799] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.613 [2024-07-23 01:51:04.597135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.613 [2024-07-23 01:51:04.597498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.613 [2024-07-23 01:51:04.597549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:51.613 [2024-07-23 01:51:04.597567] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:51.613 [2024-07-23 01:51:04.597747] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:51.613 [2024-07-23 01:51:04.597872] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.613 [2024-07-23 01:51:04.597891] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.613 [2024-07-23 01:51:04.597904] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.613 [2024-07-23 01:51:04.600215] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:51.613 [2024-07-23 01:51:04.609470] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.613 [2024-07-23 01:51:04.609851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.613 [2024-07-23 01:51:04.610061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.613 [2024-07-23 01:51:04.610091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:51.613 [2024-07-23 01:51:04.610109] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:51.613 [2024-07-23 01:51:04.610275] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:51.613 [2024-07-23 01:51:04.610444] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.613 [2024-07-23 01:51:04.610469] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.613 [2024-07-23 01:51:04.610485] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.613 [2024-07-23 01:51:04.612808] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:51.613 [2024-07-23 01:51:04.622006] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.613 [2024-07-23 01:51:04.622324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.613 [2024-07-23 01:51:04.622458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.613 [2024-07-23 01:51:04.622481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:51.613 [2024-07-23 01:51:04.622496] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:51.613 [2024-07-23 01:51:04.622630] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:51.613 [2024-07-23 01:51:04.622784] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.613 [2024-07-23 01:51:04.622804] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.613 [2024-07-23 01:51:04.622818] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.613 [2024-07-23 01:51:04.624949] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:51.613 [2024-07-23 01:51:04.634491] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:51.613 [2024-07-23 01:51:04.634853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.613 [2024-07-23 01:51:04.635218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.613 [2024-07-23 01:51:04.635272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:51.613 [2024-07-23 01:51:04.635289] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:51.613 [2024-07-23 01:51:04.635438] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:51.613 [2024-07-23 01:51:04.635638] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:51.614 [2024-07-23 01:51:04.635662] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:51.614 [2024-07-23 01:51:04.635692] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:51.614 [2024-07-23 01:51:04.637960] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:51.614 [2024-07-23 01:51:04.647059] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:51.614 [2024-07-23 01:51:04.647388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.614 [2024-07-23 01:51:04.647575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.614 [2024-07-23 01:51:04.647604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:51.614 [2024-07-23 01:51:04.647633] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:51.614 [2024-07-23 01:51:04.647830] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:51.614 [2024-07-23 01:51:04.648062] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:51.614 [2024-07-23 01:51:04.648088] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:51.614 [2024-07-23 01:51:04.648105] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:51.614 [2024-07-23 01:51:04.650364] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:51.614 [2024-07-23 01:51:04.659691] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:51.614 [2024-07-23 01:51:04.660064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.614 [2024-07-23 01:51:04.660251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.614 [2024-07-23 01:51:04.660279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:51.614 [2024-07-23 01:51:04.660297] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:51.614 [2024-07-23 01:51:04.660517] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:51.614 [2024-07-23 01:51:04.660762] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:51.614 [2024-07-23 01:51:04.660786] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:51.614 [2024-07-23 01:51:04.660801] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:51.614 [2024-07-23 01:51:04.663163] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:51.614 [2024-07-23 01:51:04.672165] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:51.614 [2024-07-23 01:51:04.672652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.614 [2024-07-23 01:51:04.672858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.614 [2024-07-23 01:51:04.672883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:51.614 [2024-07-23 01:51:04.672899] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:51.614 [2024-07-23 01:51:04.673038] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:51.614 [2024-07-23 01:51:04.673278] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:51.614 [2024-07-23 01:51:04.673303] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:51.614 [2024-07-23 01:51:04.673320] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:51.614 [2024-07-23 01:51:04.675514] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:51.614 [2024-07-23 01:51:04.684726] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:51.614 [2024-07-23 01:51:04.685148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.614 [2024-07-23 01:51:04.685387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.614 [2024-07-23 01:51:04.685414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:51.614 [2024-07-23 01:51:04.685430] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:51.614 [2024-07-23 01:51:04.685586] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:51.614 [2024-07-23 01:51:04.685819] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:51.614 [2024-07-23 01:51:04.685845] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:51.614 [2024-07-23 01:51:04.685862] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:51.614 [2024-07-23 01:51:04.688175] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:51.614 [2024-07-23 01:51:04.697291] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:51.614 [2024-07-23 01:51:04.697676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.614 [2024-07-23 01:51:04.697874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.614 [2024-07-23 01:51:04.697916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:51.614 [2024-07-23 01:51:04.697932] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:51.614 [2024-07-23 01:51:04.698106] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:51.614 [2024-07-23 01:51:04.698284] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:51.614 [2024-07-23 01:51:04.698309] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:51.614 [2024-07-23 01:51:04.698326] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:51.614 [2024-07-23 01:51:04.700468] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:51.873 [2024-07-23 01:51:04.709813] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:51.874 [2024-07-23 01:51:04.710158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.874 [2024-07-23 01:51:04.710329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.874 [2024-07-23 01:51:04.710355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:51.874 [2024-07-23 01:51:04.710376] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:51.874 [2024-07-23 01:51:04.710545] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:51.874 [2024-07-23 01:51:04.710706] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:51.874 [2024-07-23 01:51:04.710729] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:51.874 [2024-07-23 01:51:04.710743] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:51.874 [2024-07-23 01:51:04.713072] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:51.874 [2024-07-23 01:51:04.722322] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:51.874 [2024-07-23 01:51:04.722745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.874 [2024-07-23 01:51:04.722937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.874 [2024-07-23 01:51:04.722978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:51.874 [2024-07-23 01:51:04.722996] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:51.874 [2024-07-23 01:51:04.723144] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:51.874 [2024-07-23 01:51:04.723313] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:51.874 [2024-07-23 01:51:04.723338] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:51.874 [2024-07-23 01:51:04.723355] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:51.874 [2024-07-23 01:51:04.725810] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:51.874 [2024-07-23 01:51:04.734963] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:51.874 [2024-07-23 01:51:04.735340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.874 [2024-07-23 01:51:04.735562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.874 [2024-07-23 01:51:04.735589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:51.874 [2024-07-23 01:51:04.735639] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:51.874 [2024-07-23 01:51:04.735799] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:51.874 [2024-07-23 01:51:04.735998] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:51.874 [2024-07-23 01:51:04.736024] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:51.874 [2024-07-23 01:51:04.736040] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:51.874 [2024-07-23 01:51:04.738251] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:51.874 [2024-07-23 01:51:04.747436] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:51.874 [2024-07-23 01:51:04.747772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.874 [2024-07-23 01:51:04.747980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.874 [2024-07-23 01:51:04.748005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:51.874 [2024-07-23 01:51:04.748021] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:51.874 [2024-07-23 01:51:04.748155] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:51.874 [2024-07-23 01:51:04.748393] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:51.874 [2024-07-23 01:51:04.748418] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:51.874 [2024-07-23 01:51:04.748434] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:51.874 [2024-07-23 01:51:04.750681] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:51.874 [2024-07-23 01:51:04.760015] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:51.874 [2024-07-23 01:51:04.760346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.874 [2024-07-23 01:51:04.760538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.874 [2024-07-23 01:51:04.760568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:51.874 [2024-07-23 01:51:04.760587] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:51.874 [2024-07-23 01:51:04.760780] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:51.874 [2024-07-23 01:51:04.760914] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:51.874 [2024-07-23 01:51:04.760938] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:51.874 [2024-07-23 01:51:04.760954] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:51.874 [2024-07-23 01:51:04.763372] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:51.874 [2024-07-23 01:51:04.772537] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:51.874 [2024-07-23 01:51:04.772901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.874 [2024-07-23 01:51:04.773268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.874 [2024-07-23 01:51:04.773328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:51.874 [2024-07-23 01:51:04.773345] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:51.874 [2024-07-23 01:51:04.773510] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:51.874 [2024-07-23 01:51:04.773699] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:51.874 [2024-07-23 01:51:04.773724] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:51.874 [2024-07-23 01:51:04.773741] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:51.874 [2024-07-23 01:51:04.775923] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:51.874 [2024-07-23 01:51:04.785156] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:51.874 [2024-07-23 01:51:04.785552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.874 [2024-07-23 01:51:04.785740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.874 [2024-07-23 01:51:04.785767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:51.874 [2024-07-23 01:51:04.785784] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:51.874 [2024-07-23 01:51:04.785957] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:51.874 [2024-07-23 01:51:04.786127] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:51.874 [2024-07-23 01:51:04.786151] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:51.874 [2024-07-23 01:51:04.786167] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:51.874 [2024-07-23 01:51:04.788478] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:51.874 [2024-07-23 01:51:04.797804] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:51.874 [2024-07-23 01:51:04.798204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.874 [2024-07-23 01:51:04.798564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.874 [2024-07-23 01:51:04.798621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:51.874 [2024-07-23 01:51:04.798655] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:51.874 [2024-07-23 01:51:04.798765] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:51.874 [2024-07-23 01:51:04.798952] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:51.874 [2024-07-23 01:51:04.798978] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:51.874 [2024-07-23 01:51:04.798993] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:51.874 [2024-07-23 01:51:04.801541] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:51.874 [2024-07-23 01:51:04.810232] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:51.874 [2024-07-23 01:51:04.810688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.874 [2024-07-23 01:51:04.810862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.874 [2024-07-23 01:51:04.810898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:51.874 [2024-07-23 01:51:04.810915] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:51.874 [2024-07-23 01:51:04.811128] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:51.874 [2024-07-23 01:51:04.811263] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:51.874 [2024-07-23 01:51:04.811299] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:51.874 [2024-07-23 01:51:04.811312] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:51.875 [2024-07-23 01:51:04.813529] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:51.875 [2024-07-23 01:51:04.822820] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:51.875 [2024-07-23 01:51:04.823186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.875 [2024-07-23 01:51:04.823450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.875 [2024-07-23 01:51:04.823498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:51.875 [2024-07-23 01:51:04.823518] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:51.875 [2024-07-23 01:51:04.823713] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:51.875 [2024-07-23 01:51:04.823856] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:51.875 [2024-07-23 01:51:04.823878] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:51.875 [2024-07-23 01:51:04.823900] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:51.875 [2024-07-23 01:51:04.826050] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:51.875 [2024-07-23 01:51:04.835571] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:51.875 [2024-07-23 01:51:04.835958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.875 [2024-07-23 01:51:04.836182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.875 [2024-07-23 01:51:04.836226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:51.875 [2024-07-23 01:51:04.836244] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:51.875 [2024-07-23 01:51:04.836419] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:51.875 [2024-07-23 01:51:04.836604] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:51.875 [2024-07-23 01:51:04.836636] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:51.875 [2024-07-23 01:51:04.836665] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:51.875 [2024-07-23 01:51:04.839092] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:51.875 [2024-07-23 01:51:04.848247] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:51.875 [2024-07-23 01:51:04.848639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.875 [2024-07-23 01:51:04.848790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.875 [2024-07-23 01:51:04.848817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:51.875 [2024-07-23 01:51:04.848834] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:51.875 [2024-07-23 01:51:04.849012] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:51.875 [2024-07-23 01:51:04.849195] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:51.875 [2024-07-23 01:51:04.849233] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:51.875 [2024-07-23 01:51:04.849246] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:51.875 [2024-07-23 01:51:04.851685] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:51.875 [2024-07-23 01:51:04.860793] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:51.875 [2024-07-23 01:51:04.861193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.875 [2024-07-23 01:51:04.861366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.875 [2024-07-23 01:51:04.861394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:51.875 [2024-07-23 01:51:04.861410] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:51.875 [2024-07-23 01:51:04.861577] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:51.875 [2024-07-23 01:51:04.861737] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:51.875 [2024-07-23 01:51:04.861761] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:51.875 [2024-07-23 01:51:04.861781] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:51.875 [2024-07-23 01:51:04.864204] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:51.875 [2024-07-23 01:51:04.873275] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:51.875 [2024-07-23 01:51:04.873668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.875 [2024-07-23 01:51:04.873816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.875 [2024-07-23 01:51:04.873845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:51.875 [2024-07-23 01:51:04.873862] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:51.875 [2024-07-23 01:51:04.874075] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:51.875 [2024-07-23 01:51:04.874281] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:51.875 [2024-07-23 01:51:04.874306] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:51.875 [2024-07-23 01:51:04.874323] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:51.875 [2024-07-23 01:51:04.876634] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:51.875 [2024-07-23 01:51:04.885855] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:51.875 [2024-07-23 01:51:04.886219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.875 [2024-07-23 01:51:04.886433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.875 [2024-07-23 01:51:04.886464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:51.875 [2024-07-23 01:51:04.886482] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:51.875 [2024-07-23 01:51:04.886680] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:51.875 [2024-07-23 01:51:04.886834] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:51.875 [2024-07-23 01:51:04.886856] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:51.875 [2024-07-23 01:51:04.886876] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:51.875 [2024-07-23 01:51:04.889355] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:51.875 [2024-07-23 01:51:04.898329] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:51.875 [2024-07-23 01:51:04.898705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.875 [2024-07-23 01:51:04.898885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.875 [2024-07-23 01:51:04.898927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:51.875 [2024-07-23 01:51:04.898946] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:51.875 [2024-07-23 01:51:04.899113] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:51.875 [2024-07-23 01:51:04.899301] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:51.875 [2024-07-23 01:51:04.899326] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:51.875 [2024-07-23 01:51:04.899347] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:51.875 [2024-07-23 01:51:04.901634] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:51.875 [2024-07-23 01:51:04.910793] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:51.875 [2024-07-23 01:51:04.911247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.875 [2024-07-23 01:51:04.911461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.875 [2024-07-23 01:51:04.911492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:51.875 [2024-07-23 01:51:04.911510] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:51.875 [2024-07-23 01:51:04.911732] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:51.875 [2024-07-23 01:51:04.911853] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:51.875 [2024-07-23 01:51:04.911874] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:51.875 [2024-07-23 01:51:04.911889] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:51.875 [2024-07-23 01:51:04.914071] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:51.875 [2024-07-23 01:51:04.923138] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:51.875 [2024-07-23 01:51:04.923529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.875 [2024-07-23 01:51:04.923695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.875 [2024-07-23 01:51:04.923726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:51.875 [2024-07-23 01:51:04.923744] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:51.875 [2024-07-23 01:51:04.923910] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:51.875 [2024-07-23 01:51:04.924098] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:51.875 [2024-07-23 01:51:04.924123] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:51.875 [2024-07-23 01:51:04.924139] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:51.876 [2024-07-23 01:51:04.926488] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:51.876 [2024-07-23 01:51:04.935547] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:51.876 [2024-07-23 01:51:04.935932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.876 [2024-07-23 01:51:04.936172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.876 [2024-07-23 01:51:04.936219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:51.876 [2024-07-23 01:51:04.936237] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:51.876 [2024-07-23 01:51:04.936458] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:51.876 [2024-07-23 01:51:04.936694] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:51.876 [2024-07-23 01:51:04.936720] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:51.876 [2024-07-23 01:51:04.936737] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:51.876 [2024-07-23 01:51:04.939017] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:51.876 [2024-07-23 01:51:04.948162] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:51.876 [2024-07-23 01:51:04.948559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.876 [2024-07-23 01:51:04.948780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.876 [2024-07-23 01:51:04.948812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:51.876 [2024-07-23 01:51:04.948831] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:51.876 [2024-07-23 01:51:04.949069] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:51.876 [2024-07-23 01:51:04.949241] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:51.876 [2024-07-23 01:51:04.949267] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:51.876 [2024-07-23 01:51:04.949283] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:51.876 [2024-07-23 01:51:04.951566] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:51.876 [2024-07-23 01:51:04.960893] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:51.876 [2024-07-23 01:51:04.961254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.876 [2024-07-23 01:51:04.961451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.876 [2024-07-23 01:51:04.961482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:51.876 [2024-07-23 01:51:04.961500] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:51.876 [2024-07-23 01:51:04.961681] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:51.876 [2024-07-23 01:51:04.961815] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:51.876 [2024-07-23 01:51:04.961839] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:51.876 [2024-07-23 01:51:04.961855] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:51.876 [2024-07-23 01:51:04.964191] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:52.136 [2024-07-23 01:51:04.973382] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:52.136 [2024-07-23 01:51:04.973727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.136 [2024-07-23 01:51:04.973917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.136 [2024-07-23 01:51:04.973948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:52.136 [2024-07-23 01:51:04.973966] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:52.136 [2024-07-23 01:51:04.974133] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:52.136 [2024-07-23 01:51:04.974302] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:52.136 [2024-07-23 01:51:04.974327] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:52.136 [2024-07-23 01:51:04.974344] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:52.136 [2024-07-23 01:51:04.976516] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:52.136 [2024-07-23 01:51:04.985874] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:52.136 [2024-07-23 01:51:04.986206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.136 [2024-07-23 01:51:04.986518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.136 [2024-07-23 01:51:04.986570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:52.136 [2024-07-23 01:51:04.986588] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:52.136 [2024-07-23 01:51:04.986729] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:52.136 [2024-07-23 01:51:04.986899] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:52.136 [2024-07-23 01:51:04.986923] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:52.136 [2024-07-23 01:51:04.986937] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:52.136 [2024-07-23 01:51:04.989287] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:52.136 [2024-07-23 01:51:04.998495] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:52.136 [2024-07-23 01:51:04.998868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.136 [2024-07-23 01:51:04.999082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.136 [2024-07-23 01:51:04.999112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:52.136 [2024-07-23 01:51:04.999130] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:52.136 [2024-07-23 01:51:04.999332] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:52.136 [2024-07-23 01:51:04.999446] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:52.136 [2024-07-23 01:51:04.999471] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:52.136 [2024-07-23 01:51:04.999487] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:52.136 [2024-07-23 01:51:05.001689] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:52.136 [2024-07-23 01:51:05.011050] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:52.136 [2024-07-23 01:51:05.011359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.136 [2024-07-23 01:51:05.011525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.136 [2024-07-23 01:51:05.011551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:52.136 [2024-07-23 01:51:05.011567] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:52.136 [2024-07-23 01:51:05.011758] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:52.136 [2024-07-23 01:51:05.011924] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:52.136 [2024-07-23 01:51:05.011950] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:52.136 [2024-07-23 01:51:05.011966] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:52.136 [2024-07-23 01:51:05.014303] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:52.136 [2024-07-23 01:51:05.023607] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:52.136 [2024-07-23 01:51:05.023947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.136 [2024-07-23 01:51:05.024159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.136 [2024-07-23 01:51:05.024186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:52.136 [2024-07-23 01:51:05.024202] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:52.136 [2024-07-23 01:51:05.024400] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:52.136 [2024-07-23 01:51:05.024553] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:52.136 [2024-07-23 01:51:05.024579] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:52.136 [2024-07-23 01:51:05.024595] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:52.136 [2024-07-23 01:51:05.027048] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:52.136 [2024-07-23 01:51:05.036149] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:52.136 [2024-07-23 01:51:05.036575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.136 [2024-07-23 01:51:05.036777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.136 [2024-07-23 01:51:05.036807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:52.136 [2024-07-23 01:51:05.036826] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:52.136 [2024-07-23 01:51:05.037010] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:52.136 [2024-07-23 01:51:05.037144] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:52.136 [2024-07-23 01:51:05.037168] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:52.136 [2024-07-23 01:51:05.037184] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:52.136 [2024-07-23 01:51:05.039540] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:52.136 [2024-07-23 01:51:05.048803] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:52.136 [2024-07-23 01:51:05.049350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.136 [2024-07-23 01:51:05.049587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.136 [2024-07-23 01:51:05.049625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:52.136 [2024-07-23 01:51:05.049646] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:52.136 [2024-07-23 01:51:05.049848] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:52.136 [2024-07-23 01:51:05.050000] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:52.136 [2024-07-23 01:51:05.050024] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:52.136 [2024-07-23 01:51:05.050041] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:52.136 [2024-07-23 01:51:05.052252] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:52.136 [2024-07-23 01:51:05.061379] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:52.136 [2024-07-23 01:51:05.061714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.136 [2024-07-23 01:51:05.061910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.136 [2024-07-23 01:51:05.061940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:52.136 [2024-07-23 01:51:05.061964] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:52.136 [2024-07-23 01:51:05.062132] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:52.136 [2024-07-23 01:51:05.062266] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:52.136 [2024-07-23 01:51:05.062290] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:52.136 [2024-07-23 01:51:05.062307] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:52.136 [2024-07-23 01:51:05.064753] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:52.137 [2024-07-23 01:51:05.074027] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:52.137 [2024-07-23 01:51:05.074512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.137 [2024-07-23 01:51:05.074749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.137 [2024-07-23 01:51:05.074780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:52.137 [2024-07-23 01:51:05.074798] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:52.137 [2024-07-23 01:51:05.074963] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:52.137 [2024-07-23 01:51:05.075132] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:52.137 [2024-07-23 01:51:05.075158] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:52.137 [2024-07-23 01:51:05.075174] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:52.137 [2024-07-23 01:51:05.077421] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:52.137 [2024-07-23 01:51:05.086679] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:52.137 [2024-07-23 01:51:05.087133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.137 [2024-07-23 01:51:05.087325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.137 [2024-07-23 01:51:05.087367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:52.137 [2024-07-23 01:51:05.087382] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:52.137 [2024-07-23 01:51:05.087584] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:52.137 [2024-07-23 01:51:05.087782] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:52.137 [2024-07-23 01:51:05.087808] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:52.137 [2024-07-23 01:51:05.087825] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:52.137 [2024-07-23 01:51:05.090040] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:52.137 [2024-07-23 01:51:05.099109] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:52.137 [2024-07-23 01:51:05.099627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.137 [2024-07-23 01:51:05.099871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.137 [2024-07-23 01:51:05.099898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:52.137 [2024-07-23 01:51:05.099934] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:52.137 [2024-07-23 01:51:05.100121] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:52.137 [2024-07-23 01:51:05.100281] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:52.137 [2024-07-23 01:51:05.100306] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:52.137 [2024-07-23 01:51:05.100323] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:52.137 [2024-07-23 01:51:05.102651] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:52.137 [2024-07-23 01:51:05.111598] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:52.137 [2024-07-23 01:51:05.111942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.137 [2024-07-23 01:51:05.112154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.137 [2024-07-23 01:51:05.112184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:52.137 [2024-07-23 01:51:05.112202] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:52.137 [2024-07-23 01:51:05.112405] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:52.137 [2024-07-23 01:51:05.112556] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:52.137 [2024-07-23 01:51:05.112581] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:52.137 [2024-07-23 01:51:05.112597] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:52.137 [2024-07-23 01:51:05.114958] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:52.137 [2024-07-23 01:51:05.124364] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:52.137 [2024-07-23 01:51:05.124779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.137 [2024-07-23 01:51:05.125096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.137 [2024-07-23 01:51:05.125152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:52.137 [2024-07-23 01:51:05.125172] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:52.137 [2024-07-23 01:51:05.125338] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:52.137 [2024-07-23 01:51:05.125545] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:52.137 [2024-07-23 01:51:05.125570] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:52.137 [2024-07-23 01:51:05.125586] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:52.137 [2024-07-23 01:51:05.128002] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:52.137 [2024-07-23 01:51:05.137076] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:52.137 [2024-07-23 01:51:05.137646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.137 [2024-07-23 01:51:05.137857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.137 [2024-07-23 01:51:05.137886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:52.137 [2024-07-23 01:51:05.137905] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:52.137 [2024-07-23 01:51:05.138076] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:52.137 [2024-07-23 01:51:05.138264] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:52.137 [2024-07-23 01:51:05.138289] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:52.137 [2024-07-23 01:51:05.138306] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:52.137 [2024-07-23 01:51:05.140760] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:52.137 [2024-07-23 01:51:05.149778] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:52.137 [2024-07-23 01:51:05.150254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.137 [2024-07-23 01:51:05.150588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.137 [2024-07-23 01:51:05.150682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:52.137 [2024-07-23 01:51:05.150702] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:52.137 [2024-07-23 01:51:05.150851] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:52.137 [2024-07-23 01:51:05.151039] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:52.137 [2024-07-23 01:51:05.151063] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:52.137 [2024-07-23 01:51:05.151080] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:52.137 [2024-07-23 01:51:05.153177] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:52.137 [2024-07-23 01:51:05.162313] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:52.137 [2024-07-23 01:51:05.162676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.137 [2024-07-23 01:51:05.162875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.137 [2024-07-23 01:51:05.162900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:52.137 [2024-07-23 01:51:05.162916] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:52.137 [2024-07-23 01:51:05.163076] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:52.137 [2024-07-23 01:51:05.163259] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:52.137 [2024-07-23 01:51:05.163284] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:52.137 [2024-07-23 01:51:05.163300] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:52.137 [2024-07-23 01:51:05.165487] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:52.137 [2024-07-23 01:51:05.174912] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:52.137 [2024-07-23 01:51:05.175337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.137 [2024-07-23 01:51:05.175554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.137 [2024-07-23 01:51:05.175584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:52.137 [2024-07-23 01:51:05.175601] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:52.137 [2024-07-23 01:51:05.175778] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:52.137 [2024-07-23 01:51:05.175956] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:52.137 [2024-07-23 01:51:05.175982] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:52.137 [2024-07-23 01:51:05.175999] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:52.137 [2024-07-23 01:51:05.178477] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:52.138 [2024-07-23 01:51:05.187318] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:52.138 [2024-07-23 01:51:05.187682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.138 [2024-07-23 01:51:05.187867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.138 [2024-07-23 01:51:05.187896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:52.138 [2024-07-23 01:51:05.187913] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:52.138 [2024-07-23 01:51:05.188080] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:52.138 [2024-07-23 01:51:05.188249] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:52.138 [2024-07-23 01:51:05.188274] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:52.138 [2024-07-23 01:51:05.188291] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:52.138 [2024-07-23 01:51:05.190680] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:52.138 [2024-07-23 01:51:05.200039] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:52.138 [2024-07-23 01:51:05.200459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.138 [2024-07-23 01:51:05.200718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.138 [2024-07-23 01:51:05.200750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:52.138 [2024-07-23 01:51:05.200769] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:52.138 [2024-07-23 01:51:05.200991] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:52.138 [2024-07-23 01:51:05.201143] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:52.138 [2024-07-23 01:51:05.201168] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:52.138 [2024-07-23 01:51:05.201184] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:52.138 [2024-07-23 01:51:05.203304] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:52.138 [2024-07-23 01:51:05.212567] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:52.138 [2024-07-23 01:51:05.212894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.138 [2024-07-23 01:51:05.213153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.138 [2024-07-23 01:51:05.213198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:52.138 [2024-07-23 01:51:05.213216] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:52.138 [2024-07-23 01:51:05.213383] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:52.138 [2024-07-23 01:51:05.213535] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:52.138 [2024-07-23 01:51:05.213565] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:52.138 [2024-07-23 01:51:05.213581] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:52.138 [2024-07-23 01:51:05.215978] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:52.138 [2024-07-23 01:51:05.225057] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:52.138 [2024-07-23 01:51:05.225605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.138 [2024-07-23 01:51:05.225856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.138 [2024-07-23 01:51:05.225886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:52.138 [2024-07-23 01:51:05.225905] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:52.138 [2024-07-23 01:51:05.226126] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:52.138 [2024-07-23 01:51:05.226314] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:52.138 [2024-07-23 01:51:05.226339] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:52.138 [2024-07-23 01:51:05.226356] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:52.138 [2024-07-23 01:51:05.228649] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:52.397 [2024-07-23 01:51:05.237638] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:52.397 [2024-07-23 01:51:05.237988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.397 [2024-07-23 01:51:05.238255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.397 [2024-07-23 01:51:05.238286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:52.397 [2024-07-23 01:51:05.238304] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:52.398 [2024-07-23 01:51:05.238489] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:52.398 [2024-07-23 01:51:05.238655] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:52.398 [2024-07-23 01:51:05.238681] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:52.398 [2024-07-23 01:51:05.238697] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:52.398 [2024-07-23 01:51:05.241106] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:52.398 [2024-07-23 01:51:05.250193] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:52.398 [2024-07-23 01:51:05.250567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.398 [2024-07-23 01:51:05.250780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.398 [2024-07-23 01:51:05.250812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:52.398 [2024-07-23 01:51:05.250830] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:52.398 [2024-07-23 01:51:05.250979] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:52.398 [2024-07-23 01:51:05.251167] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:52.398 [2024-07-23 01:51:05.251190] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:52.398 [2024-07-23 01:51:05.251212] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:52.398 [2024-07-23 01:51:05.253457] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:52.398 [2024-07-23 01:51:05.262937] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:52.398 [2024-07-23 01:51:05.263378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.398 [2024-07-23 01:51:05.263599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.398 [2024-07-23 01:51:05.263639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:52.398 [2024-07-23 01:51:05.263659] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:52.398 [2024-07-23 01:51:05.263789] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:52.398 [2024-07-23 01:51:05.263976] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:52.398 [2024-07-23 01:51:05.264000] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:52.398 [2024-07-23 01:51:05.264016] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:52.398 [2024-07-23 01:51:05.266333] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:52.398 [2024-07-23 01:51:05.275683] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:52.398 [2024-07-23 01:51:05.275996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.398 [2024-07-23 01:51:05.276203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.398 [2024-07-23 01:51:05.276240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:52.398 [2024-07-23 01:51:05.276274] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:52.398 [2024-07-23 01:51:05.276476] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:52.398 [2024-07-23 01:51:05.276658] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:52.398 [2024-07-23 01:51:05.276684] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:52.398 [2024-07-23 01:51:05.276701] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:52.398 [2024-07-23 01:51:05.278815] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:52.398 [2024-07-23 01:51:05.288187] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:52.398 [2024-07-23 01:51:05.288684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.398 [2024-07-23 01:51:05.288875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.398 [2024-07-23 01:51:05.288905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:52.398 [2024-07-23 01:51:05.288923] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:52.398 [2024-07-23 01:51:05.289072] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:52.398 [2024-07-23 01:51:05.289259] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:52.398 [2024-07-23 01:51:05.289284] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:52.398 [2024-07-23 01:51:05.289301] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:52.398 [2024-07-23 01:51:05.291652] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:52.398 [2024-07-23 01:51:05.300777] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:52.398 [2024-07-23 01:51:05.301170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.398 [2024-07-23 01:51:05.301362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.398 [2024-07-23 01:51:05.301392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:52.398 [2024-07-23 01:51:05.301410] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:52.398 [2024-07-23 01:51:05.301558] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:52.398 [2024-07-23 01:51:05.301722] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:52.398 [2024-07-23 01:51:05.301746] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:52.398 [2024-07-23 01:51:05.301762] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:52.398 [2024-07-23 01:51:05.304002] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:52.398 [2024-07-23 01:51:05.313251] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:52.398 [2024-07-23 01:51:05.313634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.398 [2024-07-23 01:51:05.313786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.398 [2024-07-23 01:51:05.313814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:52.398 [2024-07-23 01:51:05.313830] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:52.398 [2024-07-23 01:51:05.313954] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:52.398 [2024-07-23 01:51:05.314123] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:52.398 [2024-07-23 01:51:05.314146] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:52.398 [2024-07-23 01:51:05.314162] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:52.398 [2024-07-23 01:51:05.316666] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:52.398 [2024-07-23 01:51:05.325992] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:52.398 [2024-07-23 01:51:05.326348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.398 [2024-07-23 01:51:05.326502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.398 [2024-07-23 01:51:05.326532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:52.398 [2024-07-23 01:51:05.326550] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:52.398 [2024-07-23 01:51:05.326746] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:52.398 [2024-07-23 01:51:05.326952] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:52.398 [2024-07-23 01:51:05.326977] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:52.398 [2024-07-23 01:51:05.326993] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:52.398 [2024-07-23 01:51:05.329342] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:52.398 [2024-07-23 01:51:05.338626] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:52.398 [2024-07-23 01:51:05.339004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.398 [2024-07-23 01:51:05.339170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.398 [2024-07-23 01:51:05.339202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:52.398 [2024-07-23 01:51:05.339220] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:52.398 [2024-07-23 01:51:05.339387] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:52.398 [2024-07-23 01:51:05.339575] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:52.398 [2024-07-23 01:51:05.339599] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:52.398 [2024-07-23 01:51:05.339626] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:52.398 [2024-07-23 01:51:05.341816] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:52.398 [2024-07-23 01:51:05.351145] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:52.398 [2024-07-23 01:51:05.351493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.398 [2024-07-23 01:51:05.351737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.399 [2024-07-23 01:51:05.351769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:52.399 [2024-07-23 01:51:05.351788] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:52.399 [2024-07-23 01:51:05.351918] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:52.399 [2024-07-23 01:51:05.352069] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:52.399 [2024-07-23 01:51:05.352093] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:52.399 [2024-07-23 01:51:05.352108] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:52.399 [2024-07-23 01:51:05.354400] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:52.399 [2024-07-23 01:51:05.363887] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:52.399 [2024-07-23 01:51:05.364262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.399 [2024-07-23 01:51:05.364452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.399 [2024-07-23 01:51:05.364482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:52.399 [2024-07-23 01:51:05.364500] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:52.399 [2024-07-23 01:51:05.364677] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:52.399 [2024-07-23 01:51:05.364864] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:52.399 [2024-07-23 01:51:05.364889] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:52.399 [2024-07-23 01:51:05.364905] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:52.399 [2024-07-23 01:51:05.367252] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:52.399 [2024-07-23 01:51:05.376534] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:52.399 [2024-07-23 01:51:05.376877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.399 [2024-07-23 01:51:05.377134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.399 [2024-07-23 01:51:05.377174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:52.399 [2024-07-23 01:51:05.377191] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:52.399 [2024-07-23 01:51:05.377397] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:52.399 [2024-07-23 01:51:05.377567] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:52.399 [2024-07-23 01:51:05.377592] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:52.399 [2024-07-23 01:51:05.377609] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:52.399 [2024-07-23 01:51:05.379998] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:52.399 [2024-07-23 01:51:05.389203] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:52.399 [2024-07-23 01:51:05.389564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.399 [2024-07-23 01:51:05.389798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.399 [2024-07-23 01:51:05.389829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:52.399 [2024-07-23 01:51:05.389847] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:52.399 [2024-07-23 01:51:05.390013] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:52.399 [2024-07-23 01:51:05.390182] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:52.399 [2024-07-23 01:51:05.390207] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:52.399 [2024-07-23 01:51:05.390223] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:52.399 [2024-07-23 01:51:05.392523] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:52.399 [2024-07-23 01:51:05.401647] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:52.399 [2024-07-23 01:51:05.402184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.399 [2024-07-23 01:51:05.402497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.399 [2024-07-23 01:51:05.402523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:52.399 [2024-07-23 01:51:05.402539] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:52.399 [2024-07-23 01:51:05.402693] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:52.399 [2024-07-23 01:51:05.402880] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:52.399 [2024-07-23 01:51:05.402907] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:52.399 [2024-07-23 01:51:05.402923] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:52.399 [2024-07-23 01:51:05.405183] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:52.399 [2024-07-23 01:51:05.414216] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:52.399 [2024-07-23 01:51:05.414589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.399 [2024-07-23 01:51:05.414788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.399 [2024-07-23 01:51:05.414821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:52.399 [2024-07-23 01:51:05.414838] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:52.399 [2024-07-23 01:51:05.415036] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:52.399 [2024-07-23 01:51:05.415187] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:52.399 [2024-07-23 01:51:05.415212] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:52.399 [2024-07-23 01:51:05.415227] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:52.399 [2024-07-23 01:51:05.417679] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:52.399 [2024-07-23 01:51:05.426647] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:52.399 [2024-07-23 01:51:05.427003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.399 [2024-07-23 01:51:05.427226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.399 [2024-07-23 01:51:05.427274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:52.399 [2024-07-23 01:51:05.427293] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:52.399 [2024-07-23 01:51:05.427441] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:52.399 [2024-07-23 01:51:05.427639] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:52.399 [2024-07-23 01:51:05.427669] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:52.399 [2024-07-23 01:51:05.427686] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:52.399 [2024-07-23 01:51:05.429982] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:52.399 [2024-07-23 01:51:05.439108] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:52.399 [2024-07-23 01:51:05.439515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.399 [2024-07-23 01:51:05.439695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.399 [2024-07-23 01:51:05.439726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:52.399 [2024-07-23 01:51:05.439744] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:52.399 [2024-07-23 01:51:05.439929] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:52.399 [2024-07-23 01:51:05.440080] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:52.399 [2024-07-23 01:51:05.440105] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:52.399 [2024-07-23 01:51:05.440121] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:52.399 [2024-07-23 01:51:05.442379] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:52.399 [2024-07-23 01:51:05.451825] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:52.399 [2024-07-23 01:51:05.452292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.399 [2024-07-23 01:51:05.452533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.399 [2024-07-23 01:51:05.452563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:52.399 [2024-07-23 01:51:05.452586] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:52.399 [2024-07-23 01:51:05.452744] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:52.399 [2024-07-23 01:51:05.452914] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:52.399 [2024-07-23 01:51:05.452947] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:52.399 [2024-07-23 01:51:05.452964] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:52.399 [2024-07-23 01:51:05.455229] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:52.399 [2024-07-23 01:51:05.464250] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:52.399 [2024-07-23 01:51:05.464628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.399 [2024-07-23 01:51:05.464822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.400 [2024-07-23 01:51:05.464854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:52.400 [2024-07-23 01:51:05.464872] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:52.400 [2024-07-23 01:51:05.465039] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:52.400 [2024-07-23 01:51:05.465226] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:52.400 [2024-07-23 01:51:05.465252] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:52.400 [2024-07-23 01:51:05.465268] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:52.400 [2024-07-23 01:51:05.467646] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:52.400 [2024-07-23 01:51:05.476931] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:52.400 [2024-07-23 01:51:05.477285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.400 [2024-07-23 01:51:05.477577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.400 [2024-07-23 01:51:05.477645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:52.400 [2024-07-23 01:51:05.477665] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:52.400 [2024-07-23 01:51:05.477867] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:52.400 [2024-07-23 01:51:05.478073] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:52.400 [2024-07-23 01:51:05.478098] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:52.400 [2024-07-23 01:51:05.478115] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:52.400 [2024-07-23 01:51:05.480518] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:52.400 [2024-07-23 01:51:05.489612] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:52.400 [2024-07-23 01:51:05.490021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.400 [2024-07-23 01:51:05.490190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.400 [2024-07-23 01:51:05.490217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:52.400 [2024-07-23 01:51:05.490233] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:52.400 [2024-07-23 01:51:05.490450] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:52.400 [2024-07-23 01:51:05.490603] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:52.400 [2024-07-23 01:51:05.490640] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:52.400 [2024-07-23 01:51:05.490658] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:52.400 [2024-07-23 01:51:05.493079] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:52.660 [2024-07-23 01:51:05.502005] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:52.660 [2024-07-23 01:51:05.502406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.660 [2024-07-23 01:51:05.502634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.660 [2024-07-23 01:51:05.502665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:52.660 [2024-07-23 01:51:05.502698] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:52.660 [2024-07-23 01:51:05.502860] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:52.660 [2024-07-23 01:51:05.503056] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:52.660 [2024-07-23 01:51:05.503081] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:52.660 [2024-07-23 01:51:05.503098] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:52.660 [2024-07-23 01:51:05.505523] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:52.660 [2024-07-23 01:51:05.514635] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:52.660 [2024-07-23 01:51:05.515030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.660 [2024-07-23 01:51:05.515192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.660 [2024-07-23 01:51:05.515222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:52.660 [2024-07-23 01:51:05.515240] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:52.660 [2024-07-23 01:51:05.515389] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:52.660 [2024-07-23 01:51:05.515576] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:52.660 [2024-07-23 01:51:05.515601] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:52.660 [2024-07-23 01:51:05.515625] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:52.660 [2024-07-23 01:51:05.517937] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:52.660 [2024-07-23 01:51:05.527313] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:52.660 [2024-07-23 01:51:05.527718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.660 [2024-07-23 01:51:05.527941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.660 [2024-07-23 01:51:05.527971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:52.660 [2024-07-23 01:51:05.527989] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:52.660 [2024-07-23 01:51:05.528172] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:52.660 [2024-07-23 01:51:05.528348] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:52.660 [2024-07-23 01:51:05.528374] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:52.660 [2024-07-23 01:51:05.528391] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:52.660 [2024-07-23 01:51:05.530913] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:52.660 [2024-07-23 01:51:05.539927] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:52.660 [2024-07-23 01:51:05.540360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.660 [2024-07-23 01:51:05.540513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.660 [2024-07-23 01:51:05.540543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:52.660 [2024-07-23 01:51:05.540560] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:52.660 [2024-07-23 01:51:05.540734] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:52.660 [2024-07-23 01:51:05.540941] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:52.660 [2024-07-23 01:51:05.540977] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:52.660 [2024-07-23 01:51:05.540994] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:52.660 [2024-07-23 01:51:05.543365] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:52.660 [2024-07-23 01:51:05.552484] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:52.660 [2024-07-23 01:51:05.552887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.660 [2024-07-23 01:51:05.553079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.660 [2024-07-23 01:51:05.553109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:52.660 [2024-07-23 01:51:05.553127] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:52.660 [2024-07-23 01:51:05.553310] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:52.660 [2024-07-23 01:51:05.553497] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:52.660 [2024-07-23 01:51:05.553523] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:52.660 [2024-07-23 01:51:05.553540] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:52.660 [2024-07-23 01:51:05.555736] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:52.660 [2024-07-23 01:51:05.565247] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:52.660 [2024-07-23 01:51:05.565595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.660 [2024-07-23 01:51:05.565795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.660 [2024-07-23 01:51:05.565825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:52.660 [2024-07-23 01:51:05.565843] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:52.660 [2024-07-23 01:51:05.566010] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:52.660 [2024-07-23 01:51:05.566179] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:52.660 [2024-07-23 01:51:05.566210] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:52.660 [2024-07-23 01:51:05.566228] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:52.660 [2024-07-23 01:51:05.568516] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:52.660 [2024-07-23 01:51:05.577847] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:52.660 [2024-07-23 01:51:05.578255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.660 [2024-07-23 01:51:05.578579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.660 [2024-07-23 01:51:05.578663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:52.660 [2024-07-23 01:51:05.578681] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:52.660 [2024-07-23 01:51:05.578822] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:52.660 [2024-07-23 01:51:05.579026] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:52.660 [2024-07-23 01:51:05.579051] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:52.660 [2024-07-23 01:51:05.579067] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:52.660 [2024-07-23 01:51:05.581471] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:52.660 [2024-07-23 01:51:05.590410] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:52.660 [2024-07-23 01:51:05.590832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.660 [2024-07-23 01:51:05.591225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.660 [2024-07-23 01:51:05.591278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:52.660 [2024-07-23 01:51:05.591298] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:52.660 [2024-07-23 01:51:05.591411] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:52.660 [2024-07-23 01:51:05.591580] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:52.660 [2024-07-23 01:51:05.591603] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:52.660 [2024-07-23 01:51:05.591628] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:52.661 [2024-07-23 01:51:05.593981] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:52.661 [2024-07-23 01:51:05.603020] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:52.661 [2024-07-23 01:51:05.603512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.661 [2024-07-23 01:51:05.603731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.661 [2024-07-23 01:51:05.603761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:52.661 [2024-07-23 01:51:05.603779] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:52.661 [2024-07-23 01:51:05.603908] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:52.661 [2024-07-23 01:51:05.604114] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:52.661 [2024-07-23 01:51:05.604138] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:52.661 [2024-07-23 01:51:05.604160] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:52.661 [2024-07-23 01:51:05.606509] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:52.661 [2024-07-23 01:51:05.615507] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:52.661 [2024-07-23 01:51:05.615907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.661 [2024-07-23 01:51:05.616239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.661 [2024-07-23 01:51:05.616292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:52.661 [2024-07-23 01:51:05.616309] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:52.661 [2024-07-23 01:51:05.616456] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:52.661 [2024-07-23 01:51:05.616656] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:52.661 [2024-07-23 01:51:05.616681] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:52.661 [2024-07-23 01:51:05.616697] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:52.661 [2024-07-23 01:51:05.618963] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:52.661 [2024-07-23 01:51:05.628138] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:52.661 [2024-07-23 01:51:05.628510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.661 [2024-07-23 01:51:05.628713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.661 [2024-07-23 01:51:05.628741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:52.661 [2024-07-23 01:51:05.628758] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:52.661 [2024-07-23 01:51:05.628922] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:52.661 [2024-07-23 01:51:05.629135] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:52.661 [2024-07-23 01:51:05.629160] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:52.661 [2024-07-23 01:51:05.629176] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:52.661 [2024-07-23 01:51:05.631419] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:52.661 [2024-07-23 01:51:05.640558] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:52.661 [2024-07-23 01:51:05.641078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.661 [2024-07-23 01:51:05.641410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.661 [2024-07-23 01:51:05.641450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:52.661 [2024-07-23 01:51:05.641465] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:52.661 [2024-07-23 01:51:05.641574] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:52.661 [2024-07-23 01:51:05.641749] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:52.661 [2024-07-23 01:51:05.641774] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:52.661 [2024-07-23 01:51:05.641790] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:52.661 [2024-07-23 01:51:05.644002] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:52.661 [2024-07-23 01:51:05.653112] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:52.661 [2024-07-23 01:51:05.653508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.661 [2024-07-23 01:51:05.653669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.661 [2024-07-23 01:51:05.653697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:52.661 [2024-07-23 01:51:05.653713] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:52.661 [2024-07-23 01:51:05.653907] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:52.661 [2024-07-23 01:51:05.654118] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:52.661 [2024-07-23 01:51:05.654143] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:52.661 [2024-07-23 01:51:05.654160] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:52.661 [2024-07-23 01:51:05.656564] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:52.661 [2024-07-23 01:51:05.665665] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:52.661 [2024-07-23 01:51:05.666041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.661 [2024-07-23 01:51:05.666201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.661 [2024-07-23 01:51:05.666228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:52.661 [2024-07-23 01:51:05.666261] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:52.661 [2024-07-23 01:51:05.666426] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:52.661 [2024-07-23 01:51:05.666579] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:52.661 [2024-07-23 01:51:05.666604] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:52.661 [2024-07-23 01:51:05.666631] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:52.661 [2024-07-23 01:51:05.669024] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:52.661 [2024-07-23 01:51:05.678251] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:52.661 [2024-07-23 01:51:05.678669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.661 [2024-07-23 01:51:05.678841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.661 [2024-07-23 01:51:05.678868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:52.661 [2024-07-23 01:51:05.678885] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:52.661 [2024-07-23 01:51:05.679081] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:52.661 [2024-07-23 01:51:05.679233] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:52.661 [2024-07-23 01:51:05.679258] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:52.661 [2024-07-23 01:51:05.679275] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:52.661 [2024-07-23 01:51:05.681552] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:52.661 [2024-07-23 01:51:05.690809] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:52.661 [2024-07-23 01:51:05.691180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.661 [2024-07-23 01:51:05.691387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.661 [2024-07-23 01:51:05.691413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:52.661 [2024-07-23 01:51:05.691429] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:52.661 [2024-07-23 01:51:05.691638] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:52.661 [2024-07-23 01:51:05.691773] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:52.661 [2024-07-23 01:51:05.691797] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:52.661 [2024-07-23 01:51:05.691813] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:52.661 [2024-07-23 01:51:05.694106] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:52.661 [2024-07-23 01:51:05.703257] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:52.661 [2024-07-23 01:51:05.703665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.661 [2024-07-23 01:51:05.703855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.661 [2024-07-23 01:51:05.703881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:52.661 [2024-07-23 01:51:05.703897] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:52.661 [2024-07-23 01:51:05.704069] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:52.661 [2024-07-23 01:51:05.704222] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:52.661 [2024-07-23 01:51:05.704247] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:52.662 [2024-07-23 01:51:05.704262] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:52.662 [2024-07-23 01:51:05.706735] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:52.662 [2024-07-23 01:51:05.715735] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:52.662 [2024-07-23 01:51:05.716135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.662 [2024-07-23 01:51:05.716322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.662 [2024-07-23 01:51:05.716352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:52.662 [2024-07-23 01:51:05.716370] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:52.662 [2024-07-23 01:51:05.716517] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:52.662 [2024-07-23 01:51:05.716691] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:52.662 [2024-07-23 01:51:05.716713] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:52.662 [2024-07-23 01:51:05.716725] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:52.662 [2024-07-23 01:51:05.719046] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:52.662 [2024-07-23 01:51:05.728327] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:52.662 [2024-07-23 01:51:05.728714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.662 [2024-07-23 01:51:05.728931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.662 [2024-07-23 01:51:05.728995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:52.662 [2024-07-23 01:51:05.729013] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:52.662 [2024-07-23 01:51:05.729179] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:52.662 [2024-07-23 01:51:05.729367] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:52.662 [2024-07-23 01:51:05.729391] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:52.662 [2024-07-23 01:51:05.729408] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:52.662 [2024-07-23 01:51:05.731786] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:52.662 [2024-07-23 01:51:05.740917] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:52.662 [2024-07-23 01:51:05.741271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.662 [2024-07-23 01:51:05.741604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.662 [2024-07-23 01:51:05.741681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:52.662 [2024-07-23 01:51:05.741699] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:52.662 [2024-07-23 01:51:05.741855] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:52.662 [2024-07-23 01:51:05.742041] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:52.662 [2024-07-23 01:51:05.742066] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:52.662 [2024-07-23 01:51:05.742082] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:52.662 [2024-07-23 01:51:05.744503] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:52.662 [2024-07-23 01:51:05.753590] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:52.662 [2024-07-23 01:51:05.754007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.662 [2024-07-23 01:51:05.754376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.662 [2024-07-23 01:51:05.754428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:52.662 [2024-07-23 01:51:05.754446] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:52.662 [2024-07-23 01:51:05.754593] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:52.662 [2024-07-23 01:51:05.754771] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:52.662 [2024-07-23 01:51:05.754796] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:52.662 [2024-07-23 01:51:05.754812] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:52.662 [2024-07-23 01:51:05.757171] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:52.922 [2024-07-23 01:51:05.766235] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:52.922 [2024-07-23 01:51:05.766574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.922 [2024-07-23 01:51:05.766820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.922 [2024-07-23 01:51:05.766857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:52.922 [2024-07-23 01:51:05.766876] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:52.922 [2024-07-23 01:51:05.767042] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:52.922 [2024-07-23 01:51:05.767212] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:52.922 [2024-07-23 01:51:05.767237] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:52.922 [2024-07-23 01:51:05.767253] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:52.922 [2024-07-23 01:51:05.769528] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:52.922 [2024-07-23 01:51:05.778753] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:52.922 [2024-07-23 01:51:05.779151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.922 [2024-07-23 01:51:05.779332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.922 [2024-07-23 01:51:05.779358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:52.922 [2024-07-23 01:51:05.779374] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:52.922 [2024-07-23 01:51:05.779502] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:52.922 [2024-07-23 01:51:05.779709] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:52.922 [2024-07-23 01:51:05.779730] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:52.922 [2024-07-23 01:51:05.779743] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:52.922 [2024-07-23 01:51:05.782212] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:52.922 [2024-07-23 01:51:05.791315] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:52.922 [2024-07-23 01:51:05.791776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.922 [2024-07-23 01:51:05.791995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.922 [2024-07-23 01:51:05.792021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:52.922 [2024-07-23 01:51:05.792037] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:52.922 [2024-07-23 01:51:05.792221] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:52.922 [2024-07-23 01:51:05.792437] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:52.922 [2024-07-23 01:51:05.792462] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:52.922 [2024-07-23 01:51:05.792478] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:52.922 [2024-07-23 01:51:05.794819] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:52.922 [2024-07-23 01:51:05.803751] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:52.922 [2024-07-23 01:51:05.804079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.922 [2024-07-23 01:51:05.804376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.922 [2024-07-23 01:51:05.804439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:52.922 [2024-07-23 01:51:05.804462] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:52.922 [2024-07-23 01:51:05.804593] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:52.922 [2024-07-23 01:51:05.804755] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:52.922 [2024-07-23 01:51:05.804776] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:52.922 [2024-07-23 01:51:05.804790] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:52.922 [2024-07-23 01:51:05.807144] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:52.922 [2024-07-23 01:51:05.816122] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:52.922 [2024-07-23 01:51:05.816516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.922 [2024-07-23 01:51:05.816694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.922 [2024-07-23 01:51:05.816722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:52.922 [2024-07-23 01:51:05.816739] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:52.922 [2024-07-23 01:51:05.816906] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:52.922 [2024-07-23 01:51:05.817085] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:52.922 [2024-07-23 01:51:05.817110] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:52.922 [2024-07-23 01:51:05.817126] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:52.922 [2024-07-23 01:51:05.819627] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:52.922 [2024-07-23 01:51:05.828798] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:52.922 [2024-07-23 01:51:05.829373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.922 [2024-07-23 01:51:05.829624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.922 [2024-07-23 01:51:05.829650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:52.922 [2024-07-23 01:51:05.829666] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:52.922 [2024-07-23 01:51:05.829809] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:52.922 [2024-07-23 01:51:05.829973] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:52.923 [2024-07-23 01:51:05.829997] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:52.923 [2024-07-23 01:51:05.830013] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:52.923 [2024-07-23 01:51:05.832397] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:52.923 [2024-07-23 01:51:05.841258] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:52.923 [2024-07-23 01:51:05.841590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.923 [2024-07-23 01:51:05.841782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.923 [2024-07-23 01:51:05.841810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:52.923 [2024-07-23 01:51:05.841826] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:52.923 [2024-07-23 01:51:05.842014] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:52.923 [2024-07-23 01:51:05.842148] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:52.923 [2024-07-23 01:51:05.842173] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:52.923 [2024-07-23 01:51:05.842189] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:52.923 [2024-07-23 01:51:05.844401] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:52.923 [2024-07-23 01:51:05.853880] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:52.923 [2024-07-23 01:51:05.854264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.923 [2024-07-23 01:51:05.854523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.923 [2024-07-23 01:51:05.854553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:52.923 [2024-07-23 01:51:05.854571] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:52.923 [2024-07-23 01:51:05.854732] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:52.923 [2024-07-23 01:51:05.854861] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:52.923 [2024-07-23 01:51:05.854882] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:52.923 [2024-07-23 01:51:05.854896] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:52.923 [2024-07-23 01:51:05.857098] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:52.923 [2024-07-23 01:51:05.866546] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:52.923 [2024-07-23 01:51:05.866955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.923 [2024-07-23 01:51:05.867265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.923 [2024-07-23 01:51:05.867294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:52.923 [2024-07-23 01:51:05.867312] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:52.923 [2024-07-23 01:51:05.867477] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:52.923 [2024-07-23 01:51:05.867611] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:52.923 [2024-07-23 01:51:05.867647] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:52.923 [2024-07-23 01:51:05.867663] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:52.923 [2024-07-23 01:51:05.870019] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:52.923 [2024-07-23 01:51:05.879066] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:52.923 [2024-07-23 01:51:05.879438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.923 [2024-07-23 01:51:05.879694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.923 [2024-07-23 01:51:05.879721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:52.923 [2024-07-23 01:51:05.879738] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:52.923 [2024-07-23 01:51:05.879887] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:52.923 [2024-07-23 01:51:05.880052] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:52.923 [2024-07-23 01:51:05.880077] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:52.923 [2024-07-23 01:51:05.880093] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:52.923 [2024-07-23 01:51:05.882280] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:52.923 [2024-07-23 01:51:05.891604] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:52.923 [2024-07-23 01:51:05.891897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.923 [2024-07-23 01:51:05.892072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.923 [2024-07-23 01:51:05.892100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:52.923 [2024-07-23 01:51:05.892117] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:52.923 [2024-07-23 01:51:05.892294] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:52.923 [2024-07-23 01:51:05.892465] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:52.923 [2024-07-23 01:51:05.892489] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:52.923 [2024-07-23 01:51:05.892505] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:52.923 [2024-07-23 01:51:05.895052] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:52.923 [2024-07-23 01:51:05.904314] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:52.923 [2024-07-23 01:51:05.904722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.923 [2024-07-23 01:51:05.904901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.923 [2024-07-23 01:51:05.904928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:52.923 [2024-07-23 01:51:05.904944] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:52.923 [2024-07-23 01:51:05.905104] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:52.923 [2024-07-23 01:51:05.905305] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:52.923 [2024-07-23 01:51:05.905330] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:52.923 [2024-07-23 01:51:05.905346] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:52.923 [2024-07-23 01:51:05.907562] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:52.923 [2024-07-23 01:51:05.916889] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:52.923 [2024-07-23 01:51:05.917306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.923 [2024-07-23 01:51:05.917536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.923 [2024-07-23 01:51:05.917565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:52.923 [2024-07-23 01:51:05.917583] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:52.923 [2024-07-23 01:51:05.917748] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:52.923 [2024-07-23 01:51:05.917919] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:52.923 [2024-07-23 01:51:05.917949] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:52.923 [2024-07-23 01:51:05.917966] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:52.923 [2024-07-23 01:51:05.920231] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:52.923 [2024-07-23 01:51:05.929559] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:52.923 [2024-07-23 01:51:05.929919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.923 [2024-07-23 01:51:05.930067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.923 [2024-07-23 01:51:05.930093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:52.923 [2024-07-23 01:51:05.930125] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:52.923 [2024-07-23 01:51:05.930289] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:52.923 [2024-07-23 01:51:05.930514] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:52.923 [2024-07-23 01:51:05.930538] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:52.923 [2024-07-23 01:51:05.930554] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:52.923 [2024-07-23 01:51:05.933123] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:52.923 [2024-07-23 01:51:05.942160] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:52.923 [2024-07-23 01:51:05.942515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.923 [2024-07-23 01:51:05.942691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.923 [2024-07-23 01:51:05.942719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:52.923 [2024-07-23 01:51:05.942735] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:52.923 [2024-07-23 01:51:05.942917] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:52.924 [2024-07-23 01:51:05.943069] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:52.924 [2024-07-23 01:51:05.943094] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:52.924 [2024-07-23 01:51:05.943110] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:52.924 [2024-07-23 01:51:05.945401] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:52.924 [2024-07-23 01:51:05.954716] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:52.924 [2024-07-23 01:51:05.955084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.924 [2024-07-23 01:51:05.955285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.924 [2024-07-23 01:51:05.955319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:52.924 [2024-07-23 01:51:05.955354] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:52.924 [2024-07-23 01:51:05.955521] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:52.924 [2024-07-23 01:51:05.955645] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:52.924 [2024-07-23 01:51:05.955686] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:52.924 [2024-07-23 01:51:05.955706] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:52.924 [2024-07-23 01:51:05.958023] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:52.924 [2024-07-23 01:51:05.967353] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:52.924 [2024-07-23 01:51:05.967719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.924 [2024-07-23 01:51:05.967863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.924 [2024-07-23 01:51:05.967890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:52.924 [2024-07-23 01:51:05.967906] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:52.924 [2024-07-23 01:51:05.968122] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:52.924 [2024-07-23 01:51:05.968273] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:52.924 [2024-07-23 01:51:05.968298] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:52.924 [2024-07-23 01:51:05.968314] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:52.924 [2024-07-23 01:51:05.970629] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:52.924 [2024-07-23 01:51:05.979918] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:52.924 [2024-07-23 01:51:05.980361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.924 [2024-07-23 01:51:05.980554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.924 [2024-07-23 01:51:05.980583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:52.924 [2024-07-23 01:51:05.980602] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:52.924 [2024-07-23 01:51:05.980794] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:52.924 [2024-07-23 01:51:05.981029] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:52.924 [2024-07-23 01:51:05.981054] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:52.924 [2024-07-23 01:51:05.981070] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:52.924 [2024-07-23 01:51:05.983370] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:52.924 [2024-07-23 01:51:05.992207] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:52.924 [2024-07-23 01:51:05.992566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.924 [2024-07-23 01:51:05.992748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.924 [2024-07-23 01:51:05.992775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:52.924 [2024-07-23 01:51:05.992792] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:52.924 [2024-07-23 01:51:05.992963] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:52.924 [2024-07-23 01:51:05.993169] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:52.924 [2024-07-23 01:51:05.993193] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:52.924 [2024-07-23 01:51:05.993210] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:52.924 [2024-07-23 01:51:05.995518] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:52.924 [2024-07-23 01:51:06.004674] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:52.924 [2024-07-23 01:51:06.004955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.924 [2024-07-23 01:51:06.005161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.924 [2024-07-23 01:51:06.005191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:52.924 [2024-07-23 01:51:06.005209] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:52.924 [2024-07-23 01:51:06.005358] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:52.924 [2024-07-23 01:51:06.005492] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:52.924 [2024-07-23 01:51:06.005516] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:52.924 [2024-07-23 01:51:06.005532] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:52.924 [2024-07-23 01:51:06.007956] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:52.924 [2024-07-23 01:51:06.017222] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:52.924 [2024-07-23 01:51:06.017558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.924 [2024-07-23 01:51:06.017721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.924 [2024-07-23 01:51:06.017751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:52.924 [2024-07-23 01:51:06.017769] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:52.924 [2024-07-23 01:51:06.017934] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:52.924 [2024-07-23 01:51:06.018086] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:52.924 [2024-07-23 01:51:06.018110] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:52.924 [2024-07-23 01:51:06.018126] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.184 [2024-07-23 01:51:06.020370] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.184 [2024-07-23 01:51:06.029853] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.184 [2024-07-23 01:51:06.030252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.184 [2024-07-23 01:51:06.030446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.184 [2024-07-23 01:51:06.030475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:53.184 [2024-07-23 01:51:06.030492] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:53.184 [2024-07-23 01:51:06.030632] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:53.184 [2024-07-23 01:51:06.030820] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.184 [2024-07-23 01:51:06.030844] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.184 [2024-07-23 01:51:06.030860] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.184 [2024-07-23 01:51:06.033135] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.184 [2024-07-23 01:51:06.042301] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.184 [2024-07-23 01:51:06.042674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.184 [2024-07-23 01:51:06.042868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.184 [2024-07-23 01:51:06.042898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:53.184 [2024-07-23 01:51:06.042917] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:53.184 [2024-07-23 01:51:06.043065] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:53.184 [2024-07-23 01:51:06.043253] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.184 [2024-07-23 01:51:06.043277] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.184 [2024-07-23 01:51:06.043294] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.184 [2024-07-23 01:51:06.045636] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.184 [2024-07-23 01:51:06.054831] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.184 [2024-07-23 01:51:06.055236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.184 [2024-07-23 01:51:06.055442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.184 [2024-07-23 01:51:06.055471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:53.184 [2024-07-23 01:51:06.055490] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:53.184 [2024-07-23 01:51:06.055666] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:53.184 [2024-07-23 01:51:06.055873] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.184 [2024-07-23 01:51:06.055898] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.184 [2024-07-23 01:51:06.055914] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.184 [2024-07-23 01:51:06.058244] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.184 [2024-07-23 01:51:06.067361] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.184 [2024-07-23 01:51:06.067707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.184 [2024-07-23 01:51:06.067905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.184 [2024-07-23 01:51:06.067946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:53.184 [2024-07-23 01:51:06.067962] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:53.184 [2024-07-23 01:51:06.068142] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:53.184 [2024-07-23 01:51:06.068312] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.184 [2024-07-23 01:51:06.068336] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.184 [2024-07-23 01:51:06.068352] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.184 [2024-07-23 01:51:06.070675] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.184 [2024-07-23 01:51:06.080119] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.184 [2024-07-23 01:51:06.080499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.184 [2024-07-23 01:51:06.080696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.184 [2024-07-23 01:51:06.080726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:53.184 [2024-07-23 01:51:06.080745] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:53.184 [2024-07-23 01:51:06.080911] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:53.184 [2024-07-23 01:51:06.081045] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:53.184 [2024-07-23 01:51:06.081069] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:53.184 [2024-07-23 01:51:06.081085] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:53.184 [2024-07-23 01:51:06.083417] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:53.184 [2024-07-23 01:51:06.092666] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.184 [2024-07-23 01:51:06.093126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.184 [2024-07-23 01:51:06.093354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.184 [2024-07-23 01:51:06.093383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:53.184 [2024-07-23 01:51:06.093401] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:53.184 [2024-07-23 01:51:06.093585] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:53.184 [2024-07-23 01:51:06.093802] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:53.184 [2024-07-23 01:51:06.093828] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:53.184 [2024-07-23 01:51:06.093844] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:53.184 [2024-07-23 01:51:06.096210] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:53.184 [2024-07-23 01:51:06.105384] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.184 [2024-07-23 01:51:06.105754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.184 [2024-07-23 01:51:06.105981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.184 [2024-07-23 01:51:06.106007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:53.184 [2024-07-23 01:51:06.106038] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:53.184 [2024-07-23 01:51:06.106216] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:53.184 [2024-07-23 01:51:06.106332] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:53.184 [2024-07-23 01:51:06.106356] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:53.184 [2024-07-23 01:51:06.106371] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:53.184 [2024-07-23 01:51:06.108732] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:53.184 [2024-07-23 01:51:06.117907] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.184 [2024-07-23 01:51:06.118279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.184 [2024-07-23 01:51:06.118462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.184 [2024-07-23 01:51:06.118496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:53.184 [2024-07-23 01:51:06.118516] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:53.184 [2024-07-23 01:51:06.118675] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:53.184 [2024-07-23 01:51:06.118863] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:53.184 [2024-07-23 01:51:06.118888] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:53.184 [2024-07-23 01:51:06.118904] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:53.184 [2024-07-23 01:51:06.121235] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:53.184 [2024-07-23 01:51:06.130417] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.184 [2024-07-23 01:51:06.130882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.184 [2024-07-23 01:51:06.131114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.184 [2024-07-23 01:51:06.131163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:53.184 [2024-07-23 01:51:06.131182] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:53.185 [2024-07-23 01:51:06.131348] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:53.185 [2024-07-23 01:51:06.131500] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:53.185 [2024-07-23 01:51:06.131524] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:53.185 [2024-07-23 01:51:06.131541] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:53.185 [2024-07-23 01:51:06.133830] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:53.185 [2024-07-23 01:51:06.142862] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.185 [2024-07-23 01:51:06.143257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.185 [2024-07-23 01:51:06.143444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.185 [2024-07-23 01:51:06.143474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:53.185 [2024-07-23 01:51:06.143492] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:53.185 [2024-07-23 01:51:06.143683] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:53.185 [2024-07-23 01:51:06.143821] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:53.185 [2024-07-23 01:51:06.143844] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:53.185 [2024-07-23 01:51:06.143858] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:53.185 [2024-07-23 01:51:06.146227] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:53.185 [2024-07-23 01:51:06.155437] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.185 [2024-07-23 01:51:06.155816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.185 [2024-07-23 01:51:06.156011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.185 [2024-07-23 01:51:06.156040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:53.185 [2024-07-23 01:51:06.156063] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:53.185 [2024-07-23 01:51:06.156247] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:53.185 [2024-07-23 01:51:06.156399] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:53.185 [2024-07-23 01:51:06.156424] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:53.185 [2024-07-23 01:51:06.156439] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:53.185 [2024-07-23 01:51:06.158860] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:53.185 [2024-07-23 01:51:06.168049] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.185 [2024-07-23 01:51:06.168461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.185 [2024-07-23 01:51:06.168706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.185 [2024-07-23 01:51:06.168734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:53.185 [2024-07-23 01:51:06.168750] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:53.185 [2024-07-23 01:51:06.168866] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:53.185 [2024-07-23 01:51:06.169040] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:53.185 [2024-07-23 01:51:06.169065] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:53.185 [2024-07-23 01:51:06.169081] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:53.185 [2024-07-23 01:51:06.171438] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:53.185 [2024-07-23 01:51:06.180540] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.185 [2024-07-23 01:51:06.180908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.185 [2024-07-23 01:51:06.181119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.185 [2024-07-23 01:51:06.181145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:53.185 [2024-07-23 01:51:06.181162] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:53.185 [2024-07-23 01:51:06.181325] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:53.185 [2024-07-23 01:51:06.181494] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:53.185 [2024-07-23 01:51:06.181519] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:53.185 [2024-07-23 01:51:06.181535] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:53.185 [2024-07-23 01:51:06.183863] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:53.185 [2024-07-23 01:51:06.193154] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.185 [2024-07-23 01:51:06.193510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.185 [2024-07-23 01:51:06.193709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.185 [2024-07-23 01:51:06.193740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:53.185 [2024-07-23 01:51:06.193758] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:53.185 [2024-07-23 01:51:06.193929] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:53.185 [2024-07-23 01:51:06.194081] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:53.185 [2024-07-23 01:51:06.194106] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:53.185 [2024-07-23 01:51:06.194122] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:53.185 [2024-07-23 01:51:06.196378] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:53.185 [2024-07-23 01:51:06.205654] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.185 [2024-07-23 01:51:06.206020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.185 [2024-07-23 01:51:06.206283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.185 [2024-07-23 01:51:06.206312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:53.185 [2024-07-23 01:51:06.206330] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:53.185 [2024-07-23 01:51:06.206531] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:53.185 [2024-07-23 01:51:06.206734] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:53.185 [2024-07-23 01:51:06.206769] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:53.185 [2024-07-23 01:51:06.206786] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:53.185 [2024-07-23 01:51:06.209011] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:53.185 [2024-07-23 01:51:06.218225] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.185 [2024-07-23 01:51:06.218519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.185 [2024-07-23 01:51:06.218727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.185 [2024-07-23 01:51:06.218777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:53.185 [2024-07-23 01:51:06.218796] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:53.185 [2024-07-23 01:51:06.218962] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:53.185 [2024-07-23 01:51:06.219133] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:53.185 [2024-07-23 01:51:06.219157] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:53.185 [2024-07-23 01:51:06.219173] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:53.185 [2024-07-23 01:51:06.221430] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:53.185 [2024-07-23 01:51:06.230710] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.185 [2024-07-23 01:51:06.231148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.185 [2024-07-23 01:51:06.231423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.185 [2024-07-23 01:51:06.231452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:53.185 [2024-07-23 01:51:06.231470] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:53.185 [2024-07-23 01:51:06.231627] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:53.185 [2024-07-23 01:51:06.231828] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:53.185 [2024-07-23 01:51:06.231853] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:53.185 [2024-07-23 01:51:06.231869] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:53.185 [2024-07-23 01:51:06.234239] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:53.185 [2024-07-23 01:51:06.243184] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.185 [2024-07-23 01:51:06.243550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.185 [2024-07-23 01:51:06.243810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.185 [2024-07-23 01:51:06.243838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:53.185 [2024-07-23 01:51:06.243855] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:53.186 [2024-07-23 01:51:06.243997] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:53.186 [2024-07-23 01:51:06.244151] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:53.186 [2024-07-23 01:51:06.244176] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:53.186 [2024-07-23 01:51:06.244192] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:53.186 [2024-07-23 01:51:06.246480] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:53.186 [2024-07-23 01:51:06.255747] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.186 [2024-07-23 01:51:06.256163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.186 [2024-07-23 01:51:06.256417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.186 [2024-07-23 01:51:06.256446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:53.186 [2024-07-23 01:51:06.256464] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:53.186 [2024-07-23 01:51:06.256594] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:53.186 [2024-07-23 01:51:06.256747] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:53.186 [2024-07-23 01:51:06.256772] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:53.186 [2024-07-23 01:51:06.256788] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:53.186 [2024-07-23 01:51:06.259156] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:53.186 [2024-07-23 01:51:06.268124] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.186 [2024-07-23 01:51:06.268591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.186 [2024-07-23 01:51:06.268784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.186 [2024-07-23 01:51:06.268813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:53.186 [2024-07-23 01:51:06.268837] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:53.186 [2024-07-23 01:51:06.269038] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:53.186 [2024-07-23 01:51:06.269208] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:53.186 [2024-07-23 01:51:06.269238] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:53.186 [2024-07-23 01:51:06.269255] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:53.186 [2024-07-23 01:51:06.271471] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:53.186 [2024-07-23 01:51:06.280708] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.186 [2024-07-23 01:51:06.281109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.186 [2024-07-23 01:51:06.281311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.186 [2024-07-23 01:51:06.281340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:53.186 [2024-07-23 01:51:06.281357] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:53.445 [2024-07-23 01:51:06.281469] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:53.445 [2024-07-23 01:51:06.281688] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:53.445 [2024-07-23 01:51:06.281714] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:53.445 [2024-07-23 01:51:06.281730] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:53.445 [2024-07-23 01:51:06.284166] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:53.445 [2024-07-23 01:51:06.293265] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.445 [2024-07-23 01:51:06.293631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.445 [2024-07-23 01:51:06.293867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.445 [2024-07-23 01:51:06.293917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:53.445 [2024-07-23 01:51:06.293935] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:53.445 [2024-07-23 01:51:06.294083] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:53.445 [2024-07-23 01:51:06.294270] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:53.445 [2024-07-23 01:51:06.294294] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:53.445 [2024-07-23 01:51:06.294310] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:53.445 [2024-07-23 01:51:06.296822] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:53.445 [2024-07-23 01:51:06.306024] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.445 [2024-07-23 01:51:06.306453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.445 [2024-07-23 01:51:06.306687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.445 [2024-07-23 01:51:06.306718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:53.445 [2024-07-23 01:51:06.306737] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:53.445 [2024-07-23 01:51:06.306885] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:53.445 [2024-07-23 01:51:06.307036] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:53.445 [2024-07-23 01:51:06.307061] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:53.445 [2024-07-23 01:51:06.307091] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:53.445 [2024-07-23 01:51:06.309353] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:53.445 [2024-07-23 01:51:06.318569] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.446 [2024-07-23 01:51:06.318954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.446 [2024-07-23 01:51:06.319182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.446 [2024-07-23 01:51:06.319232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:53.446 [2024-07-23 01:51:06.319250] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:53.446 [2024-07-23 01:51:06.319380] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:53.446 [2024-07-23 01:51:06.319514] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:53.446 [2024-07-23 01:51:06.319538] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:53.446 [2024-07-23 01:51:06.319554] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:53.446 [2024-07-23 01:51:06.321911] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:53.446 [2024-07-23 01:51:06.331140] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.446 [2024-07-23 01:51:06.331538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.446 [2024-07-23 01:51:06.331736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.446 [2024-07-23 01:51:06.331766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:53.446 [2024-07-23 01:51:06.331784] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:53.446 [2024-07-23 01:51:06.331986] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:53.446 [2024-07-23 01:51:06.332157] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:53.446 [2024-07-23 01:51:06.332181] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:53.446 [2024-07-23 01:51:06.332197] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:53.446 [2024-07-23 01:51:06.334509] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:53.446 [2024-07-23 01:51:06.343669] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.446 [2024-07-23 01:51:06.344110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.446 [2024-07-23 01:51:06.344447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.446 [2024-07-23 01:51:06.344498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:53.446 [2024-07-23 01:51:06.344516] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:53.446 [2024-07-23 01:51:06.344634] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:53.446 [2024-07-23 01:51:06.344768] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:53.446 [2024-07-23 01:51:06.344792] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:53.446 [2024-07-23 01:51:06.344808] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:53.446 [2024-07-23 01:51:06.347183] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:53.446 [2024-07-23 01:51:06.356304] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.446 [2024-07-23 01:51:06.356721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.446 [2024-07-23 01:51:06.356919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.446 [2024-07-23 01:51:06.356948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:53.446 [2024-07-23 01:51:06.356966] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:53.446 [2024-07-23 01:51:06.357113] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:53.446 [2024-07-23 01:51:06.357229] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:53.446 [2024-07-23 01:51:06.357252] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:53.446 [2024-07-23 01:51:06.357268] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:53.446 [2024-07-23 01:51:06.359663] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:53.446 [2024-07-23 01:51:06.369123] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.446 [2024-07-23 01:51:06.369535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.446 [2024-07-23 01:51:06.369746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.446 [2024-07-23 01:51:06.369775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:53.446 [2024-07-23 01:51:06.369792] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:53.446 [2024-07-23 01:51:06.369957] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:53.446 [2024-07-23 01:51:06.370145] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:53.446 [2024-07-23 01:51:06.370169] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:53.446 [2024-07-23 01:51:06.370185] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:53.446 [2024-07-23 01:51:06.372444] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:53.446 [2024-07-23 01:51:06.381619] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.446 [2024-07-23 01:51:06.381982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.446 [2024-07-23 01:51:06.382152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.446 [2024-07-23 01:51:06.382178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:53.446 [2024-07-23 01:51:06.382194] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:53.446 [2024-07-23 01:51:06.382406] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:53.446 [2024-07-23 01:51:06.382595] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:53.446 [2024-07-23 01:51:06.382629] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:53.446 [2024-07-23 01:51:06.382647] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:53.446 [2024-07-23 01:51:06.385003] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:53.446 [2024-07-23 01:51:06.394175] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.446 [2024-07-23 01:51:06.394684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.446 [2024-07-23 01:51:06.394975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.446 [2024-07-23 01:51:06.395025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:53.446 [2024-07-23 01:51:06.395042] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:53.446 [2024-07-23 01:51:06.395153] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:53.446 [2024-07-23 01:51:06.395323] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:53.446 [2024-07-23 01:51:06.395347] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:53.446 [2024-07-23 01:51:06.395363] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:53.446 [2024-07-23 01:51:06.397758] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:53.446 [2024-07-23 01:51:06.406715] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:53.446 [2024-07-23 01:51:06.407060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.446 [2024-07-23 01:51:06.407303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.446 [2024-07-23 01:51:06.407351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:53.446 [2024-07-23 01:51:06.407369] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:53.446 [2024-07-23 01:51:06.407534] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:53.446 [2024-07-23 01:51:06.407767] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:53.446 [2024-07-23 01:51:06.407792] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:53.446 [2024-07-23 01:51:06.407808] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:53.446 [2024-07-23 01:51:06.410030] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:53.446 [2024-07-23 01:51:06.419481] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.446 [2024-07-23 01:51:06.419948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.446 [2024-07-23 01:51:06.420242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.446 [2024-07-23 01:51:06.420268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:53.446 [2024-07-23 01:51:06.420284] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:53.446 [2024-07-23 01:51:06.420491] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:53.446 [2024-07-23 01:51:06.420674] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.446 [2024-07-23 01:51:06.420699] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.446 [2024-07-23 01:51:06.420715] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.447 [2024-07-23 01:51:06.423231] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.447 [2024-07-23 01:51:06.432111] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.447 [2024-07-23 01:51:06.432484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.447 [2024-07-23 01:51:06.432696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.447 [2024-07-23 01:51:06.432727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:53.447 [2024-07-23 01:51:06.432745] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:53.447 [2024-07-23 01:51:06.432910] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:53.447 [2024-07-23 01:51:06.433128] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.447 [2024-07-23 01:51:06.433152] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.447 [2024-07-23 01:51:06.433168] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.447 [2024-07-23 01:51:06.435570] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.447 [2024-07-23 01:51:06.444792] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.447 [2024-07-23 01:51:06.445175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.447 [2024-07-23 01:51:06.445443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.447 [2024-07-23 01:51:06.445472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:53.447 [2024-07-23 01:51:06.445490] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:53.447 [2024-07-23 01:51:06.445666] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:53.447 [2024-07-23 01:51:06.445837] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.447 [2024-07-23 01:51:06.445861] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.447 [2024-07-23 01:51:06.445877] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.447 [2024-07-23 01:51:06.448303] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.447 [2024-07-23 01:51:06.457587] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.447 [2024-07-23 01:51:06.457942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.447 [2024-07-23 01:51:06.458202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.447 [2024-07-23 01:51:06.458291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:53.447 [2024-07-23 01:51:06.458310] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:53.447 [2024-07-23 01:51:06.458459] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:53.447 [2024-07-23 01:51:06.458659] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.447 [2024-07-23 01:51:06.458684] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.447 [2024-07-23 01:51:06.458700] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.447 [2024-07-23 01:51:06.461193] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.447 [2024-07-23 01:51:06.470039] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.447 [2024-07-23 01:51:06.470400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.447 [2024-07-23 01:51:06.470626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.447 [2024-07-23 01:51:06.470658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:53.447 [2024-07-23 01:51:06.470675] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:53.447 [2024-07-23 01:51:06.470850] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:53.447 [2024-07-23 01:51:06.471036] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.447 [2024-07-23 01:51:06.471066] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.447 [2024-07-23 01:51:06.471082] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.447 [2024-07-23 01:51:06.473522] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.447 [2024-07-23 01:51:06.482486] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.447 [2024-07-23 01:51:06.482820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.447 [2024-07-23 01:51:06.483129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.447 [2024-07-23 01:51:06.483182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:53.447 [2024-07-23 01:51:06.483199] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:53.447 [2024-07-23 01:51:06.483364] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:53.447 [2024-07-23 01:51:06.483571] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.447 [2024-07-23 01:51:06.483595] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.447 [2024-07-23 01:51:06.483610] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.447 [2024-07-23 01:51:06.486154] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.447 [2024-07-23 01:51:06.494732] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.447 [2024-07-23 01:51:06.495105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.447 [2024-07-23 01:51:06.495431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.447 [2024-07-23 01:51:06.495484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:53.447 [2024-07-23 01:51:06.495502] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:53.447 [2024-07-23 01:51:06.495681] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:53.447 [2024-07-23 01:51:06.495870] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.447 [2024-07-23 01:51:06.495894] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.447 [2024-07-23 01:51:06.495910] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.447 [2024-07-23 01:51:06.498203] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.447 [2024-07-23 01:51:06.507170] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.447 [2024-07-23 01:51:06.507513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.447 [2024-07-23 01:51:06.507693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.447 [2024-07-23 01:51:06.507726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:53.447 [2024-07-23 01:51:06.507749] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:53.447 [2024-07-23 01:51:06.507881] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:53.447 [2024-07-23 01:51:06.508050] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.447 [2024-07-23 01:51:06.508075] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.447 [2024-07-23 01:51:06.508091] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.447 [2024-07-23 01:51:06.510458] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.447 [2024-07-23 01:51:06.519695] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.447 [2024-07-23 01:51:06.520141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.447 [2024-07-23 01:51:06.520411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.447 [2024-07-23 01:51:06.520442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:53.447 [2024-07-23 01:51:06.520460] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:53.447 [2024-07-23 01:51:06.520592] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:53.447 [2024-07-23 01:51:06.520824] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.447 [2024-07-23 01:51:06.520849] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.447 [2024-07-23 01:51:06.520865] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.447 [2024-07-23 01:51:06.523243] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.447 [2024-07-23 01:51:06.532350] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.447 [2024-07-23 01:51:06.532742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.447 [2024-07-23 01:51:06.532951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.447 [2024-07-23 01:51:06.532996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:53.447 [2024-07-23 01:51:06.533015] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:53.447 [2024-07-23 01:51:06.533127] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:53.447 [2024-07-23 01:51:06.533296] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.448 [2024-07-23 01:51:06.533320] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.448 [2024-07-23 01:51:06.533336] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.448 [2024-07-23 01:51:06.535558] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.709 [2024-07-23 01:51:06.544966] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.709 [2024-07-23 01:51:06.545293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.709 [2024-07-23 01:51:06.545510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.709 [2024-07-23 01:51:06.545540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:53.709 [2024-07-23 01:51:06.545558] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:53.709 [2024-07-23 01:51:06.545723] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:53.709 [2024-07-23 01:51:06.545876] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.709 [2024-07-23 01:51:06.545902] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.709 [2024-07-23 01:51:06.545918] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.709 [2024-07-23 01:51:06.548145] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.709 [2024-07-23 01:51:06.557688] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.709 [2024-07-23 01:51:06.558079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.709 [2024-07-23 01:51:06.558313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.709 [2024-07-23 01:51:06.558362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:53.709 [2024-07-23 01:51:06.558380] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:53.709 [2024-07-23 01:51:06.558547] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:53.709 [2024-07-23 01:51:06.558695] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.709 [2024-07-23 01:51:06.558720] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.709 [2024-07-23 01:51:06.558737] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.709 [2024-07-23 01:51:06.561234] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.709 [2024-07-23 01:51:06.570239] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.709 [2024-07-23 01:51:06.570634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.709 [2024-07-23 01:51:06.570799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.709 [2024-07-23 01:51:06.570829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:53.709 [2024-07-23 01:51:06.570847] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:53.709 [2024-07-23 01:51:06.570959] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:53.709 [2024-07-23 01:51:06.571128] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.709 [2024-07-23 01:51:06.571152] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.709 [2024-07-23 01:51:06.571168] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.709 [2024-07-23 01:51:06.573598] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.709 [2024-07-23 01:51:06.582847] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.709 [2024-07-23 01:51:06.583255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.709 [2024-07-23 01:51:06.583476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.709 [2024-07-23 01:51:06.583526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:53.709 [2024-07-23 01:51:06.583545] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:53.709 [2024-07-23 01:51:06.583726] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:53.709 [2024-07-23 01:51:06.583902] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.709 [2024-07-23 01:51:06.583928] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.709 [2024-07-23 01:51:06.583944] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.709 [2024-07-23 01:51:06.586424] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.709 [2024-07-23 01:51:06.595347] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.709 [2024-07-23 01:51:06.595712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.709 [2024-07-23 01:51:06.595915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.709 [2024-07-23 01:51:06.595949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:53.709 [2024-07-23 01:51:06.595989] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:53.709 [2024-07-23 01:51:06.596190] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:53.709 [2024-07-23 01:51:06.596397] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.709 [2024-07-23 01:51:06.596422] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.709 [2024-07-23 01:51:06.596439] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.709 [2024-07-23 01:51:06.598763] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.709 [2024-07-23 01:51:06.607985] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.709 [2024-07-23 01:51:06.608327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.709 [2024-07-23 01:51:06.608516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.709 [2024-07-23 01:51:06.608545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:53.709 [2024-07-23 01:51:06.608563] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:53.709 [2024-07-23 01:51:06.608722] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:53.709 [2024-07-23 01:51:06.608874] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.709 [2024-07-23 01:51:06.608899] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.709 [2024-07-23 01:51:06.608916] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.709 [2024-07-23 01:51:06.611213] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.709 [2024-07-23 01:51:06.620446] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.709 [2024-07-23 01:51:06.620895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.709 [2024-07-23 01:51:06.621095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.709 [2024-07-23 01:51:06.621125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:53.709 [2024-07-23 01:51:06.621143] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:53.709 [2024-07-23 01:51:06.621273] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:53.709 [2024-07-23 01:51:06.621423] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.709 [2024-07-23 01:51:06.621452] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.709 [2024-07-23 01:51:06.621469] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.709 [2024-07-23 01:51:06.623792] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.709 [2024-07-23 01:51:06.633127] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.709 [2024-07-23 01:51:06.633677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.709 [2024-07-23 01:51:06.633870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.709 [2024-07-23 01:51:06.633899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:53.709 [2024-07-23 01:51:06.633918] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:53.709 [2024-07-23 01:51:06.634103] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:53.709 [2024-07-23 01:51:06.634254] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.709 [2024-07-23 01:51:06.634280] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.709 [2024-07-23 01:51:06.634296] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.709 [2024-07-23 01:51:06.636695] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.709 [2024-07-23 01:51:06.645524] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.709 [2024-07-23 01:51:06.645908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.709 [2024-07-23 01:51:06.646103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.709 [2024-07-23 01:51:06.646133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:53.709 [2024-07-23 01:51:06.646150] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:53.709 [2024-07-23 01:51:06.646334] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:53.709 [2024-07-23 01:51:06.646449] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.709 [2024-07-23 01:51:06.646472] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.709 [2024-07-23 01:51:06.646488] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.709 [2024-07-23 01:51:06.648778] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.709 [2024-07-23 01:51:06.658124] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.709 [2024-07-23 01:51:06.658501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.709 [2024-07-23 01:51:06.658686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.709 [2024-07-23 01:51:06.658728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:53.709 [2024-07-23 01:51:06.658743] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:53.709 [2024-07-23 01:51:06.658915] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:53.709 [2024-07-23 01:51:06.659094] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.709 [2024-07-23 01:51:06.659119] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.709 [2024-07-23 01:51:06.659141] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.709 [2024-07-23 01:51:06.661420] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.709 [2024-07-23 01:51:06.670879] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.709 [2024-07-23 01:51:06.671276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.709 [2024-07-23 01:51:06.671454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.709 [2024-07-23 01:51:06.671482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:53.709 [2024-07-23 01:51:06.671500] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:53.709 [2024-07-23 01:51:06.671642] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:53.709 [2024-07-23 01:51:06.671848] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.709 [2024-07-23 01:51:06.671872] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.709 [2024-07-23 01:51:06.671889] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.709 [2024-07-23 01:51:06.674116] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.709 [2024-07-23 01:51:06.683553] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.709 [2024-07-23 01:51:06.683940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.709 [2024-07-23 01:51:06.684179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.709 [2024-07-23 01:51:06.684229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:53.709 [2024-07-23 01:51:06.684249] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:53.709 [2024-07-23 01:51:06.684433] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:53.709 [2024-07-23 01:51:06.684584] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.709 [2024-07-23 01:51:06.684609] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.709 [2024-07-23 01:51:06.684640] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.709 [2024-07-23 01:51:06.687047] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.709 [2024-07-23 01:51:06.696360] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.709 [2024-07-23 01:51:06.696728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.709 [2024-07-23 01:51:06.696949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.709 [2024-07-23 01:51:06.696979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:53.709 [2024-07-23 01:51:06.696997] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:53.709 [2024-07-23 01:51:06.697164] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:53.709 [2024-07-23 01:51:06.697352] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.709 [2024-07-23 01:51:06.697377] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.709 [2024-07-23 01:51:06.697394] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.709 [2024-07-23 01:51:06.699498] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.709 [2024-07-23 01:51:06.708881] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.709 [2024-07-23 01:51:06.709317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.709 [2024-07-23 01:51:06.709554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.709 [2024-07-23 01:51:06.709584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:53.709 [2024-07-23 01:51:06.709602] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:53.709 [2024-07-23 01:51:06.709740] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:53.709 [2024-07-23 01:51:06.709891] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.709 [2024-07-23 01:51:06.709915] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.709 [2024-07-23 01:51:06.709931] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.710 [2024-07-23 01:51:06.712102] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.710 [2024-07-23 01:51:06.721339] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.710 [2024-07-23 01:51:06.721660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.710 [2024-07-23 01:51:06.721870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.710 [2024-07-23 01:51:06.721898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:53.710 [2024-07-23 01:51:06.721916] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:53.710 [2024-07-23 01:51:06.722082] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:53.710 [2024-07-23 01:51:06.722233] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.710 [2024-07-23 01:51:06.722258] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.710 [2024-07-23 01:51:06.722274] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.710 [2024-07-23 01:51:06.724657] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.710 [2024-07-23 01:51:06.733806] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.710 [2024-07-23 01:51:06.734169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.710 [2024-07-23 01:51:06.734432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.710 [2024-07-23 01:51:06.734480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:53.710 [2024-07-23 01:51:06.734498] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:53.710 [2024-07-23 01:51:06.734695] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:53.710 [2024-07-23 01:51:06.734902] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.710 [2024-07-23 01:51:06.734927] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.710 [2024-07-23 01:51:06.734943] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.710 [2024-07-23 01:51:06.737295] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.710 [2024-07-23 01:51:06.746435] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.710 [2024-07-23 01:51:06.746851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.710 [2024-07-23 01:51:06.747111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.710 [2024-07-23 01:51:06.747141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:53.710 [2024-07-23 01:51:06.747159] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:53.710 [2024-07-23 01:51:06.747343] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:53.710 [2024-07-23 01:51:06.747512] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.710 [2024-07-23 01:51:06.747538] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.710 [2024-07-23 01:51:06.747554] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.710 [2024-07-23 01:51:06.749601] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.710 [2024-07-23 01:51:06.759043] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.710 [2024-07-23 01:51:06.759409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.710 [2024-07-23 01:51:06.759608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.710 [2024-07-23 01:51:06.759645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:53.710 [2024-07-23 01:51:06.759677] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:53.710 [2024-07-23 01:51:06.759856] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:53.710 [2024-07-23 01:51:06.759997] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.710 [2024-07-23 01:51:06.760023] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.710 [2024-07-23 01:51:06.760039] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.710 [2024-07-23 01:51:06.762409] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.710 [2024-07-23 01:51:06.771841] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.710 [2024-07-23 01:51:06.772207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.710 [2024-07-23 01:51:06.772385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.710 [2024-07-23 01:51:06.772432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:53.710 [2024-07-23 01:51:06.772451] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:53.710 [2024-07-23 01:51:06.772703] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:53.710 [2024-07-23 01:51:06.772857] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.710 [2024-07-23 01:51:06.772883] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.710 [2024-07-23 01:51:06.772899] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.710 [2024-07-23 01:51:06.775217] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.710 [2024-07-23 01:51:06.784447] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.710 [2024-07-23 01:51:06.784875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.710 [2024-07-23 01:51:06.785159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.710 [2024-07-23 01:51:06.785211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:53.710 [2024-07-23 01:51:06.785230] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:53.710 [2024-07-23 01:51:06.785378] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:53.710 [2024-07-23 01:51:06.785547] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.710 [2024-07-23 01:51:06.785571] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.710 [2024-07-23 01:51:06.785586] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.710 [2024-07-23 01:51:06.787823] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.710 [2024-07-23 01:51:06.797173] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.710 [2024-07-23 01:51:06.797674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.710 [2024-07-23 01:51:06.797865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.710 [2024-07-23 01:51:06.797894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:53.710 [2024-07-23 01:51:06.797911] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:53.710 [2024-07-23 01:51:06.798060] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:53.710 [2024-07-23 01:51:06.798211] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.710 [2024-07-23 01:51:06.798236] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.710 [2024-07-23 01:51:06.798253] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.710 [2024-07-23 01:51:06.800443] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.991 [2024-07-23 01:51:06.809715] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.991 [2024-07-23 01:51:06.810121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.991 [2024-07-23 01:51:06.810398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.991 [2024-07-23 01:51:06.810425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:53.991 [2024-07-23 01:51:06.810442] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:53.991 [2024-07-23 01:51:06.810650] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:53.991 [2024-07-23 01:51:06.810838] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.991 [2024-07-23 01:51:06.810864] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.991 [2024-07-23 01:51:06.810880] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.991 [2024-07-23 01:51:06.813200] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.991 [2024-07-23 01:51:06.822472] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.991 [2024-07-23 01:51:06.822847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.991 [2024-07-23 01:51:06.823064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.991 [2024-07-23 01:51:06.823099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:53.991 [2024-07-23 01:51:06.823119] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:53.991 [2024-07-23 01:51:06.823267] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:53.991 [2024-07-23 01:51:06.823454] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.991 [2024-07-23 01:51:06.823480] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.991 [2024-07-23 01:51:06.823496] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.991 [2024-07-23 01:51:06.825956] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.991 [2024-07-23 01:51:06.835201] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.991 [2024-07-23 01:51:06.835539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.991 [2024-07-23 01:51:06.835732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.991 [2024-07-23 01:51:06.835764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:53.991 [2024-07-23 01:51:06.835783] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:53.991 [2024-07-23 01:51:06.835932] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:53.991 [2024-07-23 01:51:06.836072] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.991 [2024-07-23 01:51:06.836096] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.991 [2024-07-23 01:51:06.836113] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.991 [2024-07-23 01:51:06.838249] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.991 [2024-07-23 01:51:06.847849] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.991 [2024-07-23 01:51:06.848362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.991 [2024-07-23 01:51:06.848547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.991 [2024-07-23 01:51:06.848577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:53.991 [2024-07-23 01:51:06.848594] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:53.991 [2024-07-23 01:51:06.848805] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:53.991 [2024-07-23 01:51:06.849011] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.991 [2024-07-23 01:51:06.849037] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.991 [2024-07-23 01:51:06.849054] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.991 [2024-07-23 01:51:06.851334] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.991 [2024-07-23 01:51:06.860548] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.991 [2024-07-23 01:51:06.860928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.991 [2024-07-23 01:51:06.861141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.991 [2024-07-23 01:51:06.861167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:53.991 [2024-07-23 01:51:06.861188] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:53.991 [2024-07-23 01:51:06.861352] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:53.991 [2024-07-23 01:51:06.861522] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.991 [2024-07-23 01:51:06.861548] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.991 [2024-07-23 01:51:06.861564] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.991 [2024-07-23 01:51:06.863700] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.991 [2024-07-23 01:51:06.873102] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.991 [2024-07-23 01:51:06.873460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.991 [2024-07-23 01:51:06.873768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.991 [2024-07-23 01:51:06.873818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:53.992 [2024-07-23 01:51:06.873837] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:53.992 [2024-07-23 01:51:06.874058] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:53.992 [2024-07-23 01:51:06.874245] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.992 [2024-07-23 01:51:06.874269] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.992 [2024-07-23 01:51:06.874285] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.992 [2024-07-23 01:51:06.876469] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.992 [2024-07-23 01:51:06.885793] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.992 [2024-07-23 01:51:06.886158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.992 [2024-07-23 01:51:06.886395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.992 [2024-07-23 01:51:06.886424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:53.992 [2024-07-23 01:51:06.886454] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:53.992 [2024-07-23 01:51:06.886663] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:53.992 [2024-07-23 01:51:06.886833] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.992 [2024-07-23 01:51:06.886858] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.992 [2024-07-23 01:51:06.886874] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.992 [2024-07-23 01:51:06.889323] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.992 [2024-07-23 01:51:06.898421] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.992 [2024-07-23 01:51:06.898783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.992 [2024-07-23 01:51:06.899080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.992 [2024-07-23 01:51:06.899133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:53.992 [2024-07-23 01:51:06.899150] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:53.992 [2024-07-23 01:51:06.899303] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:53.992 [2024-07-23 01:51:06.899436] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.992 [2024-07-23 01:51:06.899460] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.992 [2024-07-23 01:51:06.899476] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.992 [2024-07-23 01:51:06.901876] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.992 [2024-07-23 01:51:06.911170] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.992 [2024-07-23 01:51:06.911578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.992 [2024-07-23 01:51:06.911759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.992 [2024-07-23 01:51:06.911785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:53.992 [2024-07-23 01:51:06.911801] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:53.992 [2024-07-23 01:51:06.911950] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:53.992 [2024-07-23 01:51:06.912115] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.992 [2024-07-23 01:51:06.912138] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.992 [2024-07-23 01:51:06.912154] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.992 [2024-07-23 01:51:06.914486] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.992 [2024-07-23 01:51:06.923764] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.992 [2024-07-23 01:51:06.924268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.992 [2024-07-23 01:51:06.924486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.992 [2024-07-23 01:51:06.924518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:53.992 [2024-07-23 01:51:06.924536] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:53.992 [2024-07-23 01:51:06.924729] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:53.992 [2024-07-23 01:51:06.924882] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.992 [2024-07-23 01:51:06.924907] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.992 [2024-07-23 01:51:06.924923] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.992 [2024-07-23 01:51:06.927203] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.992 [2024-07-23 01:51:06.936205] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.992 [2024-07-23 01:51:06.936514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.992 [2024-07-23 01:51:06.936729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.992 [2024-07-23 01:51:06.936759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:53.992 [2024-07-23 01:51:06.936778] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:53.992 [2024-07-23 01:51:06.936980] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:53.992 [2024-07-23 01:51:06.937159] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.992 [2024-07-23 01:51:06.937184] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.992 [2024-07-23 01:51:06.937201] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.992 [2024-07-23 01:51:06.939409] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.992 [2024-07-23 01:51:06.948994] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.992 [2024-07-23 01:51:06.949372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.992 [2024-07-23 01:51:06.949518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.992 [2024-07-23 01:51:06.949546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:53.992 [2024-07-23 01:51:06.949563] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:53.992 [2024-07-23 01:51:06.949713] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:53.992 [2024-07-23 01:51:06.949870] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.992 [2024-07-23 01:51:06.949895] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.992 [2024-07-23 01:51:06.949912] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.992 [2024-07-23 01:51:06.952205] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.992 [2024-07-23 01:51:06.961238] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.992 [2024-07-23 01:51:06.961639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.992 [2024-07-23 01:51:06.961791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.992 [2024-07-23 01:51:06.961819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:53.992 [2024-07-23 01:51:06.961836] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:53.992 [2024-07-23 01:51:06.961957] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:53.992 [2024-07-23 01:51:06.962128] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.992 [2024-07-23 01:51:06.962154] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.992 [2024-07-23 01:51:06.962170] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.992 [2024-07-23 01:51:06.964484] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.992 [2024-07-23 01:51:06.973758] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.992 [2024-07-23 01:51:06.974164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.992 [2024-07-23 01:51:06.974422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.992 [2024-07-23 01:51:06.974469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:53.992 [2024-07-23 01:51:06.974486] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:53.992 [2024-07-23 01:51:06.974681] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:53.992 [2024-07-23 01:51:06.974853] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.992 [2024-07-23 01:51:06.974882] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.992 [2024-07-23 01:51:06.974906] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.992 [2024-07-23 01:51:06.977363] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.992 [2024-07-23 01:51:06.986363] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.992 [2024-07-23 01:51:06.986658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.992 [2024-07-23 01:51:06.986868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.993 [2024-07-23 01:51:06.986898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:53.993 [2024-07-23 01:51:06.986916] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:53.993 [2024-07-23 01:51:06.987083] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:53.993 [2024-07-23 01:51:06.987234] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.993 [2024-07-23 01:51:06.987259] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.993 [2024-07-23 01:51:06.987275] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.993 [2024-07-23 01:51:06.989754] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.993 [2024-07-23 01:51:06.999009] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.993 [2024-07-23 01:51:06.999406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.993 [2024-07-23 01:51:06.999626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.993 [2024-07-23 01:51:06.999654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:53.993 [2024-07-23 01:51:06.999671] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:53.993 [2024-07-23 01:51:06.999869] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:53.993 [2024-07-23 01:51:07.000070] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.993 [2024-07-23 01:51:07.000094] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.993 [2024-07-23 01:51:07.000112] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.993 [2024-07-23 01:51:07.002488] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.993 [2024-07-23 01:51:07.011580] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.993 [2024-07-23 01:51:07.011983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.993 [2024-07-23 01:51:07.012175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.993 [2024-07-23 01:51:07.012205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:53.993 [2024-07-23 01:51:07.012223] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:53.993 [2024-07-23 01:51:07.012372] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:53.993 [2024-07-23 01:51:07.012522] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.993 [2024-07-23 01:51:07.012547] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.993 [2024-07-23 01:51:07.012575] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.993 [2024-07-23 01:51:07.014913] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.993 [2024-07-23 01:51:07.024157] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.993 [2024-07-23 01:51:07.024484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.993 [2024-07-23 01:51:07.024638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.993 [2024-07-23 01:51:07.024674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:53.993 [2024-07-23 01:51:07.024690] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:53.993 [2024-07-23 01:51:07.024823] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:53.993 [2024-07-23 01:51:07.024983] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.993 [2024-07-23 01:51:07.025007] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.993 [2024-07-23 01:51:07.025022] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.993 [2024-07-23 01:51:07.027438] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.993 [2024-07-23 01:51:07.036741] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.993 [2024-07-23 01:51:07.037141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.993 [2024-07-23 01:51:07.037382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.993 [2024-07-23 01:51:07.037409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:53.993 [2024-07-23 01:51:07.037426] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:53.993 [2024-07-23 01:51:07.037541] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:53.993 [2024-07-23 01:51:07.037700] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.993 [2024-07-23 01:51:07.037722] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.993 [2024-07-23 01:51:07.037736] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.993 [2024-07-23 01:51:07.040246] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.993 [2024-07-23 01:51:07.049364] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.993 [2024-07-23 01:51:07.049683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.993 [2024-07-23 01:51:07.049831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.993 [2024-07-23 01:51:07.049858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:53.993 [2024-07-23 01:51:07.049874] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:53.993 [2024-07-23 01:51:07.050041] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:53.993 [2024-07-23 01:51:07.050211] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.993 [2024-07-23 01:51:07.050237] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.993 [2024-07-23 01:51:07.050254] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.993 [2024-07-23 01:51:07.052726] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:53.993 [2024-07-23 01:51:07.062131] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:53.993 [2024-07-23 01:51:07.062525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.993 [2024-07-23 01:51:07.062706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.993 [2024-07-23 01:51:07.062733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:53.993 [2024-07-23 01:51:07.062749] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:53.993 [2024-07-23 01:51:07.062882] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:53.993 [2024-07-23 01:51:07.063048] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:53.993 [2024-07-23 01:51:07.063071] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:53.993 [2024-07-23 01:51:07.063087] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:53.993 [2024-07-23 01:51:07.065408] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.256 [2024-07-23 01:51:07.074884] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.256 [2024-07-23 01:51:07.075254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.256 [2024-07-23 01:51:07.075418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.256 [2024-07-23 01:51:07.075444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:54.256 [2024-07-23 01:51:07.075460] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:54.256 [2024-07-23 01:51:07.075608] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:54.256 [2024-07-23 01:51:07.075736] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.257 [2024-07-23 01:51:07.075758] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.257 [2024-07-23 01:51:07.075772] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.257 [2024-07-23 01:51:07.078121] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.257 [2024-07-23 01:51:07.087305] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.257 [2024-07-23 01:51:07.087741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.257 [2024-07-23 01:51:07.087896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.257 [2024-07-23 01:51:07.087954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:54.257 [2024-07-23 01:51:07.087972] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:54.257 [2024-07-23 01:51:07.088140] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:54.257 [2024-07-23 01:51:07.088329] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.257 [2024-07-23 01:51:07.088354] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.257 [2024-07-23 01:51:07.088370] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.257 [2024-07-23 01:51:07.090875] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.257 [2024-07-23 01:51:07.099964] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.257 [2024-07-23 01:51:07.100385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.257 [2024-07-23 01:51:07.100631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.257 [2024-07-23 01:51:07.100687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:54.257 [2024-07-23 01:51:07.100704] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:54.257 [2024-07-23 01:51:07.100854] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:54.257 [2024-07-23 01:51:07.101020] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.257 [2024-07-23 01:51:07.101045] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.257 [2024-07-23 01:51:07.101061] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.257 [2024-07-23 01:51:07.103256] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.257 [2024-07-23 01:51:07.112680] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.257 [2024-07-23 01:51:07.113040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.257 [2024-07-23 01:51:07.113249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.257 [2024-07-23 01:51:07.113296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:54.257 [2024-07-23 01:51:07.113314] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:54.257 [2024-07-23 01:51:07.113534] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:54.257 [2024-07-23 01:51:07.113741] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.257 [2024-07-23 01:51:07.113765] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.257 [2024-07-23 01:51:07.113780] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.257 [2024-07-23 01:51:07.116122] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.257 [2024-07-23 01:51:07.125180] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.257 [2024-07-23 01:51:07.125536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.257 [2024-07-23 01:51:07.125741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.257 [2024-07-23 01:51:07.125768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:54.257 [2024-07-23 01:51:07.125785] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:54.257 [2024-07-23 01:51:07.126000] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:54.257 [2024-07-23 01:51:07.126116] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.257 [2024-07-23 01:51:07.126140] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.257 [2024-07-23 01:51:07.126156] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.257 [2024-07-23 01:51:07.128645] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.257 [2024-07-23 01:51:07.137849] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.257 [2024-07-23 01:51:07.138291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.257 [2024-07-23 01:51:07.138477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.257 [2024-07-23 01:51:07.138505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:54.257 [2024-07-23 01:51:07.138523] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:54.257 [2024-07-23 01:51:07.138756] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:54.257 [2024-07-23 01:51:07.138951] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.257 [2024-07-23 01:51:07.138977] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.257 [2024-07-23 01:51:07.138993] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.257 [2024-07-23 01:51:07.141280] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.257 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3902963 Killed "${NVMF_APP[@]}" "$@" 00:29:54.257 01:51:07 -- host/bdevperf.sh@36 -- # tgt_init 00:29:54.257 01:51:07 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:29:54.257 01:51:07 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:29:54.257 01:51:07 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:54.257 01:51:07 -- common/autotest_common.sh@10 -- # set +x 00:29:54.257 01:51:07 -- nvmf/common.sh@469 -- # nvmfpid=3904184 00:29:54.257 01:51:07 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:54.257 01:51:07 -- nvmf/common.sh@470 -- # waitforlisten 3904184 00:29:54.257 01:51:07 -- common/autotest_common.sh@819 -- # '[' -z 3904184 ']' 00:29:54.257 01:51:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:54.257 01:51:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:54.257 01:51:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:54.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:54.257 01:51:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:54.257 01:51:07 -- common/autotest_common.sh@10 -- # set +x 00:29:54.257 [2024-07-23 01:51:07.150478] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.257 [2024-07-23 01:51:07.150800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.257 [2024-07-23 01:51:07.150970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.257 [2024-07-23 01:51:07.150997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:54.257 [2024-07-23 01:51:07.151014] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:54.257 [2024-07-23 01:51:07.151202] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:54.257 [2024-07-23 01:51:07.151390] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.257 [2024-07-23 01:51:07.151414] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.257 [2024-07-23 01:51:07.151430] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.257 [2024-07-23 01:51:07.153878] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.257 [2024-07-23 01:51:07.163146] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.257 [2024-07-23 01:51:07.163556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.257 [2024-07-23 01:51:07.163749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.257 [2024-07-23 01:51:07.163781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:54.257 [2024-07-23 01:51:07.163798] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:54.257 [2024-07-23 01:51:07.163984] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:54.257 [2024-07-23 01:51:07.164131] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.257 [2024-07-23 01:51:07.164154] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.257 [2024-07-23 01:51:07.164169] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.257 [2024-07-23 01:51:07.166551] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.257 [2024-07-23 01:51:07.175520] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.257 [2024-07-23 01:51:07.175865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.257 [2024-07-23 01:51:07.176635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.257 [2024-07-23 01:51:07.176669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:54.258 [2024-07-23 01:51:07.176687] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:54.258 [2024-07-23 01:51:07.176839] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:54.258 [2024-07-23 01:51:07.177021] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.258 [2024-07-23 01:51:07.177042] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.258 [2024-07-23 01:51:07.177056] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.258 [2024-07-23 01:51:07.179239] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.258 [2024-07-23 01:51:07.187731] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.258 [2024-07-23 01:51:07.188105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.258 [2024-07-23 01:51:07.188302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.258 [2024-07-23 01:51:07.188343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:54.258 [2024-07-23 01:51:07.188361] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:54.258 [2024-07-23 01:51:07.188570] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:54.258 [2024-07-23 01:51:07.188770] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.258 [2024-07-23 01:51:07.188793] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.258 [2024-07-23 01:51:07.188809] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.258 [2024-07-23 01:51:07.190837] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:54.258 [2024-07-23 01:51:07.192692] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:29:54.258 [2024-07-23 01:51:07.192754] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:54.258 [2024-07-23 01:51:07.200021] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.258 [2024-07-23 01:51:07.200420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.258 [2024-07-23 01:51:07.200637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.258 [2024-07-23 01:51:07.200664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:54.258 [2024-07-23 01:51:07.200681] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:54.258 [2024-07-23 01:51:07.200813] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:54.258 [2024-07-23 01:51:07.201005] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.258 [2024-07-23 01:51:07.201024] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.258 [2024-07-23 01:51:07.201038] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.258 [2024-07-23 01:51:07.203129] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.258 [2024-07-23 01:51:07.212120] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.258 [2024-07-23 01:51:07.212483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.258 [2024-07-23 01:51:07.212689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.258 [2024-07-23 01:51:07.212716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:54.258 [2024-07-23 01:51:07.212732] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:54.258 [2024-07-23 01:51:07.212914] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:54.258 [2024-07-23 01:51:07.213039] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.258 [2024-07-23 01:51:07.213059] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.258 [2024-07-23 01:51:07.213072] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.258 [2024-07-23 01:51:07.215152] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.258 [2024-07-23 01:51:07.224323] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:54.258 [2024-07-23 01:51:07.224748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.258 [2024-07-23 01:51:07.224928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.258 [2024-07-23 01:51:07.224957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:54.258 [2024-07-23 01:51:07.224973] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:54.258 [2024-07-23 01:51:07.225154] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:54.258 [2024-07-23 01:51:07.225299] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:54.258 [2024-07-23 01:51:07.225320] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:54.258 [2024-07-23 01:51:07.225333] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:54.258 [2024-07-23 01:51:07.227642] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:54.258 EAL: No free 2048 kB hugepages reported on node 1
00:29:54.258 [2024-07-23 01:51:07.236702] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.258 [2024-07-23 01:51:07.237092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.258 [2024-07-23 01:51:07.237339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.258 [2024-07-23 01:51:07.237368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:54.258 [2024-07-23 01:51:07.237386] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:54.258 [2024-07-23 01:51:07.237552] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:54.258 [2024-07-23 01:51:07.237723] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.258 [2024-07-23 01:51:07.237745] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.258 [2024-07-23 01:51:07.237761] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.258 [2024-07-23 01:51:07.240133] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.258 [2024-07-23 01:51:07.249262] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.258 [2024-07-23 01:51:07.249711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.258 [2024-07-23 01:51:07.249868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.258 [2024-07-23 01:51:07.249894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:54.258 [2024-07-23 01:51:07.249930] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:54.258 [2024-07-23 01:51:07.250096] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:54.258 [2024-07-23 01:51:07.250302] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.258 [2024-07-23 01:51:07.250326] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.258 [2024-07-23 01:51:07.250342] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.258 [2024-07-23 01:51:07.252484] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.258 [2024-07-23 01:51:07.261804] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.258 [2024-07-23 01:51:07.262177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.258 [2024-07-23 01:51:07.262379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.258 [2024-07-23 01:51:07.262406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:54.258 [2024-07-23 01:51:07.262425] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:54.258 [2024-07-23 01:51:07.262590] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:54.258 [2024-07-23 01:51:07.262810] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.258 [2024-07-23 01:51:07.262833] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.258 [2024-07-23 01:51:07.262848] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.258 [2024-07-23 01:51:07.265186] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.258 [2024-07-23 01:51:07.265296] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3
00:29:54.258 [2024-07-23 01:51:07.274210] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.258 [2024-07-23 01:51:07.274791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.258 [2024-07-23 01:51:07.275005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.258 [2024-07-23 01:51:07.275049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:54.258 [2024-07-23 01:51:07.275071] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:54.258 [2024-07-23 01:51:07.275262] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:54.258 [2024-07-23 01:51:07.275437] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.259 [2024-07-23 01:51:07.275462] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.259 [2024-07-23 01:51:07.275480] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.259 [2024-07-23 01:51:07.277812] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.259 [2024-07-23 01:51:07.286823] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.259 [2024-07-23 01:51:07.287278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.259 [2024-07-23 01:51:07.287495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.259 [2024-07-23 01:51:07.287522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:54.259 [2024-07-23 01:51:07.287540] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:54.259 [2024-07-23 01:51:07.287733] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:54.259 [2024-07-23 01:51:07.287949] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.259 [2024-07-23 01:51:07.287973] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.259 [2024-07-23 01:51:07.287991] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.259 [2024-07-23 01:51:07.290346] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.259 [2024-07-23 01:51:07.299412] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.259 [2024-07-23 01:51:07.299849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.259 [2024-07-23 01:51:07.300034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.259 [2024-07-23 01:51:07.300061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:54.259 [2024-07-23 01:51:07.300077] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:54.259 [2024-07-23 01:51:07.300311] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:54.259 [2024-07-23 01:51:07.300499] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.259 [2024-07-23 01:51:07.300524] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.259 [2024-07-23 01:51:07.300540] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.259 [2024-07-23 01:51:07.302843] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.259 [2024-07-23 01:51:07.311941] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.259 [2024-07-23 01:51:07.312396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.259 [2024-07-23 01:51:07.312626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.259 [2024-07-23 01:51:07.312671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:54.259 [2024-07-23 01:51:07.312696] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:54.259 [2024-07-23 01:51:07.312863] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:54.259 [2024-07-23 01:51:07.313050] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.259 [2024-07-23 01:51:07.313075] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.259 [2024-07-23 01:51:07.313092] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.259 [2024-07-23 01:51:07.315487] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.259 [2024-07-23 01:51:07.324531] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.259 [2024-07-23 01:51:07.325223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.259 [2024-07-23 01:51:07.325464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.259 [2024-07-23 01:51:07.325507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:54.259 [2024-07-23 01:51:07.325528] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:54.259 [2024-07-23 01:51:07.325712] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:54.259 [2024-07-23 01:51:07.325871] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.259 [2024-07-23 01:51:07.325924] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.259 [2024-07-23 01:51:07.325944] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.259 [2024-07-23 01:51:07.328388] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.259 [2024-07-23 01:51:07.337139] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.259 [2024-07-23 01:51:07.337530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.259 [2024-07-23 01:51:07.337751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.259 [2024-07-23 01:51:07.337778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:54.259 [2024-07-23 01:51:07.337796] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:54.259 [2024-07-23 01:51:07.337975] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:54.259 [2024-07-23 01:51:07.338137] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.259 [2024-07-23 01:51:07.338162] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.259 [2024-07-23 01:51:07.338179] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.259 [2024-07-23 01:51:07.340587] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.259 [2024-07-23 01:51:07.349556] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.259 [2024-07-23 01:51:07.349980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.259 [2024-07-23 01:51:07.350169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.259 [2024-07-23 01:51:07.350196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:54.259 [2024-07-23 01:51:07.350213] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:54.259 [2024-07-23 01:51:07.350389] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:54.259 [2024-07-23 01:51:07.350541] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.259 [2024-07-23 01:51:07.350565] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.259 [2024-07-23 01:51:07.350581] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.259 [2024-07-23 01:51:07.352853] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.519 [2024-07-23 01:51:07.360100] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:29:54.519 [2024-07-23 01:51:07.360242] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:29:54.519 [2024-07-23 01:51:07.360261] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:29:54.519 [2024-07-23 01:51:07.360275] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:29:54.519 [2024-07-23 01:51:07.360336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:29:54.519 [2024-07-23 01:51:07.360375] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:29:54.519 [2024-07-23 01:51:07.360378] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:29:54.519 [2024-07-23 01:51:07.361783] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.519 [2024-07-23 01:51:07.362217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.519 [2024-07-23 01:51:07.362403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.519 [2024-07-23 01:51:07.362429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:54.519 [2024-07-23 01:51:07.362446] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:54.519 [2024-07-23 01:51:07.362580] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:54.519 [2024-07-23 01:51:07.362743] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.519 [2024-07-23 01:51:07.362765] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.519 [2024-07-23 01:51:07.362781] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.519 [2024-07-23 01:51:07.365034] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.519 [2024-07-23 01:51:07.374041] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.519 [2024-07-23 01:51:07.374559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.519 [2024-07-23 01:51:07.374741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.519 [2024-07-23 01:51:07.374769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:54.519 [2024-07-23 01:51:07.374789] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:54.519 [2024-07-23 01:51:07.374948] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:54.519 [2024-07-23 01:51:07.375175] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.519 [2024-07-23 01:51:07.375197] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.519 [2024-07-23 01:51:07.375213] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.519 [2024-07-23 01:51:07.377444] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.519 [2024-07-23 01:51:07.386494] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.519 [2024-07-23 01:51:07.387003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.519 [2024-07-23 01:51:07.387259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.519 [2024-07-23 01:51:07.387300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:54.519 [2024-07-23 01:51:07.387319] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:54.519 [2024-07-23 01:51:07.387551] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:54.519 [2024-07-23 01:51:07.387759] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.519 [2024-07-23 01:51:07.387782] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.519 [2024-07-23 01:51:07.387799] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.519 [2024-07-23 01:51:07.389953] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.519 [2024-07-23 01:51:07.398876] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.519 [2024-07-23 01:51:07.399469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.519 [2024-07-23 01:51:07.399673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.519 [2024-07-23 01:51:07.399701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:54.519 [2024-07-23 01:51:07.399720] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:54.519 [2024-07-23 01:51:07.399896] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:54.519 [2024-07-23 01:51:07.400089] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.519 [2024-07-23 01:51:07.400110] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.519 [2024-07-23 01:51:07.400126] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.519 [2024-07-23 01:51:07.402209] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.519 [2024-07-23 01:51:07.411218] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.519 [2024-07-23 01:51:07.411748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.519 [2024-07-23 01:51:07.411955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.519 [2024-07-23 01:51:07.411983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:54.519 [2024-07-23 01:51:07.412002] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:54.519 [2024-07-23 01:51:07.412177] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:54.519 [2024-07-23 01:51:07.412325] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.519 [2024-07-23 01:51:07.412347] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.519 [2024-07-23 01:51:07.412363] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.519 [2024-07-23 01:51:07.414411] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.519 [2024-07-23 01:51:07.423691] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.519 [2024-07-23 01:51:07.424241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.519 [2024-07-23 01:51:07.424417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.519 [2024-07-23 01:51:07.424444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:54.519 [2024-07-23 01:51:07.424464] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:54.519 [2024-07-23 01:51:07.424679] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:54.519 [2024-07-23 01:51:07.424853] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.519 [2024-07-23 01:51:07.424876] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.519 [2024-07-23 01:51:07.424902] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.519 [2024-07-23 01:51:07.426806] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.519 [2024-07-23 01:51:07.436363] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.519 [2024-07-23 01:51:07.436842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.519 [2024-07-23 01:51:07.437027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.519 [2024-07-23 01:51:07.437053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:54.519 [2024-07-23 01:51:07.437072] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:54.519 [2024-07-23 01:51:07.437211] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:54.519 [2024-07-23 01:51:07.437391] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.520 [2024-07-23 01:51:07.437412] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.520 [2024-07-23 01:51:07.437428] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.520 [2024-07-23 01:51:07.439553] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.520 [2024-07-23 01:51:07.448715] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.520 [2024-07-23 01:51:07.449006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.520 [2024-07-23 01:51:07.449203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.520 [2024-07-23 01:51:07.449232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:54.520 [2024-07-23 01:51:07.449249] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:54.520 [2024-07-23 01:51:07.449368] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:54.520 [2024-07-23 01:51:07.449530] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.520 [2024-07-23 01:51:07.449551] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.520 [2024-07-23 01:51:07.449565] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.520 [2024-07-23 01:51:07.451510] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.520 [2024-07-23 01:51:07.460933] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.520 [2024-07-23 01:51:07.461329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.520 [2024-07-23 01:51:07.461510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.520 [2024-07-23 01:51:07.461538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:54.520 [2024-07-23 01:51:07.461555] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:54.520 [2024-07-23 01:51:07.461764] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:54.520 [2024-07-23 01:51:07.461913] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.520 [2024-07-23 01:51:07.461949] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.520 [2024-07-23 01:51:07.461963] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.520 [2024-07-23 01:51:07.463945] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.520 [2024-07-23 01:51:07.473208] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.520 [2024-07-23 01:51:07.473507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.520 [2024-07-23 01:51:07.473713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.520 [2024-07-23 01:51:07.473742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:54.520 [2024-07-23 01:51:07.473758] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:54.520 [2024-07-23 01:51:07.473891] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:54.520 [2024-07-23 01:51:07.474022] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.520 [2024-07-23 01:51:07.474043] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.520 [2024-07-23 01:51:07.474057] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.520 [2024-07-23 01:51:07.476220] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.520 [2024-07-23 01:51:07.485412] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.520 [2024-07-23 01:51:07.485779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.520 [2024-07-23 01:51:07.485920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.520 [2024-07-23 01:51:07.485947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:54.520 [2024-07-23 01:51:07.485964] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:54.520 [2024-07-23 01:51:07.486128] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:54.520 [2024-07-23 01:51:07.486274] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.520 [2024-07-23 01:51:07.486295] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.520 [2024-07-23 01:51:07.486310] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.520 [2024-07-23 01:51:07.488406] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.520 [2024-07-23 01:51:07.497906] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.520 [2024-07-23 01:51:07.498267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.520 [2024-07-23 01:51:07.498412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.520 [2024-07-23 01:51:07.498439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:54.520 [2024-07-23 01:51:07.498461] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:54.520 [2024-07-23 01:51:07.498592] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:54.520 [2024-07-23 01:51:07.498733] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.520 [2024-07-23 01:51:07.498756] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.520 [2024-07-23 01:51:07.498770] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.520 [2024-07-23 01:51:07.500683] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.520 [2024-07-23 01:51:07.510228] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.520 [2024-07-23 01:51:07.510558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.520 [2024-07-23 01:51:07.510738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.520 [2024-07-23 01:51:07.510766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:54.520 [2024-07-23 01:51:07.510782] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:54.520 [2024-07-23 01:51:07.510929] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:54.520 [2024-07-23 01:51:07.511076] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.520 [2024-07-23 01:51:07.511097] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.520 [2024-07-23 01:51:07.511111] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.520 [2024-07-23 01:51:07.513237] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.520 [2024-07-23 01:51:07.522425] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.520 [2024-07-23 01:51:07.522759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.520 [2024-07-23 01:51:07.522908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.520 [2024-07-23 01:51:07.522934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:54.520 [2024-07-23 01:51:07.522950] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:54.520 [2024-07-23 01:51:07.523130] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:54.520 [2024-07-23 01:51:07.523276] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.520 [2024-07-23 01:51:07.523297] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.520 [2024-07-23 01:51:07.523310] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.520 [2024-07-23 01:51:07.525343] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.520 [2024-07-23 01:51:07.534769] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.520 [2024-07-23 01:51:07.535106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.520 [2024-07-23 01:51:07.535247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.520 [2024-07-23 01:51:07.535274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:54.520 [2024-07-23 01:51:07.535291] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:54.520 [2024-07-23 01:51:07.535443] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:54.520 [2024-07-23 01:51:07.535663] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.520 [2024-07-23 01:51:07.535685] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.520 [2024-07-23 01:51:07.535700] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.520 [2024-07-23 01:51:07.537572] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.520 [2024-07-23 01:51:07.547166] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.520 [2024-07-23 01:51:07.547432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.520 [2024-07-23 01:51:07.547638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.520 [2024-07-23 01:51:07.547666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:54.520 [2024-07-23 01:51:07.547682] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:54.520 [2024-07-23 01:51:07.547863] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:54.521 [2024-07-23 01:51:07.548040] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.521 [2024-07-23 01:51:07.548061] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.521 [2024-07-23 01:51:07.548074] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.521 [2024-07-23 01:51:07.550085] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.521 [2024-07-23 01:51:07.559425] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.521 [2024-07-23 01:51:07.559773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.521 [2024-07-23 01:51:07.559949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.521 [2024-07-23 01:51:07.559976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:54.521 [2024-07-23 01:51:07.559996] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:54.521 [2024-07-23 01:51:07.560128] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:54.521 [2024-07-23 01:51:07.560272] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.521 [2024-07-23 01:51:07.560293] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.521 [2024-07-23 01:51:07.560307] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.521 [2024-07-23 01:51:07.562343] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.521 [2024-07-23 01:51:07.571986] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.521 [2024-07-23 01:51:07.572316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.521 [2024-07-23 01:51:07.572487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.521 [2024-07-23 01:51:07.572514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:54.521 [2024-07-23 01:51:07.572531] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:54.521 [2024-07-23 01:51:07.572691] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:54.521 [2024-07-23 01:51:07.572847] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.521 [2024-07-23 01:51:07.572869] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.521 [2024-07-23 01:51:07.572884] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.521 [2024-07-23 01:51:07.575039] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.521 [2024-07-23 01:51:07.584320] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.521 [2024-07-23 01:51:07.584673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.521 [2024-07-23 01:51:07.584845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.521 [2024-07-23 01:51:07.584871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:54.521 [2024-07-23 01:51:07.584887] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:54.521 [2024-07-23 01:51:07.585020] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:54.521 [2024-07-23 01:51:07.585212] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.521 [2024-07-23 01:51:07.585233] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.521 [2024-07-23 01:51:07.585247] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.521 [2024-07-23 01:51:07.587326] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.521 [2024-07-23 01:51:07.596736] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.521 [2024-07-23 01:51:07.597049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.521 [2024-07-23 01:51:07.597219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.521 [2024-07-23 01:51:07.597246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:54.521 [2024-07-23 01:51:07.597262] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:54.521 [2024-07-23 01:51:07.597395] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:54.521 [2024-07-23 01:51:07.597556] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.521 [2024-07-23 01:51:07.597577] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.521 [2024-07-23 01:51:07.597605] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.521 [2024-07-23 01:51:07.599778] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.521 [2024-07-23 01:51:07.609133] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.521 [2024-07-23 01:51:07.609477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.521 [2024-07-23 01:51:07.609670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.521 [2024-07-23 01:51:07.609698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:54.521 [2024-07-23 01:51:07.609715] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:54.521 [2024-07-23 01:51:07.609862] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:54.521 [2024-07-23 01:51:07.610009] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.521 [2024-07-23 01:51:07.610035] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.521 [2024-07-23 01:51:07.610049] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.521 [2024-07-23 01:51:07.612058] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.780 [2024-07-23 01:51:07.621366] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.780 [2024-07-23 01:51:07.621685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.780 [2024-07-23 01:51:07.621883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.780 [2024-07-23 01:51:07.621919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:54.780 [2024-07-23 01:51:07.621935] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:54.780 [2024-07-23 01:51:07.622114] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:54.780 [2024-07-23 01:51:07.622296] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.780 [2024-07-23 01:51:07.622319] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.780 [2024-07-23 01:51:07.622334] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.780 [2024-07-23 01:51:07.624296] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.780 [2024-07-23 01:51:07.633787] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.780 [2024-07-23 01:51:07.634163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.781 [2024-07-23 01:51:07.634323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.781 [2024-07-23 01:51:07.634347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:54.781 [2024-07-23 01:51:07.634363] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:54.781 [2024-07-23 01:51:07.634556] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:54.781 [2024-07-23 01:51:07.634728] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.781 [2024-07-23 01:51:07.634751] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.781 [2024-07-23 01:51:07.634766] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.781 [2024-07-23 01:51:07.636694] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.781 [2024-07-23 01:51:07.646047] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.781 [2024-07-23 01:51:07.646403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.781 [2024-07-23 01:51:07.646600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.781 [2024-07-23 01:51:07.646634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:54.781 [2024-07-23 01:51:07.646651] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:54.781 [2024-07-23 01:51:07.646817] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:54.781 [2024-07-23 01:51:07.646998] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.781 [2024-07-23 01:51:07.647019] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.781 [2024-07-23 01:51:07.647037] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.781 [2024-07-23 01:51:07.649089] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.781 [2024-07-23 01:51:07.658246] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.781 [2024-07-23 01:51:07.658622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.781 [2024-07-23 01:51:07.658789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.781 [2024-07-23 01:51:07.658815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:54.781 [2024-07-23 01:51:07.658831] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:54.781 [2024-07-23 01:51:07.659010] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:54.781 [2024-07-23 01:51:07.659187] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.781 [2024-07-23 01:51:07.659208] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.781 [2024-07-23 01:51:07.659221] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.781 [2024-07-23 01:51:07.661164] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.781 [2024-07-23 01:51:07.670637] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.781 [2024-07-23 01:51:07.670982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.781 [2024-07-23 01:51:07.671127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.781 [2024-07-23 01:51:07.671153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:54.781 [2024-07-23 01:51:07.671169] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:54.781 [2024-07-23 01:51:07.671335] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:54.781 [2024-07-23 01:51:07.671528] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.781 [2024-07-23 01:51:07.671549] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.781 [2024-07-23 01:51:07.671563] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.781 [2024-07-23 01:51:07.673491] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.781 [2024-07-23 01:51:07.682979] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.781 [2024-07-23 01:51:07.683338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.781 [2024-07-23 01:51:07.683517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.781 [2024-07-23 01:51:07.683543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:54.781 [2024-07-23 01:51:07.683560] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:54.781 [2024-07-23 01:51:07.683759] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:54.781 [2024-07-23 01:51:07.683944] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.781 [2024-07-23 01:51:07.683965] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.781 [2024-07-23 01:51:07.683985] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.781 [2024-07-23 01:51:07.686092] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.781 [2024-07-23 01:51:07.695161] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.781 [2024-07-23 01:51:07.695500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.781 [2024-07-23 01:51:07.695651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.781 [2024-07-23 01:51:07.695679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:54.781 [2024-07-23 01:51:07.695695] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:54.781 [2024-07-23 01:51:07.695829] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:54.781 [2024-07-23 01:51:07.695978] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.781 [2024-07-23 01:51:07.695999] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.781 [2024-07-23 01:51:07.696013] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.781 [2024-07-23 01:51:07.698016] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.781 [2024-07-23 01:51:07.707408] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.781 [2024-07-23 01:51:07.707747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.781 [2024-07-23 01:51:07.707941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.781 [2024-07-23 01:51:07.707967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:54.781 [2024-07-23 01:51:07.707983] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:54.781 [2024-07-23 01:51:07.708181] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:54.781 [2024-07-23 01:51:07.708371] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.781 [2024-07-23 01:51:07.708392] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.781 [2024-07-23 01:51:07.708406] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.781 [2024-07-23 01:51:07.710320] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.781 [2024-07-23 01:51:07.719639] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.781 [2024-07-23 01:51:07.720009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.781 [2024-07-23 01:51:07.720207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.781 [2024-07-23 01:51:07.720233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:54.781 [2024-07-23 01:51:07.720249] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:54.781 [2024-07-23 01:51:07.720445] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:54.781 [2024-07-23 01:51:07.720605] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.781 [2024-07-23 01:51:07.720650] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.781 [2024-07-23 01:51:07.720665] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.781 [2024-07-23 01:51:07.722666] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.781 [2024-07-23 01:51:07.731973] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.781 [2024-07-23 01:51:07.732291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.781 [2024-07-23 01:51:07.732491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.781 [2024-07-23 01:51:07.732517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:54.781 [2024-07-23 01:51:07.732533] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:54.781 [2024-07-23 01:51:07.732677] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:54.782 [2024-07-23 01:51:07.732875] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.782 [2024-07-23 01:51:07.732897] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.782 [2024-07-23 01:51:07.732924] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.782 [2024-07-23 01:51:07.734863] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.782 [2024-07-23 01:51:07.744195] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.782 [2024-07-23 01:51:07.744510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.782 [2024-07-23 01:51:07.744707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.782 [2024-07-23 01:51:07.744735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:54.782 [2024-07-23 01:51:07.744750] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:54.782 [2024-07-23 01:51:07.744946] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:54.782 [2024-07-23 01:51:07.745091] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.782 [2024-07-23 01:51:07.745113] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.782 [2024-07-23 01:51:07.745127] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.782 [2024-07-23 01:51:07.747239] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.782 [2024-07-23 01:51:07.756502] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.782 [2024-07-23 01:51:07.756878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.782 [2024-07-23 01:51:07.757051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.782 [2024-07-23 01:51:07.757077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:54.782 [2024-07-23 01:51:07.757093] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:54.782 [2024-07-23 01:51:07.757257] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:54.782 [2024-07-23 01:51:07.757417] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.782 [2024-07-23 01:51:07.757438] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.782 [2024-07-23 01:51:07.757452] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.782 [2024-07-23 01:51:07.759408] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.782 [2024-07-23 01:51:07.768836] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.782 [2024-07-23 01:51:07.769181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.782 [2024-07-23 01:51:07.769385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.782 [2024-07-23 01:51:07.769411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:54.782 [2024-07-23 01:51:07.769427] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:54.782 [2024-07-23 01:51:07.769561] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:54.782 [2024-07-23 01:51:07.769750] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.782 [2024-07-23 01:51:07.769772] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.782 [2024-07-23 01:51:07.769787] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.782 [2024-07-23 01:51:07.771875] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.782 [2024-07-23 01:51:07.781091] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.782 [2024-07-23 01:51:07.781505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.782 [2024-07-23 01:51:07.781654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.782 [2024-07-23 01:51:07.781682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:54.782 [2024-07-23 01:51:07.781698] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:54.782 [2024-07-23 01:51:07.781847] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:54.782 [2024-07-23 01:51:07.782008] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.782 [2024-07-23 01:51:07.782030] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.782 [2024-07-23 01:51:07.782043] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.782 [2024-07-23 01:51:07.784287] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.782 [2024-07-23 01:51:07.793413] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.782 [2024-07-23 01:51:07.793719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.782 [2024-07-23 01:51:07.793886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.782 [2024-07-23 01:51:07.793925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:54.782 [2024-07-23 01:51:07.793941] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:54.782 [2024-07-23 01:51:07.794120] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:54.782 [2024-07-23 01:51:07.794281] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.782 [2024-07-23 01:51:07.794303] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.782 [2024-07-23 01:51:07.794321] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.782 [2024-07-23 01:51:07.796324] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.782 [2024-07-23 01:51:07.805875] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.782 [2024-07-23 01:51:07.806215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.782 [2024-07-23 01:51:07.806418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.782 [2024-07-23 01:51:07.806444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:54.782 [2024-07-23 01:51:07.806461] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:54.782 [2024-07-23 01:51:07.806665] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:54.782 [2024-07-23 01:51:07.806816] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.782 [2024-07-23 01:51:07.806838] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.782 [2024-07-23 01:51:07.806853] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.782 [2024-07-23 01:51:07.809101] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.782 [2024-07-23 01:51:07.818108] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.782 [2024-07-23 01:51:07.818394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.782 [2024-07-23 01:51:07.818577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.782 [2024-07-23 01:51:07.818603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:54.782 [2024-07-23 01:51:07.818629] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:54.782 [2024-07-23 01:51:07.818811] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:54.782 [2024-07-23 01:51:07.818986] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.782 [2024-07-23 01:51:07.819008] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.782 [2024-07-23 01:51:07.819021] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.782 [2024-07-23 01:51:07.820981] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.782 [2024-07-23 01:51:07.830500] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.782 [2024-07-23 01:51:07.830842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.782 [2024-07-23 01:51:07.831044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.782 [2024-07-23 01:51:07.831071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:54.782 [2024-07-23 01:51:07.831087] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:54.782 [2024-07-23 01:51:07.831235] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:54.782 [2024-07-23 01:51:07.831396] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.782 [2024-07-23 01:51:07.831424] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.782 [2024-07-23 01:51:07.831438] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.782 [2024-07-23 01:51:07.833430] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.782 [2024-07-23 01:51:07.842745] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.782 [2024-07-23 01:51:07.843102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.782 [2024-07-23 01:51:07.843275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.782 [2024-07-23 01:51:07.843301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:54.783 [2024-07-23 01:51:07.843322] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:54.783 [2024-07-23 01:51:07.843472] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:54.783 [2024-07-23 01:51:07.843642] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.783 [2024-07-23 01:51:07.843664] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.783 [2024-07-23 01:51:07.843679] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.783 [2024-07-23 01:51:07.845830] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.783 [2024-07-23 01:51:07.854843] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.783 [2024-07-23 01:51:07.855264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.783 [2024-07-23 01:51:07.855429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.783 [2024-07-23 01:51:07.855455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:54.783 [2024-07-23 01:51:07.855471] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:54.783 [2024-07-23 01:51:07.855628] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:54.783 [2024-07-23 01:51:07.855763] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.783 [2024-07-23 01:51:07.855785] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.783 [2024-07-23 01:51:07.855799] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.783 [2024-07-23 01:51:07.857849] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:54.783 [2024-07-23 01:51:07.867262] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:54.783 [2024-07-23 01:51:07.867658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.783 [2024-07-23 01:51:07.867799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.783 [2024-07-23 01:51:07.867826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:54.783 [2024-07-23 01:51:07.867842] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:54.783 [2024-07-23 01:51:07.868020] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:54.783 [2024-07-23 01:51:07.868211] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:54.783 [2024-07-23 01:51:07.868232] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:54.783 [2024-07-23 01:51:07.868246] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:54.783 [2024-07-23 01:51:07.870401] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.042 [2024-07-23 01:51:07.879742] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:55.042 [2024-07-23 01:51:07.880047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.042 [2024-07-23 01:51:07.880209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.042 [2024-07-23 01:51:07.880235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:55.042 [2024-07-23 01:51:07.880251] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:55.042 [2024-07-23 01:51:07.880403] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:55.042 [2024-07-23 01:51:07.880550] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:55.043 [2024-07-23 01:51:07.880571] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:55.043 [2024-07-23 01:51:07.880585] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:55.043 [2024-07-23 01:51:07.882690] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.043 [2024-07-23 01:51:07.892185] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:55.043 [2024-07-23 01:51:07.892540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.043 [2024-07-23 01:51:07.892741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.043 [2024-07-23 01:51:07.892768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:55.043 [2024-07-23 01:51:07.892785] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:55.043 [2024-07-23 01:51:07.892886] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:55.043 [2024-07-23 01:51:07.893089] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:55.043 [2024-07-23 01:51:07.893109] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:55.043 [2024-07-23 01:51:07.893123] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:55.043 [2024-07-23 01:51:07.895001] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.043 [2024-07-23 01:51:07.904481] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:55.043 [2024-07-23 01:51:07.904827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.043 [2024-07-23 01:51:07.905033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.043 [2024-07-23 01:51:07.905059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:55.043 [2024-07-23 01:51:07.905075] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:55.043 [2024-07-23 01:51:07.905175] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:55.043 [2024-07-23 01:51:07.905385] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:55.043 [2024-07-23 01:51:07.905406] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:55.043 [2024-07-23 01:51:07.905419] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:55.043 [2024-07-23 01:51:07.907504] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.043 [2024-07-23 01:51:07.917041] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:55.043 [2024-07-23 01:51:07.917366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.043 [2024-07-23 01:51:07.917532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.043 [2024-07-23 01:51:07.917559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:55.043 [2024-07-23 01:51:07.917575] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:55.043 [2024-07-23 01:51:07.917736] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:55.043 [2024-07-23 01:51:07.917857] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:55.043 [2024-07-23 01:51:07.917879] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:55.043 [2024-07-23 01:51:07.917893] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:55.043 [2024-07-23 01:51:07.919950] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.043 [2024-07-23 01:51:07.929312] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:55.043 [2024-07-23 01:51:07.929622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.043 [2024-07-23 01:51:07.929792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.043 [2024-07-23 01:51:07.929818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:55.043 [2024-07-23 01:51:07.929834] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:55.043 [2024-07-23 01:51:07.929998] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:55.043 [2024-07-23 01:51:07.930145] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:55.043 [2024-07-23 01:51:07.930166] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:55.043 [2024-07-23 01:51:07.930180] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:55.043 [2024-07-23 01:51:07.932232] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.043 [2024-07-23 01:51:07.941724] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:55.043 [2024-07-23 01:51:07.942106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.043 [2024-07-23 01:51:07.942279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.043 [2024-07-23 01:51:07.942306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:55.043 [2024-07-23 01:51:07.942322] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:55.043 [2024-07-23 01:51:07.942470] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:55.043 [2024-07-23 01:51:07.942656] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:55.043 [2024-07-23 01:51:07.942678] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:55.043 [2024-07-23 01:51:07.942692] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:55.043 [2024-07-23 01:51:07.944772] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.043 [2024-07-23 01:51:07.953888] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:55.043 [2024-07-23 01:51:07.954251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.043 [2024-07-23 01:51:07.954474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.043 [2024-07-23 01:51:07.954502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:55.043 [2024-07-23 01:51:07.954519] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:55.043 [2024-07-23 01:51:07.954628] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:55.043 [2024-07-23 01:51:07.954843] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:55.043 [2024-07-23 01:51:07.954865] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:55.043 [2024-07-23 01:51:07.954880] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:55.043 [2024-07-23 01:51:07.957076] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.043 [2024-07-23 01:51:07.966089] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:55.043 [2024-07-23 01:51:07.966502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.043 [2024-07-23 01:51:07.966671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.043 [2024-07-23 01:51:07.966698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:55.043 [2024-07-23 01:51:07.966715] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:55.043 [2024-07-23 01:51:07.966898] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:55.043 [2024-07-23 01:51:07.967030] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:55.043 [2024-07-23 01:51:07.967051] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:55.043 [2024-07-23 01:51:07.967064] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:55.043 [2024-07-23 01:51:07.969210] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.043 [2024-07-23 01:51:07.978421] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:55.043 [2024-07-23 01:51:07.978763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.043 [2024-07-23 01:51:07.978937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.043 [2024-07-23 01:51:07.978963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:55.043 [2024-07-23 01:51:07.978979] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:55.043 [2024-07-23 01:51:07.979160] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:55.043 [2024-07-23 01:51:07.979352] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:55.043 [2024-07-23 01:51:07.979373] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:55.043 [2024-07-23 01:51:07.979387] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:55.043 [2024-07-23 01:51:07.981281] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.043 [2024-07-23 01:51:07.990568] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:55.043 [2024-07-23 01:51:07.990895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.043 [2024-07-23 01:51:07.991076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.043 [2024-07-23 01:51:07.991102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:55.043 [2024-07-23 01:51:07.991118] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:55.044 [2024-07-23 01:51:07.991266] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:55.044 [2024-07-23 01:51:07.991460] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:55.044 [2024-07-23 01:51:07.991481] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:55.044 [2024-07-23 01:51:07.991503] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:55.044 [2024-07-23 01:51:07.993635] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.044 [2024-07-23 01:51:08.002721] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:55.044 [2024-07-23 01:51:08.003057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.044 [2024-07-23 01:51:08.003249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.044 [2024-07-23 01:51:08.003275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:55.044 [2024-07-23 01:51:08.003292] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:55.044 [2024-07-23 01:51:08.003466] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:55.044 [2024-07-23 01:51:08.003612] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:55.044 [2024-07-23 01:51:08.003655] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:55.044 [2024-07-23 01:51:08.003670] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:55.044 [2024-07-23 01:51:08.006003] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.044 [2024-07-23 01:51:08.014989] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:55.044 [2024-07-23 01:51:08.015268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.044 [2024-07-23 01:51:08.015436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.044 [2024-07-23 01:51:08.015462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:55.044 [2024-07-23 01:51:08.015479] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:55.044 [2024-07-23 01:51:08.015669] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:55.044 [2024-07-23 01:51:08.015819] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:55.044 [2024-07-23 01:51:08.015840] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:55.044 [2024-07-23 01:51:08.015855] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:55.044 [2024-07-23 01:51:08.017827] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.044 [2024-07-23 01:51:08.027403] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:55.044 [2024-07-23 01:51:08.027735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.044 [2024-07-23 01:51:08.027868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.044 [2024-07-23 01:51:08.027896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:55.044 [2024-07-23 01:51:08.027923] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:55.044 [2024-07-23 01:51:08.028039] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:55.044 [2024-07-23 01:51:08.028215] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:55.044 [2024-07-23 01:51:08.028236] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:55.044 [2024-07-23 01:51:08.028255] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:55.044 [2024-07-23 01:51:08.030420] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.044 [2024-07-23 01:51:08.039741] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:55.044 [2024-07-23 01:51:08.040088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.044 [2024-07-23 01:51:08.040254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.044 [2024-07-23 01:51:08.040281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:55.044 [2024-07-23 01:51:08.040297] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:55.044 [2024-07-23 01:51:08.040462] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:55.044 [2024-07-23 01:51:08.040608] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:55.044 [2024-07-23 01:51:08.040650] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:55.044 [2024-07-23 01:51:08.040665] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:55.044 [2024-07-23 01:51:08.042710] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.044 [2024-07-23 01:51:08.051987] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:55.044 [2024-07-23 01:51:08.052321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.044 [2024-07-23 01:51:08.052484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.044 [2024-07-23 01:51:08.052511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:55.044 [2024-07-23 01:51:08.052527] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:55.044 [2024-07-23 01:51:08.052653] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:55.044 [2024-07-23 01:51:08.052836] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:55.044 [2024-07-23 01:51:08.052857] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:55.044 [2024-07-23 01:51:08.052871] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:55.044 [2024-07-23 01:51:08.055147] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.044 [2024-07-23 01:51:08.064204] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:55.044 [2024-07-23 01:51:08.064564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.044 [2024-07-23 01:51:08.064732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.044 [2024-07-23 01:51:08.064758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420
00:29:55.044 [2024-07-23 01:51:08.064773] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set
00:29:55.044 [2024-07-23 01:51:08.064999] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor
00:29:55.044 [2024-07-23 01:51:08.065098] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:55.044 [2024-07-23 01:51:08.065118] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:55.044 [2024-07-23 01:51:08.065132] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:55.044 [2024-07-23 01:51:08.067280] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:55.044 [2024-07-23 01:51:08.076476] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.044 [2024-07-23 01:51:08.076826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.044 [2024-07-23 01:51:08.076969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.044 [2024-07-23 01:51:08.076996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:55.044 [2024-07-23 01:51:08.077012] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:55.044 [2024-07-23 01:51:08.077189] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:55.044 [2024-07-23 01:51:08.077319] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.044 [2024-07-23 01:51:08.077340] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.044 [2024-07-23 01:51:08.077353] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.044 [2024-07-23 01:51:08.079533] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.044 [2024-07-23 01:51:08.088845] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.044 [2024-07-23 01:51:08.089180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.044 [2024-07-23 01:51:08.089389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.044 [2024-07-23 01:51:08.089421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:55.044 [2024-07-23 01:51:08.089437] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:55.044 [2024-07-23 01:51:08.089646] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:55.044 [2024-07-23 01:51:08.089798] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.044 [2024-07-23 01:51:08.089819] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.044 [2024-07-23 01:51:08.089833] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.044 [2024-07-23 01:51:08.092008] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.044 [2024-07-23 01:51:08.101229] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.044 [2024-07-23 01:51:08.101581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.044 [2024-07-23 01:51:08.101729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.044 [2024-07-23 01:51:08.101757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:55.045 [2024-07-23 01:51:08.101774] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:55.045 [2024-07-23 01:51:08.101938] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:55.045 [2024-07-23 01:51:08.102082] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.045 [2024-07-23 01:51:08.102103] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.045 [2024-07-23 01:51:08.102117] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.045 [2024-07-23 01:51:08.104189] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.045 [2024-07-23 01:51:08.113472] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.045 [2024-07-23 01:51:08.113825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.045 [2024-07-23 01:51:08.114025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.045 [2024-07-23 01:51:08.114051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:55.045 [2024-07-23 01:51:08.114068] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:55.045 [2024-07-23 01:51:08.114279] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:55.045 [2024-07-23 01:51:08.114432] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.045 [2024-07-23 01:51:08.114453] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.045 [2024-07-23 01:51:08.114467] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.045 [2024-07-23 01:51:08.116584] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.045 [2024-07-23 01:51:08.125706] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.045 [2024-07-23 01:51:08.126021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.045 [2024-07-23 01:51:08.126188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.045 [2024-07-23 01:51:08.126215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:55.045 [2024-07-23 01:51:08.126231] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:55.045 [2024-07-23 01:51:08.126348] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:55.045 [2024-07-23 01:51:08.126544] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.045 [2024-07-23 01:51:08.126565] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.045 [2024-07-23 01:51:08.126579] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.045 [2024-07-23 01:51:08.128758] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.045 [2024-07-23 01:51:08.138097] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.045 [2024-07-23 01:51:08.138412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.045 [2024-07-23 01:51:08.138588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.045 [2024-07-23 01:51:08.138621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:55.045 [2024-07-23 01:51:08.138639] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:55.045 [2024-07-23 01:51:08.138821] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:55.045 [2024-07-23 01:51:08.138965] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.045 [2024-07-23 01:51:08.138987] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.045 [2024-07-23 01:51:08.139001] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.303 [2024-07-23 01:51:08.140990] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.303 01:51:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:55.303 01:51:08 -- common/autotest_common.sh@852 -- # return 0 00:29:55.303 [2024-07-23 01:51:08.150400] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.303 01:51:08 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:29:55.303 01:51:08 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:55.303 01:51:08 -- common/autotest_common.sh@10 -- # set +x 00:29:55.303 [2024-07-23 01:51:08.150739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.303 [2024-07-23 01:51:08.150873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.303 [2024-07-23 01:51:08.150899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:55.303 [2024-07-23 01:51:08.150916] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:55.303 [2024-07-23 01:51:08.151108] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:55.303 [2024-07-23 01:51:08.151307] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.303 [2024-07-23 01:51:08.151330] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.303 [2024-07-23 01:51:08.151344] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.303 [2024-07-23 01:51:08.153444] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.303 [2024-07-23 01:51:08.162657] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.303 [2024-07-23 01:51:08.163023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.303 [2024-07-23 01:51:08.163184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.303 [2024-07-23 01:51:08.163210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:55.303 [2024-07-23 01:51:08.163226] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:55.303 [2024-07-23 01:51:08.163391] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:55.303 [2024-07-23 01:51:08.163552] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.303 [2024-07-23 01:51:08.163573] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.303 [2024-07-23 01:51:08.163587] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.303 [2024-07-23 01:51:08.165652] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.303 01:51:08 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:55.303 01:51:08 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:55.303 01:51:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:55.303 01:51:08 -- common/autotest_common.sh@10 -- # set +x 00:29:55.303 [2024-07-23 01:51:08.173140] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:55.303 [2024-07-23 01:51:08.174939] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.303 [2024-07-23 01:51:08.175350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.303 [2024-07-23 01:51:08.175525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.303 [2024-07-23 01:51:08.175553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:55.303 [2024-07-23 01:51:08.175569] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:55.304 [2024-07-23 01:51:08.175711] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:55.304 [2024-07-23 01:51:08.175912] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.304 [2024-07-23 01:51:08.175943] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.304 [2024-07-23 01:51:08.175962] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.304 [2024-07-23 01:51:08.178182] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.304 01:51:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:55.304 01:51:08 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:55.304 01:51:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:55.304 01:51:08 -- common/autotest_common.sh@10 -- # set +x 00:29:55.304 [2024-07-23 01:51:08.187426] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.304 [2024-07-23 01:51:08.187733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.304 [2024-07-23 01:51:08.187890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.304 [2024-07-23 01:51:08.187916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:55.304 [2024-07-23 01:51:08.187932] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:55.304 [2024-07-23 01:51:08.188081] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:55.304 [2024-07-23 01:51:08.188252] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.304 [2024-07-23 01:51:08.188273] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.304 [2024-07-23 01:51:08.188287] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.304 [2024-07-23 01:51:08.190185] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.304 [2024-07-23 01:51:08.199734] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.304 [2024-07-23 01:51:08.200304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.304 [2024-07-23 01:51:08.200478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.304 [2024-07-23 01:51:08.200504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:55.304 [2024-07-23 01:51:08.200523] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:55.304 [2024-07-23 01:51:08.200673] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:55.304 [2024-07-23 01:51:08.200845] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.304 [2024-07-23 01:51:08.200868] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.304 [2024-07-23 01:51:08.200884] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.304 [2024-07-23 01:51:08.202967] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.304 [2024-07-23 01:51:08.212225] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.304 Malloc0 00:29:55.304 [2024-07-23 01:51:08.212580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.304 [2024-07-23 01:51:08.212786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.304 [2024-07-23 01:51:08.212813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:55.304 [2024-07-23 01:51:08.212832] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:55.304 01:51:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:55.304 [2024-07-23 01:51:08.213033] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:55.304 01:51:08 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:55.304 [2024-07-23 01:51:08.213192] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.304 [2024-07-23 01:51:08.213215] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.304 [2024-07-23 01:51:08.213232] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.304 01:51:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:55.304 01:51:08 -- common/autotest_common.sh@10 -- # set +x 00:29:55.304 [2024-07-23 01:51:08.215545] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.304 01:51:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:55.304 01:51:08 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:55.304 01:51:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:55.304 01:51:08 -- common/autotest_common.sh@10 -- # set +x 00:29:55.304 [2024-07-23 01:51:08.224642] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.304 [2024-07-23 01:51:08.224944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.304 [2024-07-23 01:51:08.225138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.304 [2024-07-23 01:51:08.225166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1535030 with addr=10.0.0.2, port=4420 00:29:55.304 [2024-07-23 01:51:08.225182] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1535030 is same with the state(5) to be set 00:29:55.304 [2024-07-23 01:51:08.225347] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1535030 (9): Bad file descriptor 00:29:55.304 [2024-07-23 01:51:08.225524] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.304 [2024-07-23 01:51:08.225555] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.304 [2024-07-23 01:51:08.225569] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.304 [2024-07-23 01:51:08.227501] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.304 01:51:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:55.304 01:51:08 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:55.304 01:51:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:55.304 01:51:08 -- common/autotest_common.sh@10 -- # set +x 00:29:55.304 [2024-07-23 01:51:08.232387] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:55.304 01:51:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:55.304 01:51:08 -- host/bdevperf.sh@38 -- # wait 3903342 00:29:55.304 [2024-07-23 01:51:08.236969] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.304 [2024-07-23 01:51:08.268435] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:30:05.268 00:30:05.268 Latency(us) 00:30:05.268 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:05.268 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:05.268 Verification LBA range: start 0x0 length 0x4000 00:30:05.268 Nvme1n1 : 15.01 9350.84 36.53 15581.14 0.00 5119.47 855.61 20777.34 00:30:05.268 =================================================================================================================== 00:30:05.268 Total : 9350.84 36.53 15581.14 0.00 5119.47 855.61 20777.34 00:30:05.268 01:51:16 -- host/bdevperf.sh@39 -- # sync 00:30:05.268 01:51:16 -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:05.268 01:51:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:05.268 01:51:16 -- common/autotest_common.sh@10 -- # set +x 00:30:05.268 01:51:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:05.268 01:51:16 -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:30:05.268 01:51:16 -- host/bdevperf.sh@44 -- # nvmftestfini 00:30:05.268 01:51:16 -- 
nvmf/common.sh@476 -- # nvmfcleanup 00:30:05.268 01:51:16 -- nvmf/common.sh@116 -- # sync 00:30:05.268 01:51:16 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:30:05.268 01:51:16 -- nvmf/common.sh@119 -- # set +e 00:30:05.268 01:51:16 -- nvmf/common.sh@120 -- # for i in {1..20} 00:30:05.268 01:51:16 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:30:05.268 rmmod nvme_tcp 00:30:05.268 rmmod nvme_fabrics 00:30:05.268 rmmod nvme_keyring 00:30:05.268 01:51:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:30:05.268 01:51:16 -- nvmf/common.sh@123 -- # set -e 00:30:05.268 01:51:16 -- nvmf/common.sh@124 -- # return 0 00:30:05.268 01:51:16 -- nvmf/common.sh@477 -- # '[' -n 3904184 ']' 00:30:05.268 01:51:16 -- nvmf/common.sh@478 -- # killprocess 3904184 00:30:05.268 01:51:16 -- common/autotest_common.sh@926 -- # '[' -z 3904184 ']' 00:30:05.268 01:51:16 -- common/autotest_common.sh@930 -- # kill -0 3904184 00:30:05.268 01:51:16 -- common/autotest_common.sh@931 -- # uname 00:30:05.268 01:51:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:05.268 01:51:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3904184 00:30:05.268 01:51:17 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:30:05.268 01:51:17 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:30:05.268 01:51:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3904184' 00:30:05.268 killing process with pid 3904184 00:30:05.268 01:51:17 -- common/autotest_common.sh@945 -- # kill 3904184 00:30:05.268 01:51:17 -- common/autotest_common.sh@950 -- # wait 3904184 00:30:05.268 01:51:17 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:30:05.268 01:51:17 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:30:05.268 01:51:17 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:30:05.268 01:51:17 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:05.268 01:51:17 -- nvmf/common.sh@277 -- # remove_spdk_ns 
00:30:05.268 01:51:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:05.268 01:51:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:05.268 01:51:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:06.205 01:51:19 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:30:06.205 00:30:06.205 real 0m23.241s 00:30:06.205 user 1m2.339s 00:30:06.205 sys 0m4.607s 00:30:06.205 01:51:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:06.205 01:51:19 -- common/autotest_common.sh@10 -- # set +x 00:30:06.205 ************************************ 00:30:06.205 END TEST nvmf_bdevperf 00:30:06.205 ************************************ 00:30:06.205 01:51:19 -- nvmf/nvmf.sh@124 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:30:06.205 01:51:19 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:30:06.205 01:51:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:06.205 01:51:19 -- common/autotest_common.sh@10 -- # set +x 00:30:06.205 ************************************ 00:30:06.205 START TEST nvmf_target_disconnect 00:30:06.205 ************************************ 00:30:06.205 01:51:19 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:30:06.464 * Looking for test storage... 
00:30:06.464 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:06.464 01:51:19 -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:06.464 01:51:19 -- nvmf/common.sh@7 -- # uname -s 00:30:06.464 01:51:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:06.464 01:51:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:06.464 01:51:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:06.464 01:51:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:06.464 01:51:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:06.464 01:51:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:06.464 01:51:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:06.464 01:51:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:06.464 01:51:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:06.464 01:51:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:06.464 01:51:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:06.464 01:51:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:06.464 01:51:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:06.464 01:51:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:06.464 01:51:19 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:06.464 01:51:19 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:06.464 01:51:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:06.464 01:51:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:06.464 01:51:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:06.464 01:51:19 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.464 01:51:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.464 01:51:19 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.464 01:51:19 -- paths/export.sh@5 -- # export PATH 00:30:06.464 01:51:19 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.464 01:51:19 -- nvmf/common.sh@46 -- # : 0 00:30:06.464 01:51:19 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:30:06.464 01:51:19 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:30:06.464 01:51:19 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:30:06.464 01:51:19 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:06.465 01:51:19 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:06.465 01:51:19 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:30:06.465 01:51:19 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:30:06.465 01:51:19 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:30:06.465 01:51:19 -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:30:06.465 01:51:19 -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:30:06.465 01:51:19 -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:30:06.465 01:51:19 -- host/target_disconnect.sh@77 -- # nvmftestinit 00:30:06.465 01:51:19 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:30:06.465 01:51:19 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:06.465 01:51:19 -- nvmf/common.sh@436 -- # prepare_net_devs 00:30:06.465 01:51:19 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:30:06.465 01:51:19 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:30:06.465 01:51:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:06.465 01:51:19 -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:06.465 01:51:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:06.465 01:51:19 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:30:06.465 01:51:19 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:30:06.465 01:51:19 -- nvmf/common.sh@284 -- # xtrace_disable 00:30:06.465 01:51:19 -- common/autotest_common.sh@10 -- # set +x 00:30:08.366 01:51:21 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:30:08.366 01:51:21 -- nvmf/common.sh@290 -- # pci_devs=() 00:30:08.366 01:51:21 -- nvmf/common.sh@290 -- # local -a pci_devs 00:30:08.366 01:51:21 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:30:08.366 01:51:21 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:30:08.366 01:51:21 -- nvmf/common.sh@292 -- # pci_drivers=() 00:30:08.366 01:51:21 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:30:08.366 01:51:21 -- nvmf/common.sh@294 -- # net_devs=() 00:30:08.366 01:51:21 -- nvmf/common.sh@294 -- # local -ga net_devs 00:30:08.366 01:51:21 -- nvmf/common.sh@295 -- # e810=() 00:30:08.366 01:51:21 -- nvmf/common.sh@295 -- # local -ga e810 00:30:08.366 01:51:21 -- nvmf/common.sh@296 -- # x722=() 00:30:08.366 01:51:21 -- nvmf/common.sh@296 -- # local -ga x722 00:30:08.366 01:51:21 -- nvmf/common.sh@297 -- # mlx=() 00:30:08.366 01:51:21 -- nvmf/common.sh@297 -- # local -ga mlx 00:30:08.366 01:51:21 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:08.366 01:51:21 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:08.366 01:51:21 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:08.366 01:51:21 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:08.366 01:51:21 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:08.366 01:51:21 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:08.366 01:51:21 -- nvmf/common.sh@311 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:08.366 01:51:21 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:08.366 01:51:21 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:08.366 01:51:21 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:08.366 01:51:21 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:08.366 01:51:21 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:30:08.366 01:51:21 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:30:08.366 01:51:21 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:30:08.366 01:51:21 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:30:08.366 01:51:21 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:30:08.366 01:51:21 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:30:08.366 01:51:21 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:08.366 01:51:21 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:08.366 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:08.366 01:51:21 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:08.366 01:51:21 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:08.366 01:51:21 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:08.366 01:51:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:08.366 01:51:21 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:08.366 01:51:21 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:08.366 01:51:21 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:08.366 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:08.366 01:51:21 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:08.366 01:51:21 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:08.366 01:51:21 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:08.366 01:51:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:08.366 01:51:21 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 
00:30:08.366 01:51:21 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:30:08.366 01:51:21 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:30:08.366 01:51:21 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:30:08.366 01:51:21 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:08.366 01:51:21 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:08.366 01:51:21 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:08.366 01:51:21 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:08.366 01:51:21 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:08.366 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:08.366 01:51:21 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:08.366 01:51:21 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:08.366 01:51:21 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:08.366 01:51:21 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:08.366 01:51:21 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:08.366 01:51:21 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:08.366 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:08.366 01:51:21 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:08.366 01:51:21 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:30:08.366 01:51:21 -- nvmf/common.sh@402 -- # is_hw=yes 00:30:08.366 01:51:21 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:30:08.366 01:51:21 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:30:08.366 01:51:21 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:30:08.366 01:51:21 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:08.366 01:51:21 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:08.366 01:51:21 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:08.366 01:51:21 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:30:08.366 01:51:21 -- nvmf/common.sh@235 -- 
# NVMF_TARGET_INTERFACE=cvl_0_0 00:30:08.366 01:51:21 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:08.366 01:51:21 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:30:08.366 01:51:21 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:08.366 01:51:21 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:08.366 01:51:21 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:30:08.366 01:51:21 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:30:08.366 01:51:21 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:30:08.366 01:51:21 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:08.366 01:51:21 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:08.366 01:51:21 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:08.366 01:51:21 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:30:08.366 01:51:21 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:08.366 01:51:21 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:08.366 01:51:21 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:08.366 01:51:21 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:30:08.366 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:08.366 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.249 ms 00:30:08.366 00:30:08.366 --- 10.0.0.2 ping statistics --- 00:30:08.366 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:08.366 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:30:08.366 01:51:21 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:08.366 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:08.366 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:30:08.366 00:30:08.366 --- 10.0.0.1 ping statistics --- 00:30:08.366 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:08.366 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:30:08.366 01:51:21 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:08.366 01:51:21 -- nvmf/common.sh@410 -- # return 0 00:30:08.366 01:51:21 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:30:08.366 01:51:21 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:08.366 01:51:21 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:30:08.366 01:51:21 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:30:08.366 01:51:21 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:08.366 01:51:21 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:30:08.366 01:51:21 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:30:08.366 01:51:21 -- host/target_disconnect.sh@78 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:30:08.366 01:51:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:08.366 01:51:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:08.366 01:51:21 -- common/autotest_common.sh@10 -- # set +x 00:30:08.366 ************************************ 00:30:08.366 START TEST nvmf_target_disconnect_tc1 00:30:08.366 ************************************ 00:30:08.366 01:51:21 -- common/autotest_common.sh@1104 -- # nvmf_target_disconnect_tc1 00:30:08.366 01:51:21 -- host/target_disconnect.sh@32 -- # set +e 00:30:08.366 01:51:21 -- host/target_disconnect.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:08.366 EAL: No free 2048 kB hugepages reported on node 1 00:30:08.366 [2024-07-23 01:51:21.390458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.366 
[2024-07-23 01:51:21.390743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.366 [2024-07-23 01:51:21.390774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x943510 with addr=10.0.0.2, port=4420 00:30:08.366 [2024-07-23 01:51:21.390803] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:08.366 [2024-07-23 01:51:21.390821] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:08.366 [2024-07-23 01:51:21.390834] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:30:08.366 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:30:08.366 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:30:08.366 Initializing NVMe Controllers 00:30:08.367 01:51:21 -- host/target_disconnect.sh@33 -- # trap - ERR 00:30:08.367 01:51:21 -- host/target_disconnect.sh@33 -- # print_backtrace 00:30:08.367 01:51:21 -- common/autotest_common.sh@1132 -- # [[ hxBET =~ e ]] 00:30:08.367 01:51:21 -- common/autotest_common.sh@1132 -- # return 0 00:30:08.367 01:51:21 -- host/target_disconnect.sh@37 -- # '[' 1 '!=' 1 ']' 00:30:08.367 01:51:21 -- host/target_disconnect.sh@41 -- # set -e 00:30:08.367 00:30:08.367 real 0m0.091s 00:30:08.367 user 0m0.037s 00:30:08.367 sys 0m0.054s 00:30:08.367 01:51:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:08.367 01:51:21 -- common/autotest_common.sh@10 -- # set +x 00:30:08.367 ************************************ 00:30:08.367 END TEST nvmf_target_disconnect_tc1 00:30:08.367 ************************************ 00:30:08.367 01:51:21 -- host/target_disconnect.sh@79 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:30:08.367 01:51:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:08.367 01:51:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:08.367 01:51:21 -- common/autotest_common.sh@10 -- # set +x 00:30:08.367 
************************************ 00:30:08.367 START TEST nvmf_target_disconnect_tc2 00:30:08.367 ************************************ 00:30:08.367 01:51:21 -- common/autotest_common.sh@1104 -- # nvmf_target_disconnect_tc2 00:30:08.367 01:51:21 -- host/target_disconnect.sh@45 -- # disconnect_init 10.0.0.2 00:30:08.367 01:51:21 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:30:08.367 01:51:21 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:30:08.367 01:51:21 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:08.367 01:51:21 -- common/autotest_common.sh@10 -- # set +x 00:30:08.367 01:51:21 -- nvmf/common.sh@469 -- # nvmfpid=3907767 00:30:08.367 01:51:21 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:30:08.367 01:51:21 -- nvmf/common.sh@470 -- # waitforlisten 3907767 00:30:08.367 01:51:21 -- common/autotest_common.sh@819 -- # '[' -z 3907767 ']' 00:30:08.367 01:51:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:08.367 01:51:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:08.367 01:51:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:08.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:08.367 01:51:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:08.367 01:51:21 -- common/autotest_common.sh@10 -- # set +x 00:30:08.625 [2024-07-23 01:51:21.470988] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:30:08.625 [2024-07-23 01:51:21.471078] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:08.625 EAL: No free 2048 kB hugepages reported on node 1 00:30:08.625 [2024-07-23 01:51:21.536875] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:08.625 [2024-07-23 01:51:21.626193] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:08.625 [2024-07-23 01:51:21.626354] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:08.625 [2024-07-23 01:51:21.626380] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:08.625 [2024-07-23 01:51:21.626398] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:08.625 [2024-07-23 01:51:21.626498] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:30:08.625 [2024-07-23 01:51:21.626563] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:30:08.625 [2024-07-23 01:51:21.626644] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:30:08.625 [2024-07-23 01:51:21.626653] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:30:09.559 01:51:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:09.559 01:51:22 -- common/autotest_common.sh@852 -- # return 0 00:30:09.559 01:51:22 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:30:09.559 01:51:22 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:09.559 01:51:22 -- common/autotest_common.sh@10 -- # set +x 00:30:09.559 01:51:22 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:09.559 01:51:22 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:30:09.559 01:51:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:09.559 01:51:22 -- common/autotest_common.sh@10 -- # set +x 00:30:09.559 Malloc0 00:30:09.559 01:51:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:09.559 01:51:22 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:30:09.559 01:51:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:09.559 01:51:22 -- common/autotest_common.sh@10 -- # set +x 00:30:09.559 [2024-07-23 01:51:22.488044] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:09.559 01:51:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:09.559 01:51:22 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:09.559 01:51:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:09.559 01:51:22 -- common/autotest_common.sh@10 -- # set +x 00:30:09.559 01:51:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:09.559 01:51:22 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:09.559 01:51:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:09.559 01:51:22 -- common/autotest_common.sh@10 -- # set +x 00:30:09.559 01:51:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:09.559 01:51:22 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:09.559 01:51:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:09.559 01:51:22 -- common/autotest_common.sh@10 -- # set +x 00:30:09.559 [2024-07-23 01:51:22.516236] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:09.559 01:51:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:09.559 01:51:22 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:09.559 
01:51:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:09.559 01:51:22 -- common/autotest_common.sh@10 -- # set +x 00:30:09.559 01:51:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:09.559 01:51:22 -- host/target_disconnect.sh@50 -- # reconnectpid=3907926 00:30:09.559 01:51:22 -- host/target_disconnect.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:09.559 01:51:22 -- host/target_disconnect.sh@52 -- # sleep 2 00:30:09.559 EAL: No free 2048 kB hugepages reported on node 1 00:30:11.462 01:51:24 -- host/target_disconnect.sh@53 -- # kill -9 3907767 00:30:11.462 01:51:24 -- host/target_disconnect.sh@55 -- # sleep 2 00:30:11.462 Read completed with error (sct=0, sc=8) 00:30:11.462 starting I/O failed 00:30:11.462 Read completed with error (sct=0, sc=8) 00:30:11.462 starting I/O failed 00:30:11.462 Read completed with error (sct=0, sc=8) 00:30:11.462 starting I/O failed 00:30:11.462 Read completed with error (sct=0, sc=8) 00:30:11.462 starting I/O failed 00:30:11.462 Read completed with error (sct=0, sc=8) 00:30:11.462 starting I/O failed 00:30:11.463 Read completed with error (sct=0, sc=8) 00:30:11.463 starting I/O failed 00:30:11.463 Read completed with error (sct=0, sc=8) 00:30:11.463 starting I/O failed 00:30:11.463 Read completed with error (sct=0, sc=8) 00:30:11.463 starting I/O failed 00:30:11.463 Read completed with error (sct=0, sc=8) 00:30:11.463 starting I/O failed 00:30:11.463 Read completed with error (sct=0, sc=8) 00:30:11.463 starting I/O failed 00:30:11.463 Read completed with error (sct=0, sc=8) 00:30:11.463 starting I/O failed 00:30:11.463 Read completed with error (sct=0, sc=8) 00:30:11.463 starting I/O failed 00:30:11.463 Read completed with error (sct=0, sc=8) 00:30:11.463 starting I/O failed 00:30:11.463 Read completed with error (sct=0, sc=8) 00:30:11.463 starting I/O failed 
00:30:11.463 Read completed with error (sct=0, sc=8) 00:30:11.463 starting I/O failed 00:30:11.463 Read completed with error (sct=0, sc=8) 00:30:11.463 starting I/O failed 00:30:11.463 Write completed with error (sct=0, sc=8) 00:30:11.463 starting I/O failed 00:30:11.463 Write completed with error (sct=0, sc=8) 00:30:11.463 starting I/O failed 00:30:11.463 Read completed with error (sct=0, sc=8) 00:30:11.463 starting I/O failed 00:30:11.463 Write completed with error (sct=0, sc=8) 00:30:11.463 starting I/O failed 00:30:11.463 Read completed with error (sct=0, sc=8) 00:30:11.463 starting I/O failed 00:30:11.463 Read completed with error (sct=0, sc=8) 00:30:11.463 starting I/O failed 00:30:11.463 Write completed with error (sct=0, sc=8) 00:30:11.463 starting I/O failed 00:30:11.463 Read completed with error (sct=0, sc=8) 00:30:11.463 starting I/O failed 00:30:11.463 Write completed with error (sct=0, sc=8) 00:30:11.463 starting I/O failed 00:30:11.463 Read completed with error (sct=0, sc=8) 00:30:11.463 starting I/O failed 00:30:11.463 Read completed with error (sct=0, sc=8) 00:30:11.463 starting I/O failed 00:30:11.463 Write completed with error (sct=0, sc=8) 00:30:11.463 starting I/O failed 00:30:11.463 Read completed with error (sct=0, sc=8) 00:30:11.463 starting I/O failed 00:30:11.463 Write completed with error (sct=0, sc=8) 00:30:11.463 starting I/O failed 00:30:11.463 Read completed with error (sct=0, sc=8) 00:30:11.463 starting I/O failed 00:30:11.463 Write completed with error (sct=0, sc=8) 00:30:11.463 starting I/O failed 00:30:11.463 Read completed with error (sct=0, sc=8) 00:30:11.463 starting I/O failed 00:30:11.463 Read completed with error (sct=0, sc=8) 00:30:11.463 starting I/O failed 00:30:11.463 Read completed with error (sct=0, sc=8) 00:30:11.463 starting I/O failed 00:30:11.463 Read completed with error (sct=0, sc=8) 00:30:11.463 starting I/O failed 00:30:11.463 Read completed with error (sct=0, sc=8) 00:30:11.463 starting I/O failed 00:30:11.463 
Read completed with error (sct=0, sc=8) 00:30:11.463 starting I/O failed 00:30:11.463 [2024-07-23 01:51:24.539726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:11.463 Read completed with error (sct=0, sc=8) 00:30:11.463 starting I/O failed 00:30:11.463 Read completed with error (sct=0, sc=8) 00:30:11.463 starting I/O failed 00:30:11.463 Read completed with error (sct=0, sc=8) 00:30:11.463 starting I/O failed 00:30:11.463 Read completed with error (sct=0, sc=8) 00:30:11.463 starting I/O failed 00:30:11.463 Read completed with error (sct=0, sc=8) 00:30:11.463 starting I/O failed 00:30:11.463 Read completed with error (sct=0, sc=8) 00:30:11.463 starting I/O failed 00:30:11.463 Read completed with error (sct=0, sc=8) 00:30:11.463 starting I/O failed 00:30:11.463 Read completed with error (sct=0, sc=8) 00:30:11.463 starting I/O failed 00:30:11.463 Read completed with error (sct=0, sc=8) 00:30:11.463 starting I/O failed 00:30:11.463 Read completed with error (sct=0, sc=8) 00:30:11.463 starting I/O failed 00:30:11.463 Write completed with error (sct=0, sc=8) 00:30:11.463 starting I/O failed 00:30:11.463 Write completed with error (sct=0, sc=8) 00:30:11.463 starting I/O failed 00:30:11.463 Write completed with error (sct=0, sc=8) 00:30:11.463 starting I/O failed 00:30:11.463 Write completed with error (sct=0, sc=8) 00:30:11.463 starting I/O failed 00:30:11.463 Read completed with error (sct=0, sc=8) 00:30:11.463 starting I/O failed 00:30:11.463 Read completed with error (sct=0, sc=8) 00:30:11.463 starting I/O failed 00:30:11.463 Write completed with error (sct=0, sc=8) 00:30:11.463 starting I/O failed 00:30:11.463 Write completed with error (sct=0, sc=8) 00:30:11.463 starting I/O failed 00:30:11.463 Read completed with error (sct=0, sc=8) 00:30:11.463 starting I/O failed 00:30:11.463 Write completed with error (sct=0, sc=8) 00:30:11.463 starting I/O failed 00:30:11.463 Write completed 
with error (sct=0, sc=8) 00:30:11.463 starting I/O failed 00:30:11.463 Write completed with error (sct=0, sc=8) 00:30:11.463 starting I/O failed 00:30:11.463 Write completed with error (sct=0, sc=8) 00:30:11.463 starting I/O failed 00:30:11.463 Read completed with error (sct=0, sc=8) 00:30:11.463 starting I/O failed 00:30:11.463 Write completed with error (sct=0, sc=8) 00:30:11.463 starting I/O failed 00:30:11.463 Write completed with error (sct=0, sc=8) 00:30:11.463 starting I/O failed 00:30:11.463 [2024-07-23 01:51:24.540024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:11.463 Read completed with error (sct=0, sc=8) 00:30:11.463 starting I/O failed 00:30:11.463 Read completed with error (sct=0, sc=8) 00:30:11.463 starting I/O failed 00:30:11.463 Read completed with error (sct=0, sc=8) 00:30:11.463 starting I/O failed 00:30:11.463 Read completed with error (sct=0, sc=8) 00:30:11.463 starting I/O failed 00:30:11.463 Read completed with error (sct=0, sc=8) 00:30:11.463 starting I/O failed 00:30:11.463 Read completed with error (sct=0, sc=8) 00:30:11.463 starting I/O failed 00:30:11.463 Read completed with error (sct=0, sc=8) 00:30:11.463 starting I/O failed 00:30:11.463 Read completed with error (sct=0, sc=8) 00:30:11.463 starting I/O failed 00:30:11.463 Read completed with error (sct=0, sc=8) 00:30:11.463 starting I/O failed 00:30:11.463 Read completed with error (sct=0, sc=8) 00:30:11.463 starting I/O failed 00:30:11.463 Read completed with error (sct=0, sc=8) 00:30:11.463 starting I/O failed 00:30:11.463 Read completed with error (sct=0, sc=8) 00:30:11.463 starting I/O failed 00:30:11.463 Read completed with error (sct=0, sc=8) 00:30:11.463 starting I/O failed 00:30:11.463 Read completed with error (sct=0, sc=8) 00:30:11.463 starting I/O failed 00:30:11.463 Read completed with error (sct=0, sc=8) 00:30:11.463 starting I/O failed 00:30:11.463 Read completed with error (sct=0, 
sc=8) 00:30:11.463 starting I/O failed 00:30:11.463 Read completed with error (sct=0, sc=8) 00:30:11.463 starting I/O failed 00:30:11.463 Read completed with error (sct=0, sc=8) 00:30:11.463 starting I/O failed 00:30:11.463 Read completed with error (sct=0, sc=8) 00:30:11.463 starting I/O failed 00:30:11.463 Write completed with error (sct=0, sc=8) 00:30:11.463 starting I/O failed 00:30:11.463 Write completed with error (sct=0, sc=8) 00:30:11.463 starting I/O failed 00:30:11.463 Read completed with error (sct=0, sc=8) 00:30:11.463 starting I/O failed 00:30:11.463 Read completed with error (sct=0, sc=8) 00:30:11.463 starting I/O failed 00:30:11.463 Write completed with error (sct=0, sc=8) 00:30:11.463 starting I/O failed 00:30:11.463 Read completed with error (sct=0, sc=8) 00:30:11.463 starting I/O failed 00:30:11.463 Read completed with error (sct=0, sc=8) 00:30:11.463 starting I/O failed 00:30:11.463 Write completed with error (sct=0, sc=8) 00:30:11.463 starting I/O failed 00:30:11.463 Read completed with error (sct=0, sc=8) 00:30:11.463 starting I/O failed 00:30:11.463 Write completed with error (sct=0, sc=8) 00:30:11.463 starting I/O failed 00:30:11.463 Read completed with error (sct=0, sc=8) 00:30:11.463 starting I/O failed 00:30:11.463 Write completed with error (sct=0, sc=8) 00:30:11.463 starting I/O failed 00:30:11.463 Read completed with error (sct=0, sc=8) 00:30:11.463 starting I/O failed 00:30:11.463 [2024-07-23 01:51:24.540346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.463 Read completed with error (sct=0, sc=8) 00:30:11.463 starting I/O failed 00:30:11.463 Read completed with error (sct=0, sc=8) 00:30:11.463 starting I/O failed 00:30:11.463 Read completed with error (sct=0, sc=8) 00:30:11.463 starting I/O failed 00:30:11.463 Read completed with error (sct=0, sc=8) 00:30:11.463 starting I/O failed 00:30:11.463 Read completed with error (sct=0, sc=8) 
00:30:11.463 starting I/O failed 00:30:11.463 Read completed with error (sct=0, sc=8) 00:30:11.463 starting I/O failed 00:30:11.463 Read completed with error (sct=0, sc=8) 00:30:11.463 starting I/O failed 00:30:11.463 Read completed with error (sct=0, sc=8) 00:30:11.463 starting I/O failed 00:30:11.463 Read completed with error (sct=0, sc=8) 00:30:11.463 starting I/O failed 00:30:11.463 Read completed with error (sct=0, sc=8) 00:30:11.463 starting I/O failed 00:30:11.463 Read completed with error (sct=0, sc=8) 00:30:11.463 starting I/O failed 00:30:11.463 Read completed with error (sct=0, sc=8) 00:30:11.463 starting I/O failed 00:30:11.463 Read completed with error (sct=0, sc=8) 00:30:11.463 starting I/O failed 00:30:11.463 Read completed with error (sct=0, sc=8) 00:30:11.463 starting I/O failed 00:30:11.463 Write completed with error (sct=0, sc=8) 00:30:11.463 starting I/O failed 00:30:11.463 Write completed with error (sct=0, sc=8) 00:30:11.464 starting I/O failed 00:30:11.464 Write completed with error (sct=0, sc=8) 00:30:11.464 starting I/O failed 00:30:11.464 Read completed with error (sct=0, sc=8) 00:30:11.464 starting I/O failed 00:30:11.464 Write completed with error (sct=0, sc=8) 00:30:11.464 starting I/O failed 00:30:11.464 Read completed with error (sct=0, sc=8) 00:30:11.464 starting I/O failed 00:30:11.464 Write completed with error (sct=0, sc=8) 00:30:11.464 starting I/O failed 00:30:11.464 Read completed with error (sct=0, sc=8) 00:30:11.464 starting I/O failed 00:30:11.464 Write completed with error (sct=0, sc=8) 00:30:11.464 starting I/O failed 00:30:11.464 Read completed with error (sct=0, sc=8) 00:30:11.464 starting I/O failed 00:30:11.464 Read completed with error (sct=0, sc=8) 00:30:11.464 starting I/O failed 00:30:11.464 Write completed with error (sct=0, sc=8) 00:30:11.464 starting I/O failed 00:30:11.464 Write completed with error (sct=0, sc=8) 00:30:11.464 starting I/O failed 00:30:11.464 Read completed with error (sct=0, sc=8) 00:30:11.464 
starting I/O failed 00:30:11.464 Read completed with error (sct=0, sc=8) 00:30:11.464 starting I/O failed 00:30:11.464 Write completed with error (sct=0, sc=8) 00:30:11.464 starting I/O failed 00:30:11.464 Read completed with error (sct=0, sc=8) 00:30:11.464 starting I/O failed 00:30:11.464 Read completed with error (sct=0, sc=8) 00:30:11.464 starting I/O failed 00:30:11.464 [2024-07-23 01:51:24.540639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:11.464 [2024-07-23 01:51:24.540810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.464 [2024-07-23 01:51:24.541000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.464 [2024-07-23 01:51:24.541027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:11.464 qpair failed and we were unable to recover it. 00:30:11.464 [2024-07-23 01:51:24.541176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.464 [2024-07-23 01:51:24.541324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.464 [2024-07-23 01:51:24.541349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:11.464 qpair failed and we were unable to recover it. 00:30:11.464 [2024-07-23 01:51:24.541528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.464 [2024-07-23 01:51:24.541720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.464 [2024-07-23 01:51:24.541747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:11.464 qpair failed and we were unable to recover it. 
00:30:11.464 [2024-07-23 01:51:24.541919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.464 [2024-07-23 01:51:24.542064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.464 [2024-07-23 01:51:24.542089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:11.464 qpair failed and we were unable to recover it. 00:30:11.464 [2024-07-23 01:51:24.542245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.464 [2024-07-23 01:51:24.542378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.464 [2024-07-23 01:51:24.542402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:11.464 qpair failed and we were unable to recover it. 00:30:11.464 [2024-07-23 01:51:24.542573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.464 [2024-07-23 01:51:24.542751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.464 [2024-07-23 01:51:24.542776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:11.464 qpair failed and we were unable to recover it. 00:30:11.464 [2024-07-23 01:51:24.542921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.464 [2024-07-23 01:51:24.543098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.464 [2024-07-23 01:51:24.543137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:11.464 qpair failed and we were unable to recover it. 
00:30:11.465 [2024-07-23 01:51:24.554811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.465 [2024-07-23 01:51:24.554959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.465 [2024-07-23 01:51:24.554986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:11.465 qpair failed and we were unable to recover it.
00:30:11.465 [2024-07-23 01:51:24.555190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.465 [2024-07-23 01:51:24.555364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.465 [2024-07-23 01:51:24.555407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:11.465 qpair failed and we were unable to recover it.
00:30:11.465 [2024-07-23 01:51:24.555636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.465 [2024-07-23 01:51:24.555803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.465 [2024-07-23 01:51:24.555829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:11.465 qpair failed and we were unable to recover it.
00:30:11.465 [2024-07-23 01:51:24.556015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.465 [2024-07-23 01:51:24.556181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.465 [2024-07-23 01:51:24.556209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:11.465 qpair failed and we were unable to recover it.
00:30:11.465 [2024-07-23 01:51:24.556936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.465 [2024-07-23 01:51:24.557139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.465 [2024-07-23 01:51:24.557167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.465 qpair failed and we were unable to recover it. 00:30:11.465 [2024-07-23 01:51:24.557336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.465 [2024-07-23 01:51:24.557526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.465 [2024-07-23 01:51:24.557550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.465 qpair failed and we were unable to recover it. 00:30:11.465 [2024-07-23 01:51:24.557717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.465 [2024-07-23 01:51:24.557865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.465 [2024-07-23 01:51:24.557891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.465 qpair failed and we were unable to recover it. 00:30:11.465 [2024-07-23 01:51:24.558053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.465 [2024-07-23 01:51:24.558214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.465 [2024-07-23 01:51:24.558239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.465 qpair failed and we were unable to recover it. 
00:30:11.465 [2024-07-23 01:51:24.558374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.465 [2024-07-23 01:51:24.558535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.465 [2024-07-23 01:51:24.558559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.465 qpair failed and we were unable to recover it. 00:30:11.465 [2024-07-23 01:51:24.558700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.465 [2024-07-23 01:51:24.558837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.465 [2024-07-23 01:51:24.558861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.465 qpair failed and we were unable to recover it. 00:30:11.465 [2024-07-23 01:51:24.559048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.465 [2024-07-23 01:51:24.559186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.465 [2024-07-23 01:51:24.559212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.465 qpair failed and we were unable to recover it. 00:30:11.465 [2024-07-23 01:51:24.559381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.465 [2024-07-23 01:51:24.559568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.466 [2024-07-23 01:51:24.559593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.466 qpair failed and we were unable to recover it. 
00:30:11.466 [2024-07-23 01:51:24.559756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.466 [2024-07-23 01:51:24.559888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.466 [2024-07-23 01:51:24.559912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.466 qpair failed and we were unable to recover it. 00:30:11.466 [2024-07-23 01:51:24.560085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.738 [2024-07-23 01:51:24.560230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.738 [2024-07-23 01:51:24.560255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.738 qpair failed and we were unable to recover it. 00:30:11.738 [2024-07-23 01:51:24.560421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.738 [2024-07-23 01:51:24.560588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.738 [2024-07-23 01:51:24.560612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.738 qpair failed and we were unable to recover it. 00:30:11.738 [2024-07-23 01:51:24.560783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.738 [2024-07-23 01:51:24.560915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.738 [2024-07-23 01:51:24.560940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.738 qpair failed and we were unable to recover it. 
00:30:11.738 [2024-07-23 01:51:24.561108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.738 [2024-07-23 01:51:24.561249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.738 [2024-07-23 01:51:24.561274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.738 qpair failed and we were unable to recover it. 00:30:11.738 [2024-07-23 01:51:24.561488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.738 [2024-07-23 01:51:24.561682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.739 [2024-07-23 01:51:24.561708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.739 qpair failed and we were unable to recover it. 00:30:11.739 [2024-07-23 01:51:24.561899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.739 [2024-07-23 01:51:24.562036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.739 [2024-07-23 01:51:24.562060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.739 qpair failed and we were unable to recover it. 00:30:11.739 [2024-07-23 01:51:24.562254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.739 [2024-07-23 01:51:24.562413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.739 [2024-07-23 01:51:24.562437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.739 qpair failed and we were unable to recover it. 
00:30:11.739 [2024-07-23 01:51:24.562598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.739 [2024-07-23 01:51:24.562793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.739 [2024-07-23 01:51:24.562818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.739 qpair failed and we were unable to recover it. 00:30:11.739 [2024-07-23 01:51:24.562974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.739 [2024-07-23 01:51:24.563119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.739 [2024-07-23 01:51:24.563145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.739 qpair failed and we were unable to recover it. 00:30:11.739 [2024-07-23 01:51:24.563387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.739 [2024-07-23 01:51:24.563547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.739 [2024-07-23 01:51:24.563572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.739 qpair failed and we were unable to recover it. 00:30:11.739 [2024-07-23 01:51:24.563713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.739 [2024-07-23 01:51:24.563887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.739 [2024-07-23 01:51:24.563912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.739 qpair failed and we were unable to recover it. 
00:30:11.739 [2024-07-23 01:51:24.564072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.739 [2024-07-23 01:51:24.564342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.739 [2024-07-23 01:51:24.564366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.739 qpair failed and we were unable to recover it. 00:30:11.739 [2024-07-23 01:51:24.564555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.739 [2024-07-23 01:51:24.564739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.739 [2024-07-23 01:51:24.564764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.739 qpair failed and we were unable to recover it. 00:30:11.739 [2024-07-23 01:51:24.564957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.739 [2024-07-23 01:51:24.565119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.739 [2024-07-23 01:51:24.565160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.739 qpair failed and we were unable to recover it. 00:30:11.739 [2024-07-23 01:51:24.565358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.739 [2024-07-23 01:51:24.565524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.739 [2024-07-23 01:51:24.565566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.739 qpair failed and we were unable to recover it. 
00:30:11.739 [2024-07-23 01:51:24.565749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.739 [2024-07-23 01:51:24.565959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.739 [2024-07-23 01:51:24.565983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.739 qpair failed and we were unable to recover it. 00:30:11.739 [2024-07-23 01:51:24.566147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.739 [2024-07-23 01:51:24.566314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.739 [2024-07-23 01:51:24.566338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.739 qpair failed and we were unable to recover it. 00:30:11.739 [2024-07-23 01:51:24.566502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.739 [2024-07-23 01:51:24.566646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.739 [2024-07-23 01:51:24.566672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.739 qpair failed and we were unable to recover it. 00:30:11.739 [2024-07-23 01:51:24.566837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.739 [2024-07-23 01:51:24.566974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.739 [2024-07-23 01:51:24.566998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.739 qpair failed and we were unable to recover it. 
00:30:11.739 [2024-07-23 01:51:24.567191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.739 [2024-07-23 01:51:24.567330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.739 [2024-07-23 01:51:24.567354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.739 qpair failed and we were unable to recover it. 00:30:11.739 [2024-07-23 01:51:24.567517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.739 [2024-07-23 01:51:24.567664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.739 [2024-07-23 01:51:24.567689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.739 qpair failed and we were unable to recover it. 00:30:11.739 [2024-07-23 01:51:24.567850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.739 [2024-07-23 01:51:24.568027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.739 [2024-07-23 01:51:24.568051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.739 qpair failed and we were unable to recover it. 00:30:11.739 [2024-07-23 01:51:24.568220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.739 [2024-07-23 01:51:24.568357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.739 [2024-07-23 01:51:24.568381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.739 qpair failed and we were unable to recover it. 
00:30:11.739 [2024-07-23 01:51:24.568545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.739 [2024-07-23 01:51:24.568735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.739 [2024-07-23 01:51:24.568760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.739 qpair failed and we were unable to recover it. 00:30:11.739 [2024-07-23 01:51:24.568929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.739 [2024-07-23 01:51:24.569116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.739 [2024-07-23 01:51:24.569140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.739 qpair failed and we were unable to recover it. 00:30:11.739 [2024-07-23 01:51:24.569308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.739 [2024-07-23 01:51:24.569496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.739 [2024-07-23 01:51:24.569520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.739 qpair failed and we were unable to recover it. 00:30:11.739 [2024-07-23 01:51:24.569656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.739 [2024-07-23 01:51:24.569795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.739 [2024-07-23 01:51:24.569818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.739 qpair failed and we were unable to recover it. 
00:30:11.739 [2024-07-23 01:51:24.570010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.739 [2024-07-23 01:51:24.570167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.739 [2024-07-23 01:51:24.570191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.739 qpair failed and we were unable to recover it. 00:30:11.739 [2024-07-23 01:51:24.570382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.739 [2024-07-23 01:51:24.570543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.739 [2024-07-23 01:51:24.570567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.739 qpair failed and we were unable to recover it. 00:30:11.739 [2024-07-23 01:51:24.570762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.739 [2024-07-23 01:51:24.570904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.739 [2024-07-23 01:51:24.570944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.739 qpair failed and we were unable to recover it. 00:30:11.739 [2024-07-23 01:51:24.571120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.739 [2024-07-23 01:51:24.571314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.739 [2024-07-23 01:51:24.571356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.739 qpair failed and we were unable to recover it. 
00:30:11.739 [2024-07-23 01:51:24.571566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.739 [2024-07-23 01:51:24.571712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.740 [2024-07-23 01:51:24.571738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:11.740 qpair failed and we were unable to recover it.
00:30:11.740 [2024-07-23 01:51:24.571879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.740 [2024-07-23 01:51:24.572043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.740 [2024-07-23 01:51:24.572067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:11.740 qpair failed and we were unable to recover it.
00:30:11.740 [2024-07-23 01:51:24.572230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.740 [2024-07-23 01:51:24.572391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.740 [2024-07-23 01:51:24.572415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:11.740 qpair failed and we were unable to recover it.
00:30:11.740 [2024-07-23 01:51:24.572577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.740 [2024-07-23 01:51:24.572745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.740 [2024-07-23 01:51:24.572770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:11.740 qpair failed and we were unable to recover it.
00:30:11.740 [2024-07-23 01:51:24.572903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.740 [2024-07-23 01:51:24.573037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.740 [2024-07-23 01:51:24.573063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:11.740 qpair failed and we were unable to recover it.
00:30:11.740 [2024-07-23 01:51:24.573193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.740 [2024-07-23 01:51:24.573351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.740 [2024-07-23 01:51:24.573376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:11.740 qpair failed and we were unable to recover it.
00:30:11.740 [2024-07-23 01:51:24.573538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.740 [2024-07-23 01:51:24.573729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.740 [2024-07-23 01:51:24.573754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:11.740 qpair failed and we were unable to recover it.
00:30:11.740 [2024-07-23 01:51:24.573884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.740 [2024-07-23 01:51:24.574026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.740 [2024-07-23 01:51:24.574051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:11.740 qpair failed and we were unable to recover it.
00:30:11.740 [2024-07-23 01:51:24.574248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.740 [2024-07-23 01:51:24.574416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.740 [2024-07-23 01:51:24.574440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:11.740 qpair failed and we were unable to recover it.
00:30:11.740 [2024-07-23 01:51:24.574610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.740 [2024-07-23 01:51:24.574787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.740 [2024-07-23 01:51:24.574815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:11.740 qpair failed and we were unable to recover it.
00:30:11.740 [2024-07-23 01:51:24.574976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.740 [2024-07-23 01:51:24.575133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.740 [2024-07-23 01:51:24.575157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:11.740 qpair failed and we were unable to recover it.
00:30:11.740 [2024-07-23 01:51:24.575348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.740 [2024-07-23 01:51:24.575484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.740 [2024-07-23 01:51:24.575508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:11.740 qpair failed and we were unable to recover it.
00:30:11.740 [2024-07-23 01:51:24.575671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.740 [2024-07-23 01:51:24.575841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.740 [2024-07-23 01:51:24.575865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:11.740 qpair failed and we were unable to recover it.
00:30:11.740 [2024-07-23 01:51:24.576023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.740 [2024-07-23 01:51:24.576198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.740 [2024-07-23 01:51:24.576226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:11.740 qpair failed and we were unable to recover it.
00:30:11.740 [2024-07-23 01:51:24.576430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.740 [2024-07-23 01:51:24.576600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.740 [2024-07-23 01:51:24.576648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:11.740 qpair failed and we were unable to recover it.
00:30:11.740 [2024-07-23 01:51:24.576812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.740 [2024-07-23 01:51:24.577003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.740 [2024-07-23 01:51:24.577028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:11.740 qpair failed and we were unable to recover it.
00:30:11.740 [2024-07-23 01:51:24.577193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.740 [2024-07-23 01:51:24.577359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.740 [2024-07-23 01:51:24.577383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:11.740 qpair failed and we were unable to recover it.
00:30:11.740 [2024-07-23 01:51:24.577542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.740 [2024-07-23 01:51:24.577672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.740 [2024-07-23 01:51:24.577698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:11.740 qpair failed and we were unable to recover it.
00:30:11.740 [2024-07-23 01:51:24.577869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.740 [2024-07-23 01:51:24.578030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.740 [2024-07-23 01:51:24.578055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:11.740 qpair failed and we were unable to recover it.
00:30:11.740 [2024-07-23 01:51:24.578223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.740 [2024-07-23 01:51:24.578406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.740 [2024-07-23 01:51:24.578435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:11.740 qpair failed and we were unable to recover it.
00:30:11.740 [2024-07-23 01:51:24.578602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.740 [2024-07-23 01:51:24.578784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.740 [2024-07-23 01:51:24.578809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:11.740 qpair failed and we were unable to recover it.
00:30:11.740 [2024-07-23 01:51:24.578975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.740 [2024-07-23 01:51:24.579143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.740 [2024-07-23 01:51:24.579168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:11.740 qpair failed and we were unable to recover it.
00:30:11.740 [2024-07-23 01:51:24.579334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.740 [2024-07-23 01:51:24.579496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.740 [2024-07-23 01:51:24.579522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:11.740 qpair failed and we were unable to recover it.
00:30:11.740 [2024-07-23 01:51:24.579694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.740 [2024-07-23 01:51:24.579857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.740 [2024-07-23 01:51:24.579882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:11.740 qpair failed and we were unable to recover it.
00:30:11.740 [2024-07-23 01:51:24.580019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.740 [2024-07-23 01:51:24.580209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.740 [2024-07-23 01:51:24.580251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:11.740 qpair failed and we were unable to recover it.
00:30:11.740 [2024-07-23 01:51:24.580429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.740 [2024-07-23 01:51:24.580626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.740 [2024-07-23 01:51:24.580651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:11.740 qpair failed and we were unable to recover it.
00:30:11.740 [2024-07-23 01:51:24.580826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.740 [2024-07-23 01:51:24.580986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.740 [2024-07-23 01:51:24.581011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:11.740 qpair failed and we were unable to recover it.
00:30:11.740 [2024-07-23 01:51:24.581157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.740 [2024-07-23 01:51:24.581316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.741 [2024-07-23 01:51:24.581340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:11.741 qpair failed and we were unable to recover it.
00:30:11.741 [2024-07-23 01:51:24.581502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.741 [2024-07-23 01:51:24.581643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.741 [2024-07-23 01:51:24.581668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:11.741 qpair failed and we were unable to recover it.
00:30:11.741 [2024-07-23 01:51:24.581829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.741 [2024-07-23 01:51:24.582031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.741 [2024-07-23 01:51:24.582060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:11.741 qpair failed and we were unable to recover it.
00:30:11.741 [2024-07-23 01:51:24.582219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.741 [2024-07-23 01:51:24.582386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.741 [2024-07-23 01:51:24.582410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:11.741 qpair failed and we were unable to recover it.
00:30:11.741 [2024-07-23 01:51:24.582595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.741 [2024-07-23 01:51:24.582793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.741 [2024-07-23 01:51:24.582818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:11.741 qpair failed and we were unable to recover it.
00:30:11.741 [2024-07-23 01:51:24.583034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.741 [2024-07-23 01:51:24.583226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.741 [2024-07-23 01:51:24.583250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:11.741 qpair failed and we were unable to recover it.
00:30:11.741 [2024-07-23 01:51:24.583413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.741 [2024-07-23 01:51:24.583539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.741 [2024-07-23 01:51:24.583563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:11.741 qpair failed and we were unable to recover it.
00:30:11.741 [2024-07-23 01:51:24.583732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.741 [2024-07-23 01:51:24.583894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.741 [2024-07-23 01:51:24.583919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:11.741 qpair failed and we were unable to recover it.
00:30:11.741 [2024-07-23 01:51:24.584047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.741 [2024-07-23 01:51:24.584206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.741 [2024-07-23 01:51:24.584232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:11.741 qpair failed and we were unable to recover it.
00:30:11.741 [2024-07-23 01:51:24.584389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.741 [2024-07-23 01:51:24.584558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.741 [2024-07-23 01:51:24.584582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:11.741 qpair failed and we were unable to recover it.
00:30:11.741 [2024-07-23 01:51:24.584745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.741 [2024-07-23 01:51:24.584917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.741 [2024-07-23 01:51:24.584941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:11.741 qpair failed and we were unable to recover it.
00:30:11.741 [2024-07-23 01:51:24.585084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.741 [2024-07-23 01:51:24.585222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.741 [2024-07-23 01:51:24.585246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:11.741 qpair failed and we were unable to recover it.
00:30:11.741 [2024-07-23 01:51:24.585389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.741 [2024-07-23 01:51:24.585557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.741 [2024-07-23 01:51:24.585585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:11.741 qpair failed and we were unable to recover it.
00:30:11.741 [2024-07-23 01:51:24.585754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.741 [2024-07-23 01:51:24.585918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.741 [2024-07-23 01:51:24.585942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:11.741 qpair failed and we were unable to recover it.
00:30:11.741 [2024-07-23 01:51:24.586077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.741 [2024-07-23 01:51:24.586208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.741 [2024-07-23 01:51:24.586234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:11.741 qpair failed and we were unable to recover it.
00:30:11.741 [2024-07-23 01:51:24.586426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.741 [2024-07-23 01:51:24.586610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.741 [2024-07-23 01:51:24.586643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:11.741 qpair failed and we were unable to recover it.
00:30:11.741 [2024-07-23 01:51:24.586829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.741 [2024-07-23 01:51:24.587013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.741 [2024-07-23 01:51:24.587053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:11.741 qpair failed and we were unable to recover it.
00:30:11.741 [2024-07-23 01:51:24.587216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.741 [2024-07-23 01:51:24.587375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.741 [2024-07-23 01:51:24.587416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:11.741 qpair failed and we were unable to recover it.
00:30:11.741 [2024-07-23 01:51:24.587606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.741 [2024-07-23 01:51:24.587785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.741 [2024-07-23 01:51:24.587810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:11.741 qpair failed and we were unable to recover it.
00:30:11.741 [2024-07-23 01:51:24.587950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.741 [2024-07-23 01:51:24.588129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.741 [2024-07-23 01:51:24.588154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:11.741 qpair failed and we were unable to recover it.
00:30:11.741 [2024-07-23 01:51:24.588295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.741 [2024-07-23 01:51:24.588484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.741 [2024-07-23 01:51:24.588508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:11.741 qpair failed and we were unable to recover it.
00:30:11.741 [2024-07-23 01:51:24.588669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.741 [2024-07-23 01:51:24.588830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.741 [2024-07-23 01:51:24.588871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:11.741 qpair failed and we were unable to recover it.
00:30:11.741 [2024-07-23 01:51:24.589054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.741 [2024-07-23 01:51:24.589226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.741 [2024-07-23 01:51:24.589258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:11.741 qpair failed and we were unable to recover it.
00:30:11.741 [2024-07-23 01:51:24.589462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.741 [2024-07-23 01:51:24.589649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.741 [2024-07-23 01:51:24.589675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:11.741 qpair failed and we were unable to recover it.
00:30:11.741 [2024-07-23 01:51:24.589816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.741 [2024-07-23 01:51:24.589978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.741 [2024-07-23 01:51:24.590002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:11.741 qpair failed and we were unable to recover it.
00:30:11.741 [2024-07-23 01:51:24.590192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.741 [2024-07-23 01:51:24.590403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.741 [2024-07-23 01:51:24.590430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:11.741 qpair failed and we were unable to recover it.
00:30:11.741 [2024-07-23 01:51:24.590585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.741 [2024-07-23 01:51:24.590731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.741 [2024-07-23 01:51:24.590757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:11.741 qpair failed and we were unable to recover it.
00:30:11.741 [2024-07-23 01:51:24.590922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.741 [2024-07-23 01:51:24.591090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.741 [2024-07-23 01:51:24.591116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:11.741 qpair failed and we were unable to recover it.
00:30:11.742 [2024-07-23 01:51:24.591338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.742 [2024-07-23 01:51:24.591491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.742 [2024-07-23 01:51:24.591518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:11.742 qpair failed and we were unable to recover it.
00:30:11.742 [2024-07-23 01:51:24.591709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.742 [2024-07-23 01:51:24.591879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.742 [2024-07-23 01:51:24.591904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:11.742 qpair failed and we were unable to recover it.
00:30:11.742 [2024-07-23 01:51:24.592073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.742 [2024-07-23 01:51:24.592240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.742 [2024-07-23 01:51:24.592267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:11.742 qpair failed and we were unable to recover it.
00:30:11.742 [2024-07-23 01:51:24.592478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.742 [2024-07-23 01:51:24.592687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.742 [2024-07-23 01:51:24.592713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:11.742 qpair failed and we were unable to recover it.
00:30:11.742 [2024-07-23 01:51:24.592898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.742 [2024-07-23 01:51:24.593079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.742 [2024-07-23 01:51:24.593104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:11.742 qpair failed and we were unable to recover it.
00:30:11.742 [2024-07-23 01:51:24.593297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.742 [2024-07-23 01:51:24.593482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.742 [2024-07-23 01:51:24.593509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:11.742 qpair failed and we were unable to recover it.
00:30:11.742 [2024-07-23 01:51:24.593714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.742 [2024-07-23 01:51:24.593883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.742 [2024-07-23 01:51:24.593909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:11.742 qpair failed and we were unable to recover it.
00:30:11.742 [2024-07-23 01:51:24.594081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.742 [2024-07-23 01:51:24.594245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.742 [2024-07-23 01:51:24.594269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:11.742 qpair failed and we were unable to recover it.
00:30:11.742 [2024-07-23 01:51:24.594434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.742 [2024-07-23 01:51:24.594645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.742 [2024-07-23 01:51:24.594673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:11.742 qpair failed and we were unable to recover it.
00:30:11.742 [2024-07-23 01:51:24.594848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.742 [2024-07-23 01:51:24.595131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.742 [2024-07-23 01:51:24.595184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:11.742 qpair failed and we were unable to recover it.
00:30:11.742 [2024-07-23 01:51:24.595366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.742 [2024-07-23 01:51:24.595545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.742 [2024-07-23 01:51:24.595572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:11.742 qpair failed and we were unable to recover it.
00:30:11.742 [2024-07-23 01:51:24.595738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.742 [2024-07-23 01:51:24.595907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.742 [2024-07-23 01:51:24.595953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:11.742 qpair failed and we were unable to recover it.
00:30:11.742 [2024-07-23 01:51:24.596133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.742 [2024-07-23 01:51:24.596315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.742 [2024-07-23 01:51:24.596340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:11.742 qpair failed and we were unable to recover it.
00:30:11.742 [2024-07-23 01:51:24.596520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.742 [2024-07-23 01:51:24.596705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.742 [2024-07-23 01:51:24.596734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:11.742 qpair failed and we were unable to recover it.
00:30:11.742 [2024-07-23 01:51:24.596914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.742 [2024-07-23 01:51:24.597080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.742 [2024-07-23 01:51:24.597106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:11.742 qpair failed and we were unable to recover it.
00:30:11.742 [2024-07-23 01:51:24.597276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.742 [2024-07-23 01:51:24.597483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.742 [2024-07-23 01:51:24.597508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:11.742 qpair failed and we were unable to recover it.
00:30:11.742 [2024-07-23 01:51:24.597668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.742 [2024-07-23 01:51:24.597856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.742 [2024-07-23 01:51:24.597880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:11.742 qpair failed and we were unable to recover it.
00:30:11.742 [2024-07-23 01:51:24.598041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.742 [2024-07-23 01:51:24.598177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.742 [2024-07-23 01:51:24.598201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.742 qpair failed and we were unable to recover it. 00:30:11.742 [2024-07-23 01:51:24.598329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.742 [2024-07-23 01:51:24.598554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.742 [2024-07-23 01:51:24.598579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.742 qpair failed and we were unable to recover it. 00:30:11.742 [2024-07-23 01:51:24.598749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.742 [2024-07-23 01:51:24.598934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.742 [2024-07-23 01:51:24.598998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.742 qpair failed and we were unable to recover it. 00:30:11.742 [2024-07-23 01:51:24.599170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.742 [2024-07-23 01:51:24.599334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.742 [2024-07-23 01:51:24.599358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.742 qpair failed and we were unable to recover it. 
00:30:11.742 [2024-07-23 01:51:24.599550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.742 [2024-07-23 01:51:24.599728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.742 [2024-07-23 01:51:24.599753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.742 qpair failed and we were unable to recover it. 00:30:11.742 [2024-07-23 01:51:24.599940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.742 [2024-07-23 01:51:24.600127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.742 [2024-07-23 01:51:24.600151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.742 qpair failed and we were unable to recover it. 00:30:11.742 [2024-07-23 01:51:24.600315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.742 [2024-07-23 01:51:24.600451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.742 [2024-07-23 01:51:24.600475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.742 qpair failed and we were unable to recover it. 00:30:11.742 [2024-07-23 01:51:24.600606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.742 [2024-07-23 01:51:24.600783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.742 [2024-07-23 01:51:24.600807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.742 qpair failed and we were unable to recover it. 
00:30:11.742 [2024-07-23 01:51:24.601019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.742 [2024-07-23 01:51:24.601230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.742 [2024-07-23 01:51:24.601257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.742 qpair failed and we were unable to recover it. 00:30:11.742 [2024-07-23 01:51:24.601412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.742 [2024-07-23 01:51:24.601548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.742 [2024-07-23 01:51:24.601572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.742 qpair failed and we were unable to recover it. 00:30:11.742 [2024-07-23 01:51:24.601740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.742 [2024-07-23 01:51:24.601899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.743 [2024-07-23 01:51:24.601924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.743 qpair failed and we were unable to recover it. 00:30:11.743 [2024-07-23 01:51:24.602086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.743 [2024-07-23 01:51:24.602223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.743 [2024-07-23 01:51:24.602247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.743 qpair failed and we were unable to recover it. 
00:30:11.743 [2024-07-23 01:51:24.602414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.743 [2024-07-23 01:51:24.602554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.743 [2024-07-23 01:51:24.602580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.743 qpair failed and we were unable to recover it. 00:30:11.743 [2024-07-23 01:51:24.602757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.743 [2024-07-23 01:51:24.602938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.743 [2024-07-23 01:51:24.602965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.743 qpair failed and we were unable to recover it. 00:30:11.743 [2024-07-23 01:51:24.603176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.743 [2024-07-23 01:51:24.603355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.743 [2024-07-23 01:51:24.603382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.743 qpair failed and we were unable to recover it. 00:30:11.743 [2024-07-23 01:51:24.603565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.743 [2024-07-23 01:51:24.603780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.743 [2024-07-23 01:51:24.603808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.743 qpair failed and we were unable to recover it. 
00:30:11.743 [2024-07-23 01:51:24.603992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.743 [2024-07-23 01:51:24.604202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.743 [2024-07-23 01:51:24.604252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.743 qpair failed and we were unable to recover it. 00:30:11.743 [2024-07-23 01:51:24.604462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.743 [2024-07-23 01:51:24.604716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.743 [2024-07-23 01:51:24.604744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.743 qpair failed and we were unable to recover it. 00:30:11.743 [2024-07-23 01:51:24.604908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.743 [2024-07-23 01:51:24.605067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.743 [2024-07-23 01:51:24.605108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.743 qpair failed and we were unable to recover it. 00:30:11.743 [2024-07-23 01:51:24.605287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.743 [2024-07-23 01:51:24.605458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.743 [2024-07-23 01:51:24.605485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.743 qpair failed and we were unable to recover it. 
00:30:11.743 [2024-07-23 01:51:24.605638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.743 [2024-07-23 01:51:24.605831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.743 [2024-07-23 01:51:24.605856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.743 qpair failed and we were unable to recover it. 00:30:11.743 [2024-07-23 01:51:24.605998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.743 [2024-07-23 01:51:24.606140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.743 [2024-07-23 01:51:24.606166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.743 qpair failed and we were unable to recover it. 00:30:11.743 [2024-07-23 01:51:24.606376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.743 [2024-07-23 01:51:24.606566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.743 [2024-07-23 01:51:24.606590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.743 qpair failed and we were unable to recover it. 00:30:11.743 [2024-07-23 01:51:24.606761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.743 [2024-07-23 01:51:24.606969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.743 [2024-07-23 01:51:24.606995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.743 qpair failed and we were unable to recover it. 
00:30:11.743 [2024-07-23 01:51:24.607181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.743 [2024-07-23 01:51:24.607342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.743 [2024-07-23 01:51:24.607381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.743 qpair failed and we were unable to recover it. 00:30:11.743 [2024-07-23 01:51:24.607540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.743 [2024-07-23 01:51:24.607729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.743 [2024-07-23 01:51:24.607758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.743 qpair failed and we were unable to recover it. 00:30:11.743 [2024-07-23 01:51:24.607914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.743 [2024-07-23 01:51:24.608132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.743 [2024-07-23 01:51:24.608156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.743 qpair failed and we were unable to recover it. 00:30:11.743 [2024-07-23 01:51:24.608282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.743 [2024-07-23 01:51:24.608426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.743 [2024-07-23 01:51:24.608452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.743 qpair failed and we were unable to recover it. 
00:30:11.743 [2024-07-23 01:51:24.608681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.743 [2024-07-23 01:51:24.608865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.743 [2024-07-23 01:51:24.608892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.743 qpair failed and we were unable to recover it. 00:30:11.743 [2024-07-23 01:51:24.609075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.743 [2024-07-23 01:51:24.609260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.743 [2024-07-23 01:51:24.609284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.743 qpair failed and we were unable to recover it. 00:30:11.743 [2024-07-23 01:51:24.609467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.743 [2024-07-23 01:51:24.609726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.743 [2024-07-23 01:51:24.609751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.743 qpair failed and we were unable to recover it. 00:30:11.743 [2024-07-23 01:51:24.609890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.743 [2024-07-23 01:51:24.610086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.743 [2024-07-23 01:51:24.610113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.743 qpair failed and we were unable to recover it. 
00:30:11.743 [2024-07-23 01:51:24.610265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.744 [2024-07-23 01:51:24.610409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.744 [2024-07-23 01:51:24.610435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.744 qpair failed and we were unable to recover it. 00:30:11.744 [2024-07-23 01:51:24.610623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.744 [2024-07-23 01:51:24.610808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.744 [2024-07-23 01:51:24.610835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.744 qpair failed and we were unable to recover it. 00:30:11.744 [2024-07-23 01:51:24.611008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.744 [2024-07-23 01:51:24.611170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.744 [2024-07-23 01:51:24.611195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.744 qpair failed and we were unable to recover it. 00:30:11.744 [2024-07-23 01:51:24.611364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.744 [2024-07-23 01:51:24.611562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.744 [2024-07-23 01:51:24.611589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.744 qpair failed and we were unable to recover it. 
00:30:11.744 [2024-07-23 01:51:24.611791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.744 [2024-07-23 01:51:24.611938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.744 [2024-07-23 01:51:24.611962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.744 qpair failed and we were unable to recover it. 00:30:11.744 [2024-07-23 01:51:24.612165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.744 [2024-07-23 01:51:24.612351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.744 [2024-07-23 01:51:24.612391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.744 qpair failed and we were unable to recover it. 00:30:11.744 [2024-07-23 01:51:24.612569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.744 [2024-07-23 01:51:24.612782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.744 [2024-07-23 01:51:24.612810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.744 qpair failed and we were unable to recover it. 00:30:11.744 [2024-07-23 01:51:24.612996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.744 [2024-07-23 01:51:24.613178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.744 [2024-07-23 01:51:24.613205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.744 qpair failed and we were unable to recover it. 
00:30:11.744 [2024-07-23 01:51:24.613385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.744 [2024-07-23 01:51:24.613587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.744 [2024-07-23 01:51:24.613628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.744 qpair failed and we were unable to recover it. 00:30:11.744 [2024-07-23 01:51:24.613787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.744 [2024-07-23 01:51:24.614021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.744 [2024-07-23 01:51:24.614084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.744 qpair failed and we were unable to recover it. 00:30:11.744 [2024-07-23 01:51:24.614275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.744 [2024-07-23 01:51:24.614432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.744 [2024-07-23 01:51:24.614456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.744 qpair failed and we were unable to recover it. 00:30:11.744 [2024-07-23 01:51:24.614632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.744 [2024-07-23 01:51:24.614856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.744 [2024-07-23 01:51:24.614884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.744 qpair failed and we were unable to recover it. 
00:30:11.744 [2024-07-23 01:51:24.615042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.744 [2024-07-23 01:51:24.615274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.744 [2024-07-23 01:51:24.615322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.744 qpair failed and we were unable to recover it. 00:30:11.744 [2024-07-23 01:51:24.615480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.744 [2024-07-23 01:51:24.615625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.744 [2024-07-23 01:51:24.615650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.744 qpair failed and we were unable to recover it. 00:30:11.744 [2024-07-23 01:51:24.615842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.744 [2024-07-23 01:51:24.616083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.744 [2024-07-23 01:51:24.616135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.744 qpair failed and we were unable to recover it. 00:30:11.744 [2024-07-23 01:51:24.616347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.744 [2024-07-23 01:51:24.616491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.744 [2024-07-23 01:51:24.616518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.744 qpair failed and we were unable to recover it. 
00:30:11.744 [2024-07-23 01:51:24.616682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.744 [2024-07-23 01:51:24.616825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.744 [2024-07-23 01:51:24.616865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.744 qpair failed and we were unable to recover it. 00:30:11.744 [2024-07-23 01:51:24.617027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.744 [2024-07-23 01:51:24.617224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.744 [2024-07-23 01:51:24.617288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.744 qpair failed and we were unable to recover it. 00:30:11.744 [2024-07-23 01:51:24.617472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.744 [2024-07-23 01:51:24.617654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.744 [2024-07-23 01:51:24.617682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.744 qpair failed and we were unable to recover it. 00:30:11.744 [2024-07-23 01:51:24.617863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.744 [2024-07-23 01:51:24.618067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.744 [2024-07-23 01:51:24.618094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.744 qpair failed and we were unable to recover it. 
00:30:11.744 [2024-07-23 01:51:24.618246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.744 [2024-07-23 01:51:24.618450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.744 [2024-07-23 01:51:24.618474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.744 qpair failed and we were unable to recover it. 00:30:11.744 [2024-07-23 01:51:24.618618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.744 [2024-07-23 01:51:24.618757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.744 [2024-07-23 01:51:24.618783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.744 qpair failed and we were unable to recover it. 00:30:11.744 [2024-07-23 01:51:24.618946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.744 [2024-07-23 01:51:24.619228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.744 [2024-07-23 01:51:24.619290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.744 qpair failed and we were unable to recover it. 00:30:11.744 [2024-07-23 01:51:24.619499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.744 [2024-07-23 01:51:24.619654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.744 [2024-07-23 01:51:24.619682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.744 qpair failed and we were unable to recover it. 
00:30:11.744 [2024-07-23 01:51:24.619839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.744 [2024-07-23 01:51:24.620149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.744 [2024-07-23 01:51:24.620211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.744 qpair failed and we were unable to recover it. 00:30:11.744 [2024-07-23 01:51:24.620383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.744 [2024-07-23 01:51:24.620589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.744 [2024-07-23 01:51:24.620624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.744 qpair failed and we were unable to recover it. 00:30:11.744 [2024-07-23 01:51:24.620803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.744 [2024-07-23 01:51:24.620989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.744 [2024-07-23 01:51:24.621016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.744 qpair failed and we were unable to recover it. 00:30:11.744 [2024-07-23 01:51:24.621217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.744 [2024-07-23 01:51:24.621353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.745 [2024-07-23 01:51:24.621378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.745 qpair failed and we were unable to recover it. 
00:30:11.745 [2024-07-23 01:51:24.621540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.745 [2024-07-23 01:51:24.621698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.745 [2024-07-23 01:51:24.621724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.745 qpair failed and we were unable to recover it. 00:30:11.745 [2024-07-23 01:51:24.621914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.745 [2024-07-23 01:51:24.622123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.745 [2024-07-23 01:51:24.622174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.745 qpair failed and we were unable to recover it. 00:30:11.745 [2024-07-23 01:51:24.622331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.745 [2024-07-23 01:51:24.622537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.745 [2024-07-23 01:51:24.622564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.745 qpair failed and we were unable to recover it. 00:30:11.745 [2024-07-23 01:51:24.622772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.745 [2024-07-23 01:51:24.623008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.745 [2024-07-23 01:51:24.623067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.745 qpair failed and we were unable to recover it. 
00:30:11.745 [2024-07-23 01:51:24.623281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.745 [2024-07-23 01:51:24.623441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.745 [2024-07-23 01:51:24.623465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.745 qpair failed and we were unable to recover it. 00:30:11.745 [2024-07-23 01:51:24.623687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.745 [2024-07-23 01:51:24.623932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.745 [2024-07-23 01:51:24.623985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.745 qpair failed and we were unable to recover it. 00:30:11.745 [2024-07-23 01:51:24.624171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.745 [2024-07-23 01:51:24.624312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.745 [2024-07-23 01:51:24.624352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.745 qpair failed and we were unable to recover it. 00:30:11.745 [2024-07-23 01:51:24.624543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.745 [2024-07-23 01:51:24.624740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.745 [2024-07-23 01:51:24.624765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.745 qpair failed and we were unable to recover it. 
00:30:11.748 [2024-07-23 01:51:24.658883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.748 [2024-07-23 01:51:24.659062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.748 [2024-07-23 01:51:24.659089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.748 qpair failed and we were unable to recover it. 00:30:11.748 [2024-07-23 01:51:24.659298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.748 [2024-07-23 01:51:24.659438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.748 [2024-07-23 01:51:24.659463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.748 qpair failed and we were unable to recover it. 00:30:11.748 [2024-07-23 01:51:24.659599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.748 [2024-07-23 01:51:24.659828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.748 [2024-07-23 01:51:24.659856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.748 qpair failed and we were unable to recover it. 00:30:11.748 [2024-07-23 01:51:24.660045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.748 [2024-07-23 01:51:24.660234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.748 [2024-07-23 01:51:24.660261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.748 qpair failed and we were unable to recover it. 
00:30:11.748 [2024-07-23 01:51:24.660472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.748 [2024-07-23 01:51:24.660619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.748 [2024-07-23 01:51:24.660644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.748 qpair failed and we were unable to recover it. 00:30:11.748 [2024-07-23 01:51:24.660811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.748 [2024-07-23 01:51:24.661036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.748 [2024-07-23 01:51:24.661063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.748 qpair failed and we were unable to recover it. 00:30:11.748 [2024-07-23 01:51:24.661277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.748 [2024-07-23 01:51:24.661445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.748 [2024-07-23 01:51:24.661471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.748 qpair failed and we were unable to recover it. 00:30:11.748 [2024-07-23 01:51:24.661700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.748 [2024-07-23 01:51:24.661838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.748 [2024-07-23 01:51:24.661862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.748 qpair failed and we were unable to recover it. 
00:30:11.748 [2024-07-23 01:51:24.662048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.748 [2024-07-23 01:51:24.662229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.748 [2024-07-23 01:51:24.662256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.748 qpair failed and we were unable to recover it. 00:30:11.748 [2024-07-23 01:51:24.662473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.748 [2024-07-23 01:51:24.662638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.748 [2024-07-23 01:51:24.662662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.748 qpair failed and we were unable to recover it. 00:30:11.748 [2024-07-23 01:51:24.662797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.748 [2024-07-23 01:51:24.663002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.748 [2024-07-23 01:51:24.663061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.748 qpair failed and we were unable to recover it. 00:30:11.748 [2024-07-23 01:51:24.663265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.748 [2024-07-23 01:51:24.663471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.748 [2024-07-23 01:51:24.663497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.748 qpair failed and we were unable to recover it. 
00:30:11.748 [2024-07-23 01:51:24.663678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.748 [2024-07-23 01:51:24.663834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.748 [2024-07-23 01:51:24.663861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.748 qpair failed and we were unable to recover it. 00:30:11.749 [2024-07-23 01:51:24.664043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.749 [2024-07-23 01:51:24.664237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.749 [2024-07-23 01:51:24.664296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.749 qpair failed and we were unable to recover it. 00:30:11.749 [2024-07-23 01:51:24.664486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.749 [2024-07-23 01:51:24.664664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.749 [2024-07-23 01:51:24.664692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.749 qpair failed and we were unable to recover it. 00:30:11.749 [2024-07-23 01:51:24.664900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.749 [2024-07-23 01:51:24.665113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.749 [2024-07-23 01:51:24.665137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.749 qpair failed and we were unable to recover it. 
00:30:11.749 [2024-07-23 01:51:24.665272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.749 [2024-07-23 01:51:24.665430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.749 [2024-07-23 01:51:24.665455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.749 qpair failed and we were unable to recover it. 00:30:11.749 [2024-07-23 01:51:24.665665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.749 [2024-07-23 01:51:24.665806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.749 [2024-07-23 01:51:24.665832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.749 qpair failed and we were unable to recover it. 00:30:11.749 [2024-07-23 01:51:24.665981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.749 [2024-07-23 01:51:24.666119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.749 [2024-07-23 01:51:24.666146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.749 qpair failed and we were unable to recover it. 00:30:11.749 [2024-07-23 01:51:24.666335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.749 [2024-07-23 01:51:24.666515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.749 [2024-07-23 01:51:24.666542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.749 qpair failed and we were unable to recover it. 
00:30:11.749 [2024-07-23 01:51:24.666772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.749 [2024-07-23 01:51:24.666970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.749 [2024-07-23 01:51:24.666994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.749 qpair failed and we were unable to recover it. 00:30:11.749 [2024-07-23 01:51:24.667195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.749 [2024-07-23 01:51:24.667329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.749 [2024-07-23 01:51:24.667353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.749 qpair failed and we were unable to recover it. 00:30:11.749 [2024-07-23 01:51:24.667517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.749 [2024-07-23 01:51:24.667677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.749 [2024-07-23 01:51:24.667703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.749 qpair failed and we were unable to recover it. 00:30:11.749 [2024-07-23 01:51:24.667910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.749 [2024-07-23 01:51:24.668066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.749 [2024-07-23 01:51:24.668091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.749 qpair failed and we were unable to recover it. 
00:30:11.749 [2024-07-23 01:51:24.668255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.749 [2024-07-23 01:51:24.668414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.749 [2024-07-23 01:51:24.668456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.749 qpair failed and we were unable to recover it. 00:30:11.749 [2024-07-23 01:51:24.668640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.749 [2024-07-23 01:51:24.668795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.749 [2024-07-23 01:51:24.668820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.749 qpair failed and we were unable to recover it. 00:30:11.749 [2024-07-23 01:51:24.668987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.749 [2024-07-23 01:51:24.669210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.749 [2024-07-23 01:51:24.669269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.749 qpair failed and we were unable to recover it. 00:30:11.749 [2024-07-23 01:51:24.669495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.749 [2024-07-23 01:51:24.669679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.749 [2024-07-23 01:51:24.669719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.749 qpair failed and we were unable to recover it. 
00:30:11.749 [2024-07-23 01:51:24.669884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.749 [2024-07-23 01:51:24.670120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.749 [2024-07-23 01:51:24.670181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.749 qpair failed and we were unable to recover it. 00:30:11.749 [2024-07-23 01:51:24.670385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.749 [2024-07-23 01:51:24.670566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.749 [2024-07-23 01:51:24.670595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.749 qpair failed and we were unable to recover it. 00:30:11.749 [2024-07-23 01:51:24.670786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.749 [2024-07-23 01:51:24.670969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.749 [2024-07-23 01:51:24.670996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.749 qpair failed and we were unable to recover it. 00:30:11.749 [2024-07-23 01:51:24.671159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.749 [2024-07-23 01:51:24.671346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.749 [2024-07-23 01:51:24.671371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.749 qpair failed and we were unable to recover it. 
00:30:11.749 [2024-07-23 01:51:24.671511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.749 [2024-07-23 01:51:24.671703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.749 [2024-07-23 01:51:24.671731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.749 qpair failed and we were unable to recover it. 00:30:11.749 [2024-07-23 01:51:24.671909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.749 [2024-07-23 01:51:24.672091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.749 [2024-07-23 01:51:24.672120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.749 qpair failed and we were unable to recover it. 00:30:11.749 [2024-07-23 01:51:24.672303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.749 [2024-07-23 01:51:24.672512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.749 [2024-07-23 01:51:24.672538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.749 qpair failed and we were unable to recover it. 00:30:11.749 [2024-07-23 01:51:24.672737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.749 [2024-07-23 01:51:24.672962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.749 [2024-07-23 01:51:24.673016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.749 qpair failed and we were unable to recover it. 
00:30:11.749 [2024-07-23 01:51:24.673223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.749 [2024-07-23 01:51:24.673393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.749 [2024-07-23 01:51:24.673419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.749 qpair failed and we were unable to recover it. 00:30:11.749 [2024-07-23 01:51:24.673646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.749 [2024-07-23 01:51:24.673861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.749 [2024-07-23 01:51:24.673888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.749 qpair failed and we were unable to recover it. 00:30:11.749 [2024-07-23 01:51:24.674069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.749 [2024-07-23 01:51:24.674242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.749 [2024-07-23 01:51:24.674269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.749 qpair failed and we were unable to recover it. 00:30:11.749 [2024-07-23 01:51:24.674447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.749 [2024-07-23 01:51:24.674643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.749 [2024-07-23 01:51:24.674668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.749 qpair failed and we were unable to recover it. 
00:30:11.749 [2024-07-23 01:51:24.674811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.750 [2024-07-23 01:51:24.674978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.750 [2024-07-23 01:51:24.675002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.750 qpair failed and we were unable to recover it. 00:30:11.750 [2024-07-23 01:51:24.675156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.750 [2024-07-23 01:51:24.675318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.750 [2024-07-23 01:51:24.675345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.750 qpair failed and we were unable to recover it. 00:30:11.750 [2024-07-23 01:51:24.675524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.750 [2024-07-23 01:51:24.675705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.750 [2024-07-23 01:51:24.675730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.750 qpair failed and we were unable to recover it. 00:30:11.750 [2024-07-23 01:51:24.675868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.750 [2024-07-23 01:51:24.676076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.750 [2024-07-23 01:51:24.676103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.750 qpair failed and we were unable to recover it. 
00:30:11.750 [2024-07-23 01:51:24.676291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.750 [2024-07-23 01:51:24.676429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.750 [2024-07-23 01:51:24.676454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.750 qpair failed and we were unable to recover it. 00:30:11.750 [2024-07-23 01:51:24.676643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.750 [2024-07-23 01:51:24.676887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.750 [2024-07-23 01:51:24.676938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.750 qpair failed and we were unable to recover it. 00:30:11.750 [2024-07-23 01:51:24.677131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.750 [2024-07-23 01:51:24.677300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.750 [2024-07-23 01:51:24.677324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.750 qpair failed and we were unable to recover it. 00:30:11.750 [2024-07-23 01:51:24.677541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.750 [2024-07-23 01:51:24.677760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.750 [2024-07-23 01:51:24.677788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.750 qpair failed and we were unable to recover it. 
00:30:11.750 [2024-07-23 01:51:24.678005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.750 [2024-07-23 01:51:24.678233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.750 [2024-07-23 01:51:24.678257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.750 qpair failed and we were unable to recover it. 00:30:11.750 [2024-07-23 01:51:24.678446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.750 [2024-07-23 01:51:24.678683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.750 [2024-07-23 01:51:24.678708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.750 qpair failed and we were unable to recover it. 00:30:11.750 [2024-07-23 01:51:24.678871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.750 [2024-07-23 01:51:24.679182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.750 [2024-07-23 01:51:24.679238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.750 qpair failed and we were unable to recover it. 00:30:11.750 [2024-07-23 01:51:24.679416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.750 [2024-07-23 01:51:24.679596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.750 [2024-07-23 01:51:24.679630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.750 qpair failed and we were unable to recover it. 
00:30:11.750 [2024-07-23 01:51:24.679811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.750 [2024-07-23 01:51:24.680057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.750 [2024-07-23 01:51:24.680103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.750 qpair failed and we were unable to recover it. 00:30:11.750 [2024-07-23 01:51:24.680267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.750 [2024-07-23 01:51:24.680465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.750 [2024-07-23 01:51:24.680524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.750 qpair failed and we were unable to recover it. 00:30:11.750 [2024-07-23 01:51:24.680710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.750 [2024-07-23 01:51:24.680924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.750 [2024-07-23 01:51:24.680987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.750 qpair failed and we were unable to recover it. 00:30:11.750 [2024-07-23 01:51:24.681178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.750 [2024-07-23 01:51:24.681319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.750 [2024-07-23 01:51:24.681344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.750 qpair failed and we were unable to recover it. 
00:30:11.750 [2024-07-23 01:51:24.681526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.750 [2024-07-23 01:51:24.681685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.750 [2024-07-23 01:51:24.681713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:11.750 qpair failed and we were unable to recover it.
[... identical record sequence (posix_sock_create connect() failed, errno = 111; nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeated continuously from 01:51:24.681 through 01:51:24.716 ...]
00:30:11.753 [2024-07-23 01:51:24.716378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.753 [2024-07-23 01:51:24.716558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.753 [2024-07-23 01:51:24.716587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.753 qpair failed and we were unable to recover it. 00:30:11.753 [2024-07-23 01:51:24.716811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.753 [2024-07-23 01:51:24.716971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.753 [2024-07-23 01:51:24.716998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.753 qpair failed and we were unable to recover it. 00:30:11.754 [2024-07-23 01:51:24.717175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.754 [2024-07-23 01:51:24.717445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.754 [2024-07-23 01:51:24.717470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.754 qpair failed and we were unable to recover it. 00:30:11.754 [2024-07-23 01:51:24.717639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.754 [2024-07-23 01:51:24.717824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.754 [2024-07-23 01:51:24.717851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.754 qpair failed and we were unable to recover it. 
00:30:11.754 [2024-07-23 01:51:24.718067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.754 [2024-07-23 01:51:24.718229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.754 [2024-07-23 01:51:24.718270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.754 qpair failed and we were unable to recover it. 00:30:11.754 [2024-07-23 01:51:24.718477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.754 [2024-07-23 01:51:24.718644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.754 [2024-07-23 01:51:24.718687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.754 qpair failed and we were unable to recover it. 00:30:11.754 [2024-07-23 01:51:24.718877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.754 [2024-07-23 01:51:24.719020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.754 [2024-07-23 01:51:24.719045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.754 qpair failed and we were unable to recover it. 00:30:11.754 [2024-07-23 01:51:24.719209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.754 [2024-07-23 01:51:24.719392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.754 [2024-07-23 01:51:24.719420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.754 qpair failed and we were unable to recover it. 
00:30:11.754 [2024-07-23 01:51:24.719606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.754 [2024-07-23 01:51:24.719804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.754 [2024-07-23 01:51:24.719830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.754 qpair failed and we were unable to recover it. 00:30:11.754 [2024-07-23 01:51:24.719972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.754 [2024-07-23 01:51:24.720134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.754 [2024-07-23 01:51:24.720174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.754 qpair failed and we were unable to recover it. 00:30:11.754 [2024-07-23 01:51:24.720353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.754 [2024-07-23 01:51:24.720514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.754 [2024-07-23 01:51:24.720539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.754 qpair failed and we were unable to recover it. 00:30:11.754 [2024-07-23 01:51:24.720722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.754 [2024-07-23 01:51:24.720919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.754 [2024-07-23 01:51:24.720944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.754 qpair failed and we were unable to recover it. 
00:30:11.754 [2024-07-23 01:51:24.721082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.754 [2024-07-23 01:51:24.721257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.754 [2024-07-23 01:51:24.721281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.754 qpair failed and we were unable to recover it. 00:30:11.754 [2024-07-23 01:51:24.721445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.754 [2024-07-23 01:51:24.721584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.754 [2024-07-23 01:51:24.721633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.754 qpair failed and we were unable to recover it. 00:30:11.754 [2024-07-23 01:51:24.721800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.754 [2024-07-23 01:51:24.722005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.754 [2024-07-23 01:51:24.722032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.754 qpair failed and we were unable to recover it. 00:30:11.754 [2024-07-23 01:51:24.722236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.754 [2024-07-23 01:51:24.722452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.754 [2024-07-23 01:51:24.722500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.754 qpair failed and we were unable to recover it. 
00:30:11.754 [2024-07-23 01:51:24.722722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.754 [2024-07-23 01:51:24.722886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.754 [2024-07-23 01:51:24.722915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.754 qpair failed and we were unable to recover it. 00:30:11.754 [2024-07-23 01:51:24.723080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.754 [2024-07-23 01:51:24.723223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.754 [2024-07-23 01:51:24.723247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.754 qpair failed and we were unable to recover it. 00:30:11.754 [2024-07-23 01:51:24.723440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.754 [2024-07-23 01:51:24.723636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.754 [2024-07-23 01:51:24.723661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.754 qpair failed and we were unable to recover it. 00:30:11.754 [2024-07-23 01:51:24.723842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.754 [2024-07-23 01:51:24.724007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.754 [2024-07-23 01:51:24.724032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.754 qpair failed and we were unable to recover it. 
00:30:11.754 [2024-07-23 01:51:24.724209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.754 [2024-07-23 01:51:24.724398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.754 [2024-07-23 01:51:24.724424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.754 qpair failed and we were unable to recover it. 00:30:11.754 [2024-07-23 01:51:24.724605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.754 [2024-07-23 01:51:24.724767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.754 [2024-07-23 01:51:24.724795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.754 qpair failed and we were unable to recover it. 00:30:11.754 [2024-07-23 01:51:24.724955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.754 [2024-07-23 01:51:24.725124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.754 [2024-07-23 01:51:24.725164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.754 qpair failed and we were unable to recover it. 00:30:11.754 [2024-07-23 01:51:24.725352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.754 [2024-07-23 01:51:24.725543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.754 [2024-07-23 01:51:24.725567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.754 qpair failed and we were unable to recover it. 
00:30:11.754 [2024-07-23 01:51:24.725784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.754 [2024-07-23 01:51:24.725932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.754 [2024-07-23 01:51:24.725956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.754 qpair failed and we were unable to recover it. 00:30:11.754 [2024-07-23 01:51:24.726151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.754 [2024-07-23 01:51:24.726333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.754 [2024-07-23 01:51:24.726360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.754 qpair failed and we were unable to recover it. 00:30:11.754 [2024-07-23 01:51:24.726540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.754 [2024-07-23 01:51:24.726760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.754 [2024-07-23 01:51:24.726786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.754 qpair failed and we were unable to recover it. 00:30:11.754 [2024-07-23 01:51:24.727002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.754 [2024-07-23 01:51:24.727280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.754 [2024-07-23 01:51:24.727335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.754 qpair failed and we were unable to recover it. 
00:30:11.754 [2024-07-23 01:51:24.727527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.754 [2024-07-23 01:51:24.727723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.754 [2024-07-23 01:51:24.727751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.754 qpair failed and we were unable to recover it. 00:30:11.754 [2024-07-23 01:51:24.727937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.755 [2024-07-23 01:51:24.728114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.755 [2024-07-23 01:51:24.728173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.755 qpair failed and we were unable to recover it. 00:30:11.755 [2024-07-23 01:51:24.728362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.755 [2024-07-23 01:51:24.728571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.755 [2024-07-23 01:51:24.728598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.755 qpair failed and we were unable to recover it. 00:30:11.755 [2024-07-23 01:51:24.728789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.755 [2024-07-23 01:51:24.728951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.755 [2024-07-23 01:51:24.728994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.755 qpair failed and we were unable to recover it. 
00:30:11.755 [2024-07-23 01:51:24.729208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.755 [2024-07-23 01:51:24.729369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.755 [2024-07-23 01:51:24.729411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.755 qpair failed and we were unable to recover it. 00:30:11.755 [2024-07-23 01:51:24.729632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.755 [2024-07-23 01:51:24.729823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.755 [2024-07-23 01:51:24.729850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.755 qpair failed and we were unable to recover it. 00:30:11.755 [2024-07-23 01:51:24.730056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.755 [2024-07-23 01:51:24.730266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.755 [2024-07-23 01:51:24.730314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.755 qpair failed and we were unable to recover it. 00:30:11.755 [2024-07-23 01:51:24.730465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.755 [2024-07-23 01:51:24.730639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.755 [2024-07-23 01:51:24.730667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.755 qpair failed and we were unable to recover it. 
00:30:11.755 [2024-07-23 01:51:24.730852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.755 [2024-07-23 01:51:24.731073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.755 [2024-07-23 01:51:24.731124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.755 qpair failed and we were unable to recover it. 00:30:11.755 [2024-07-23 01:51:24.731338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.755 [2024-07-23 01:51:24.731504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.755 [2024-07-23 01:51:24.731528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.755 qpair failed and we were unable to recover it. 00:30:11.755 [2024-07-23 01:51:24.731720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.755 [2024-07-23 01:51:24.731906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.755 [2024-07-23 01:51:24.731933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.755 qpair failed and we were unable to recover it. 00:30:11.755 [2024-07-23 01:51:24.732102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.755 [2024-07-23 01:51:24.732253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.755 [2024-07-23 01:51:24.732281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.755 qpair failed and we were unable to recover it. 
00:30:11.755 [2024-07-23 01:51:24.732494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.755 [2024-07-23 01:51:24.732700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.755 [2024-07-23 01:51:24.732765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.755 qpair failed and we were unable to recover it. 00:30:11.755 [2024-07-23 01:51:24.732957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.755 [2024-07-23 01:51:24.733151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.755 [2024-07-23 01:51:24.733198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.755 qpair failed and we were unable to recover it. 00:30:11.755 [2024-07-23 01:51:24.733392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.755 [2024-07-23 01:51:24.733535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.755 [2024-07-23 01:51:24.733561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.755 qpair failed and we were unable to recover it. 00:30:11.755 [2024-07-23 01:51:24.733723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.755 [2024-07-23 01:51:24.733921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.755 [2024-07-23 01:51:24.733948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.755 qpair failed and we were unable to recover it. 
00:30:11.755 [2024-07-23 01:51:24.734159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.755 [2024-07-23 01:51:24.734347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.755 [2024-07-23 01:51:24.734373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.755 qpair failed and we were unable to recover it. 00:30:11.755 [2024-07-23 01:51:24.734534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.755 [2024-07-23 01:51:24.734697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.755 [2024-07-23 01:51:24.734726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.755 qpair failed and we were unable to recover it. 00:30:11.755 [2024-07-23 01:51:24.734915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.755 [2024-07-23 01:51:24.735187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.755 [2024-07-23 01:51:24.735237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.755 qpair failed and we were unable to recover it. 00:30:11.755 [2024-07-23 01:51:24.735421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.755 [2024-07-23 01:51:24.735608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.755 [2024-07-23 01:51:24.735643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.755 qpair failed and we were unable to recover it. 
00:30:11.755 [2024-07-23 01:51:24.735832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.755 [2024-07-23 01:51:24.735983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.755 [2024-07-23 01:51:24.736009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.755 qpair failed and we were unable to recover it. 00:30:11.755 [2024-07-23 01:51:24.736171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.755 [2024-07-23 01:51:24.736332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.755 [2024-07-23 01:51:24.736374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.755 qpair failed and we were unable to recover it. 00:30:11.755 [2024-07-23 01:51:24.736563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.755 [2024-07-23 01:51:24.736752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.755 [2024-07-23 01:51:24.736780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.755 qpair failed and we were unable to recover it. 00:30:11.755 [2024-07-23 01:51:24.736938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.755 [2024-07-23 01:51:24.737116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.755 [2024-07-23 01:51:24.737144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.755 qpair failed and we were unable to recover it. 
00:30:11.755 [2024-07-23 01:51:24.737330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.755 [2024-07-23 01:51:24.737511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.755 [2024-07-23 01:51:24.737539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.755 qpair failed and we were unable to recover it. 00:30:11.755 [2024-07-23 01:51:24.737726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.755 [2024-07-23 01:51:24.737929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.755 [2024-07-23 01:51:24.737956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:11.755 qpair failed and we were unable to recover it. 00:30:11.755 [2024-07-23 01:51:24.738160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.755 [2024-07-23 01:51:24.738461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.755 [2024-07-23 01:51:24.738521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:11.755 qpair failed and we were unable to recover it. 00:30:11.755 [2024-07-23 01:51:24.738749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.755 [2024-07-23 01:51:24.738972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.755 [2024-07-23 01:51:24.739000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:11.755 qpair failed and we were unable to recover it. 
00:30:11.755 [2024-07-23 01:51:24.739159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.755 [2024-07-23 01:51:24.739350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.756 [2024-07-23 01:51:24.739375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:11.756 qpair failed and we were unable to recover it. 00:30:11.756 [2024-07-23 01:51:24.739592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.756 [2024-07-23 01:51:24.739814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.756 [2024-07-23 01:51:24.739839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:11.756 qpair failed and we were unable to recover it. 00:30:11.756 [2024-07-23 01:51:24.739983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.756 [2024-07-23 01:51:24.740122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.756 [2024-07-23 01:51:24.740146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:11.756 qpair failed and we were unable to recover it. 00:30:11.756 [2024-07-23 01:51:24.740314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.756 [2024-07-23 01:51:24.740528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.756 [2024-07-23 01:51:24.740552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:11.756 qpair failed and we were unable to recover it. 
00:30:11.756 [... the same connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock / "qpair failed and we were unable to recover it." sequence repeats for the remaining retries, identical except for timestamps (01:51:24.740723 through 01:51:24.775171) ...]
00:30:11.759 [2024-07-23 01:51:24.775384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.759 [2024-07-23 01:51:24.775563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.759 [2024-07-23 01:51:24.775590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:11.759 qpair failed and we were unable to recover it. 00:30:11.759 [2024-07-23 01:51:24.775778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.759 [2024-07-23 01:51:24.775955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.759 [2024-07-23 01:51:24.775981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:11.759 qpair failed and we were unable to recover it. 00:30:11.759 [2024-07-23 01:51:24.776215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.759 [2024-07-23 01:51:24.776428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.759 [2024-07-23 01:51:24.776452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:11.759 qpair failed and we were unable to recover it. 00:30:11.759 [2024-07-23 01:51:24.776620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.759 [2024-07-23 01:51:24.776756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.759 [2024-07-23 01:51:24.776780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:11.759 qpair failed and we were unable to recover it. 
00:30:11.759 [2024-07-23 01:51:24.776993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.759 [2024-07-23 01:51:24.777275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.759 [2024-07-23 01:51:24.777320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:11.759 qpair failed and we were unable to recover it. 00:30:11.759 [2024-07-23 01:51:24.777515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.759 [2024-07-23 01:51:24.777698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.759 [2024-07-23 01:51:24.777726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:11.759 qpair failed and we were unable to recover it. 00:30:11.759 [2024-07-23 01:51:24.778024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.759 [2024-07-23 01:51:24.778328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.759 [2024-07-23 01:51:24.778379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:11.759 qpair failed and we were unable to recover it. 00:30:11.759 [2024-07-23 01:51:24.778538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.759 [2024-07-23 01:51:24.778686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.759 [2024-07-23 01:51:24.778714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:11.759 qpair failed and we were unable to recover it. 
00:30:11.759 [2024-07-23 01:51:24.778921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.759 [2024-07-23 01:51:24.779173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.759 [2024-07-23 01:51:24.779227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:11.759 qpair failed and we were unable to recover it. 00:30:11.759 [2024-07-23 01:51:24.779394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.759 [2024-07-23 01:51:24.779583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.759 [2024-07-23 01:51:24.779629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:11.759 qpair failed and we were unable to recover it. 00:30:11.759 [2024-07-23 01:51:24.779832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.759 [2024-07-23 01:51:24.780019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.759 [2024-07-23 01:51:24.780068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:11.759 qpair failed and we were unable to recover it. 00:30:11.759 [2024-07-23 01:51:24.780261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.759 [2024-07-23 01:51:24.780440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.759 [2024-07-23 01:51:24.780467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:11.759 qpair failed and we were unable to recover it. 
00:30:11.759 [2024-07-23 01:51:24.780648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.759 [2024-07-23 01:51:24.780802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.759 [2024-07-23 01:51:24.780829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:11.759 qpair failed and we were unable to recover it. 00:30:11.759 [2024-07-23 01:51:24.781000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.759 [2024-07-23 01:51:24.781164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.759 [2024-07-23 01:51:24.781187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:11.759 qpair failed and we were unable to recover it. 00:30:11.759 [2024-07-23 01:51:24.781351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.759 [2024-07-23 01:51:24.781522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.759 [2024-07-23 01:51:24.781549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:11.759 qpair failed and we were unable to recover it. 00:30:11.759 [2024-07-23 01:51:24.781756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.759 [2024-07-23 01:51:24.781910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.759 [2024-07-23 01:51:24.781993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:11.759 qpair failed and we were unable to recover it. 
00:30:11.759 [2024-07-23 01:51:24.782202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.759 [2024-07-23 01:51:24.782454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.759 [2024-07-23 01:51:24.782513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:11.759 qpair failed and we were unable to recover it. 00:30:11.759 [2024-07-23 01:51:24.782698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.759 [2024-07-23 01:51:24.782841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.759 [2024-07-23 01:51:24.782865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:11.759 qpair failed and we were unable to recover it. 00:30:11.759 [2024-07-23 01:51:24.783138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.759 [2024-07-23 01:51:24.783505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.759 [2024-07-23 01:51:24.783553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:11.759 qpair failed and we were unable to recover it. 00:30:11.759 [2024-07-23 01:51:24.783721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.759 [2024-07-23 01:51:24.783906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.760 [2024-07-23 01:51:24.783933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:11.760 qpair failed and we were unable to recover it. 
00:30:11.760 [2024-07-23 01:51:24.784121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.760 [2024-07-23 01:51:24.784360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.760 [2024-07-23 01:51:24.784386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:11.760 qpair failed and we were unable to recover it. 00:30:11.760 [2024-07-23 01:51:24.784570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.760 [2024-07-23 01:51:24.784738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.760 [2024-07-23 01:51:24.784763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:11.760 qpair failed and we were unable to recover it. 00:30:11.760 [2024-07-23 01:51:24.784950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.760 [2024-07-23 01:51:24.785166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.760 [2024-07-23 01:51:24.785197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:11.760 qpair failed and we were unable to recover it. 00:30:11.760 [2024-07-23 01:51:24.785413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.760 [2024-07-23 01:51:24.785590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.760 [2024-07-23 01:51:24.785621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:11.760 qpair failed and we were unable to recover it. 
00:30:11.760 [2024-07-23 01:51:24.785781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.760 [2024-07-23 01:51:24.785976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.760 [2024-07-23 01:51:24.786003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:11.760 qpair failed and we were unable to recover it. 00:30:11.760 [2024-07-23 01:51:24.786223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.760 [2024-07-23 01:51:24.786383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.760 [2024-07-23 01:51:24.786406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:11.760 qpair failed and we were unable to recover it. 00:30:11.760 [2024-07-23 01:51:24.786572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.760 [2024-07-23 01:51:24.786753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.760 [2024-07-23 01:51:24.786778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:11.760 qpair failed and we were unable to recover it. 00:30:11.760 [2024-07-23 01:51:24.786907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.760 [2024-07-23 01:51:24.787075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.760 [2024-07-23 01:51:24.787099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:11.760 qpair failed and we were unable to recover it. 
00:30:11.760 [2024-07-23 01:51:24.787263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.760 [2024-07-23 01:51:24.787420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.760 [2024-07-23 01:51:24.787448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:11.760 qpair failed and we were unable to recover it. 00:30:11.760 [2024-07-23 01:51:24.787660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.760 [2024-07-23 01:51:24.787837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.760 [2024-07-23 01:51:24.787863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:11.760 qpair failed and we were unable to recover it. 00:30:11.760 [2024-07-23 01:51:24.788042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.760 [2024-07-23 01:51:24.788247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.760 [2024-07-23 01:51:24.788274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:11.760 qpair failed and we were unable to recover it. 00:30:11.760 [2024-07-23 01:51:24.788455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.760 [2024-07-23 01:51:24.788634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.760 [2024-07-23 01:51:24.788661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:11.760 qpair failed and we were unable to recover it. 
00:30:11.760 [2024-07-23 01:51:24.788812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.760 [2024-07-23 01:51:24.789005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.760 [2024-07-23 01:51:24.789033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:11.760 qpair failed and we were unable to recover it. 00:30:11.760 [2024-07-23 01:51:24.789198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.760 [2024-07-23 01:51:24.789386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.760 [2024-07-23 01:51:24.789410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:11.760 qpair failed and we were unable to recover it. 00:30:11.760 [2024-07-23 01:51:24.789623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.760 [2024-07-23 01:51:24.789807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.760 [2024-07-23 01:51:24.789832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:11.760 qpair failed and we were unable to recover it. 00:30:11.760 [2024-07-23 01:51:24.789998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.760 [2024-07-23 01:51:24.790154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.760 [2024-07-23 01:51:24.790181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:11.760 qpair failed and we were unable to recover it. 
00:30:11.760 [2024-07-23 01:51:24.790365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.760 [2024-07-23 01:51:24.790533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.760 [2024-07-23 01:51:24.790556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:11.760 qpair failed and we were unable to recover it. 00:30:11.760 [2024-07-23 01:51:24.790709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.760 [2024-07-23 01:51:24.790876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.760 [2024-07-23 01:51:24.790901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:11.760 qpair failed and we were unable to recover it. 00:30:11.760 [2024-07-23 01:51:24.791086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.760 [2024-07-23 01:51:24.791328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.760 [2024-07-23 01:51:24.791352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:11.760 qpair failed and we were unable to recover it. 00:30:11.760 [2024-07-23 01:51:24.791559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.760 [2024-07-23 01:51:24.791711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.760 [2024-07-23 01:51:24.791738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:11.760 qpair failed and we were unable to recover it. 
00:30:11.760 [2024-07-23 01:51:24.791925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.760 [2024-07-23 01:51:24.792140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.760 [2024-07-23 01:51:24.792164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:11.760 qpair failed and we were unable to recover it. 00:30:11.760 [2024-07-23 01:51:24.792326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.760 [2024-07-23 01:51:24.792463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.760 [2024-07-23 01:51:24.792503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:11.760 qpair failed and we were unable to recover it. 00:30:11.760 [2024-07-23 01:51:24.792744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.760 [2024-07-23 01:51:24.792928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.760 [2024-07-23 01:51:24.792952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:11.760 qpair failed and we were unable to recover it. 00:30:11.760 [2024-07-23 01:51:24.793147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.760 [2024-07-23 01:51:24.793322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.760 [2024-07-23 01:51:24.793348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:11.760 qpair failed and we were unable to recover it. 
00:30:11.760 [2024-07-23 01:51:24.793532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.760 [2024-07-23 01:51:24.793710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.760 [2024-07-23 01:51:24.793736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:11.760 qpair failed and we were unable to recover it. 00:30:11.760 [2024-07-23 01:51:24.793912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.760 [2024-07-23 01:51:24.794075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.760 [2024-07-23 01:51:24.794099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:11.760 qpair failed and we were unable to recover it. 00:30:11.760 [2024-07-23 01:51:24.794229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.760 [2024-07-23 01:51:24.794419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.760 [2024-07-23 01:51:24.794442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:11.760 qpair failed and we were unable to recover it. 00:30:11.760 [2024-07-23 01:51:24.794646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.761 [2024-07-23 01:51:24.794844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.761 [2024-07-23 01:51:24.794871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:11.761 qpair failed and we were unable to recover it. 
00:30:11.761 [2024-07-23 01:51:24.795063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.761 [2024-07-23 01:51:24.795231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.761 [2024-07-23 01:51:24.795257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:11.761 qpair failed and we were unable to recover it. 00:30:11.761 [2024-07-23 01:51:24.795453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.761 [2024-07-23 01:51:24.795643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.761 [2024-07-23 01:51:24.795684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:11.761 qpair failed and we were unable to recover it. 00:30:11.761 [2024-07-23 01:51:24.795845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.761 [2024-07-23 01:51:24.796059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.761 [2024-07-23 01:51:24.796110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:11.761 qpair failed and we were unable to recover it. 00:30:11.761 [2024-07-23 01:51:24.796268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.761 [2024-07-23 01:51:24.796422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.761 [2024-07-23 01:51:24.796449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:11.761 qpair failed and we were unable to recover it. 
00:30:11.761 [2024-07-23 01:51:24.796629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.761 [2024-07-23 01:51:24.796794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.761 [2024-07-23 01:51:24.796821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:11.761 qpair failed and we were unable to recover it. 00:30:11.761 [2024-07-23 01:51:24.796994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.761 [2024-07-23 01:51:24.797146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.761 [2024-07-23 01:51:24.797170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:11.761 qpair failed and we were unable to recover it. 00:30:11.761 [2024-07-23 01:51:24.797381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.761 [2024-07-23 01:51:24.797562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.761 [2024-07-23 01:51:24.797590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:11.761 qpair failed and we were unable to recover it. 00:30:11.761 [2024-07-23 01:51:24.797780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.761 [2024-07-23 01:51:24.798036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.761 [2024-07-23 01:51:24.798094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:11.761 qpair failed and we were unable to recover it. 
00:30:11.761 [2024-07-23 01:51:24.798274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.761 [2024-07-23 01:51:24.798445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.761 [2024-07-23 01:51:24.798468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:11.761 qpair failed and we were unable to recover it.
[... identical connect() failed (errno = 111) / qpair recovery failure sequence repeated for each retry between 01:51:24.798676 and 01:51:24.832103 ...]
00:30:12.036 [2024-07-23 01:51:24.832265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.036 [2024-07-23 01:51:24.832435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.036 [2024-07-23 01:51:24.832458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.036 qpair failed and we were unable to recover it.
00:30:12.036 [2024-07-23 01:51:24.832631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.036 [2024-07-23 01:51:24.832814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.036 [2024-07-23 01:51:24.832841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.036 qpair failed and we were unable to recover it. 00:30:12.036 [2024-07-23 01:51:24.833010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.036 [2024-07-23 01:51:24.833172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.036 [2024-07-23 01:51:24.833195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.036 qpair failed and we were unable to recover it. 00:30:12.036 [2024-07-23 01:51:24.833363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.036 [2024-07-23 01:51:24.833563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.036 [2024-07-23 01:51:24.833590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.036 qpair failed and we were unable to recover it. 00:30:12.036 [2024-07-23 01:51:24.833753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.036 [2024-07-23 01:51:24.833972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.036 [2024-07-23 01:51:24.833998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.036 qpair failed and we were unable to recover it. 
00:30:12.036 [2024-07-23 01:51:24.834156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.036 [2024-07-23 01:51:24.834321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.036 [2024-07-23 01:51:24.834345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.036 qpair failed and we were unable to recover it. 00:30:12.036 [2024-07-23 01:51:24.834490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.036 [2024-07-23 01:51:24.834674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.036 [2024-07-23 01:51:24.834702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.036 qpair failed and we were unable to recover it. 00:30:12.036 [2024-07-23 01:51:24.834929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.036 [2024-07-23 01:51:24.835240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.036 [2024-07-23 01:51:24.835292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.036 qpair failed and we were unable to recover it. 00:30:12.036 [2024-07-23 01:51:24.835498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.036 [2024-07-23 01:51:24.835689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.036 [2024-07-23 01:51:24.835713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.036 qpair failed and we were unable to recover it. 
00:30:12.036 [2024-07-23 01:51:24.835876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.036 [2024-07-23 01:51:24.836078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.036 [2024-07-23 01:51:24.836103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.036 qpair failed and we were unable to recover it. 00:30:12.036 [2024-07-23 01:51:24.836291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.036 [2024-07-23 01:51:24.836427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.036 [2024-07-23 01:51:24.836450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.036 qpair failed and we were unable to recover it. 00:30:12.036 [2024-07-23 01:51:24.836617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.036 [2024-07-23 01:51:24.836790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.036 [2024-07-23 01:51:24.836813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.036 qpair failed and we were unable to recover it. 00:30:12.036 [2024-07-23 01:51:24.837006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.036 [2024-07-23 01:51:24.837250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.036 [2024-07-23 01:51:24.837302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.036 qpair failed and we were unable to recover it. 
00:30:12.037 [2024-07-23 01:51:24.837485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.037 [2024-07-23 01:51:24.837706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.037 [2024-07-23 01:51:24.837733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.037 qpair failed and we were unable to recover it. 00:30:12.037 [2024-07-23 01:51:24.837897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.037 [2024-07-23 01:51:24.838100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.037 [2024-07-23 01:51:24.838125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.037 qpair failed and we were unable to recover it. 00:30:12.037 [2024-07-23 01:51:24.838308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.037 [2024-07-23 01:51:24.838471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.037 [2024-07-23 01:51:24.838495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.037 qpair failed and we were unable to recover it. 00:30:12.037 [2024-07-23 01:51:24.838637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.037 [2024-07-23 01:51:24.838812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.037 [2024-07-23 01:51:24.838852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.037 qpair failed and we were unable to recover it. 
00:30:12.037 [2024-07-23 01:51:24.839035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.037 [2024-07-23 01:51:24.839276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.037 [2024-07-23 01:51:24.839329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.037 qpair failed and we were unable to recover it. 00:30:12.037 [2024-07-23 01:51:24.839514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.037 [2024-07-23 01:51:24.839678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.037 [2024-07-23 01:51:24.839718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.037 qpair failed and we were unable to recover it. 00:30:12.037 [2024-07-23 01:51:24.839938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.037 [2024-07-23 01:51:24.840297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.037 [2024-07-23 01:51:24.840356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.037 qpair failed and we were unable to recover it. 00:30:12.037 [2024-07-23 01:51:24.840529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.037 [2024-07-23 01:51:24.840731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.037 [2024-07-23 01:51:24.840759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.037 qpair failed and we were unable to recover it. 
00:30:12.037 [2024-07-23 01:51:24.840941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.037 [2024-07-23 01:51:24.841103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.037 [2024-07-23 01:51:24.841129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.037 qpair failed and we were unable to recover it. 00:30:12.037 [2024-07-23 01:51:24.841292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.037 [2024-07-23 01:51:24.841449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.037 [2024-07-23 01:51:24.841496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.037 qpair failed and we were unable to recover it. 00:30:12.037 [2024-07-23 01:51:24.841675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.037 [2024-07-23 01:51:24.841869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.037 [2024-07-23 01:51:24.841897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.037 qpair failed and we were unable to recover it. 00:30:12.037 [2024-07-23 01:51:24.842057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.037 [2024-07-23 01:51:24.842295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.037 [2024-07-23 01:51:24.842348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.037 qpair failed and we were unable to recover it. 
00:30:12.037 [2024-07-23 01:51:24.842530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.037 [2024-07-23 01:51:24.842736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.037 [2024-07-23 01:51:24.842763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.037 qpair failed and we were unable to recover it. 00:30:12.037 [2024-07-23 01:51:24.842931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.037 [2024-07-23 01:51:24.843068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.037 [2024-07-23 01:51:24.843091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.037 qpair failed and we were unable to recover it. 00:30:12.037 [2024-07-23 01:51:24.843316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.037 [2024-07-23 01:51:24.843480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.037 [2024-07-23 01:51:24.843504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.037 qpair failed and we were unable to recover it. 00:30:12.037 [2024-07-23 01:51:24.843704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.037 [2024-07-23 01:51:24.843885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.037 [2024-07-23 01:51:24.843911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.037 qpair failed and we were unable to recover it. 
00:30:12.037 [2024-07-23 01:51:24.844074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.037 [2024-07-23 01:51:24.844236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.037 [2024-07-23 01:51:24.844260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.037 qpair failed and we were unable to recover it. 00:30:12.037 [2024-07-23 01:51:24.844461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.037 [2024-07-23 01:51:24.844672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.037 [2024-07-23 01:51:24.844700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.037 qpair failed and we were unable to recover it. 00:30:12.037 [2024-07-23 01:51:24.844883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.037 [2024-07-23 01:51:24.845209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.037 [2024-07-23 01:51:24.845256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.037 qpair failed and we were unable to recover it. 00:30:12.037 [2024-07-23 01:51:24.845415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.037 [2024-07-23 01:51:24.845585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.037 [2024-07-23 01:51:24.845627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.037 qpair failed and we were unable to recover it. 
00:30:12.037 [2024-07-23 01:51:24.845780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.037 [2024-07-23 01:51:24.846033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.037 [2024-07-23 01:51:24.846082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.037 qpair failed and we were unable to recover it. 00:30:12.037 [2024-07-23 01:51:24.846265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.037 [2024-07-23 01:51:24.846473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.037 [2024-07-23 01:51:24.846499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.037 qpair failed and we were unable to recover it. 00:30:12.037 [2024-07-23 01:51:24.846667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.037 [2024-07-23 01:51:24.846831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.037 [2024-07-23 01:51:24.846855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.037 qpair failed and we were unable to recover it. 00:30:12.037 [2024-07-23 01:51:24.847166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.037 [2024-07-23 01:51:24.847418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.037 [2024-07-23 01:51:24.847442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.037 qpair failed and we were unable to recover it. 
00:30:12.037 [2024-07-23 01:51:24.847631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.037 [2024-07-23 01:51:24.847815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.037 [2024-07-23 01:51:24.847840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.037 qpair failed and we were unable to recover it. 00:30:12.037 [2024-07-23 01:51:24.848009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.037 [2024-07-23 01:51:24.848240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.037 [2024-07-23 01:51:24.848300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.037 qpair failed and we were unable to recover it. 00:30:12.037 [2024-07-23 01:51:24.848483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.037 [2024-07-23 01:51:24.848696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.037 [2024-07-23 01:51:24.848720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.037 qpair failed and we were unable to recover it. 00:30:12.037 [2024-07-23 01:51:24.848892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.037 [2024-07-23 01:51:24.849073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.037 [2024-07-23 01:51:24.849099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.038 qpair failed and we were unable to recover it. 
00:30:12.038 [2024-07-23 01:51:24.849275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.038 [2024-07-23 01:51:24.849426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.038 [2024-07-23 01:51:24.849454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.038 qpair failed and we were unable to recover it. 00:30:12.038 [2024-07-23 01:51:24.849674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.038 [2024-07-23 01:51:24.849813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.038 [2024-07-23 01:51:24.849837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.038 qpair failed and we were unable to recover it. 00:30:12.038 [2024-07-23 01:51:24.850018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.038 [2024-07-23 01:51:24.850209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.038 [2024-07-23 01:51:24.850236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.038 qpair failed and we were unable to recover it. 00:30:12.038 [2024-07-23 01:51:24.850385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.038 [2024-07-23 01:51:24.850559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.038 [2024-07-23 01:51:24.850587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.038 qpair failed and we were unable to recover it. 
00:30:12.038 [2024-07-23 01:51:24.850794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.038 [2024-07-23 01:51:24.850936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.038 [2024-07-23 01:51:24.850960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.038 qpair failed and we were unable to recover it. 00:30:12.038 [2024-07-23 01:51:24.851095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.038 [2024-07-23 01:51:24.851222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.038 [2024-07-23 01:51:24.851246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.038 qpair failed and we were unable to recover it. 00:30:12.038 [2024-07-23 01:51:24.851407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.038 [2024-07-23 01:51:24.851566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.038 [2024-07-23 01:51:24.851592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.038 qpair failed and we were unable to recover it. 00:30:12.038 [2024-07-23 01:51:24.851771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.038 [2024-07-23 01:51:24.851944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.038 [2024-07-23 01:51:24.851968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.038 qpair failed and we were unable to recover it. 
00:30:12.038 [2024-07-23 01:51:24.852131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.038 [2024-07-23 01:51:24.852297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.038 [2024-07-23 01:51:24.852326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.038 qpair failed and we were unable to recover it. 00:30:12.038 [2024-07-23 01:51:24.852491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.038 [2024-07-23 01:51:24.852657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.038 [2024-07-23 01:51:24.852682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.038 qpair failed and we were unable to recover it. 00:30:12.038 [2024-07-23 01:51:24.852871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.038 [2024-07-23 01:51:24.853114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.038 [2024-07-23 01:51:24.853166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.038 qpair failed and we were unable to recover it. 00:30:12.038 [2024-07-23 01:51:24.853350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.038 [2024-07-23 01:51:24.853503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.038 [2024-07-23 01:51:24.853529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.038 qpair failed and we were unable to recover it. 
00:30:12.038 [2024-07-23 01:51:24.853729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.038 [2024-07-23 01:51:24.853873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.038 [2024-07-23 01:51:24.853897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.038 qpair failed and we were unable to recover it. 00:30:12.038 [2024-07-23 01:51:24.854064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.038 [2024-07-23 01:51:24.854215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.038 [2024-07-23 01:51:24.854238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.038 qpair failed and we were unable to recover it. 00:30:12.038 [2024-07-23 01:51:24.854408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.038 [2024-07-23 01:51:24.854607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.038 [2024-07-23 01:51:24.854639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.038 qpair failed and we were unable to recover it. 00:30:12.038 [2024-07-23 01:51:24.854783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.038 [2024-07-23 01:51:24.854936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.038 [2024-07-23 01:51:24.854964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.038 qpair failed and we were unable to recover it. 
00:30:12.038 [2024-07-23 01:51:24.855160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.038 [2024-07-23 01:51:24.855320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.038 [2024-07-23 01:51:24.855360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.038 qpair failed and we were unable to recover it. 00:30:12.038 [2024-07-23 01:51:24.855537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.038 [2024-07-23 01:51:24.855699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.038 [2024-07-23 01:51:24.855724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.038 qpair failed and we were unable to recover it. 00:30:12.038 [2024-07-23 01:51:24.855886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.038 [2024-07-23 01:51:24.856116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.038 [2024-07-23 01:51:24.856168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.038 qpair failed and we were unable to recover it. 00:30:12.038 [2024-07-23 01:51:24.856351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.038 [2024-07-23 01:51:24.856544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.038 [2024-07-23 01:51:24.856568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.038 qpair failed and we were unable to recover it. 
00:30:12.038 [2024-07-23 01:51:24.856711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.038 [2024-07-23 01:51:24.856874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.038 [2024-07-23 01:51:24.856920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.038 qpair failed and we were unable to recover it. 00:30:12.038 [2024-07-23 01:51:24.857094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.038 [2024-07-23 01:51:24.857231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.038 [2024-07-23 01:51:24.857256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.038 qpair failed and we were unable to recover it. 00:30:12.038 [2024-07-23 01:51:24.857418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.038 [2024-07-23 01:51:24.857647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.038 [2024-07-23 01:51:24.857672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.038 qpair failed and we were unable to recover it. 00:30:12.038 [2024-07-23 01:51:24.857855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.038 [2024-07-23 01:51:24.858039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.038 [2024-07-23 01:51:24.858066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.038 qpair failed and we were unable to recover it. 
00:30:12.038 [2024-07-23 01:51:24.858246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.038 [2024-07-23 01:51:24.858399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.038 [2024-07-23 01:51:24.858427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.038 qpair failed and we were unable to recover it. 00:30:12.038 [2024-07-23 01:51:24.858609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.038 [2024-07-23 01:51:24.858801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.038 [2024-07-23 01:51:24.858829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.038 qpair failed and we were unable to recover it. 00:30:12.038 [2024-07-23 01:51:24.858999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.038 [2024-07-23 01:51:24.859158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.038 [2024-07-23 01:51:24.859182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.038 qpair failed and we were unable to recover it. 00:30:12.038 [2024-07-23 01:51:24.859345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.038 [2024-07-23 01:51:24.859557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.039 [2024-07-23 01:51:24.859581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.039 qpair failed and we were unable to recover it. 
00:30:12.039 [2024-07-23 01:51:24.859753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.039 [2024-07-23 01:51:24.859893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.039 [2024-07-23 01:51:24.859932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.039 qpair failed and we were unable to recover it. 00:30:12.039 [2024-07-23 01:51:24.860111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.039 [2024-07-23 01:51:24.860372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.039 [2024-07-23 01:51:24.860424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.039 qpair failed and we were unable to recover it. 00:30:12.039 [2024-07-23 01:51:24.860643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.039 [2024-07-23 01:51:24.860798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.039 [2024-07-23 01:51:24.860822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.039 qpair failed and we were unable to recover it. 00:30:12.039 [2024-07-23 01:51:24.861001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.039 [2024-07-23 01:51:24.861178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.039 [2024-07-23 01:51:24.861204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.039 qpair failed and we were unable to recover it. 
00:30:12.039 [2024-07-23 01:51:24.861386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.039 [2024-07-23 01:51:24.861565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.039 [2024-07-23 01:51:24.861596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.039 qpair failed and we were unable to recover it. 00:30:12.039 [2024-07-23 01:51:24.861767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.039 [2024-07-23 01:51:24.861913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.039 [2024-07-23 01:51:24.861937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.039 qpair failed and we were unable to recover it. 00:30:12.039 [2024-07-23 01:51:24.862101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.039 [2024-07-23 01:51:24.862262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.039 [2024-07-23 01:51:24.862287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.039 qpair failed and we were unable to recover it. 00:30:12.039 [2024-07-23 01:51:24.862499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.039 [2024-07-23 01:51:24.862644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.039 [2024-07-23 01:51:24.862671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.039 qpair failed and we were unable to recover it. 
00:30:12.039 [2024-07-23 01:51:24.862845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.039 [2024-07-23 01:51:24.863086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.039 [2024-07-23 01:51:24.863133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.039 qpair failed and we were unable to recover it. 00:30:12.039 [2024-07-23 01:51:24.863317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.039 [2024-07-23 01:51:24.863495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.039 [2024-07-23 01:51:24.863519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.039 qpair failed and we were unable to recover it. 00:30:12.039 [2024-07-23 01:51:24.863664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.039 [2024-07-23 01:51:24.863810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.039 [2024-07-23 01:51:24.863834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.039 qpair failed and we were unable to recover it. 00:30:12.039 [2024-07-23 01:51:24.863994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.039 [2024-07-23 01:51:24.864191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.039 [2024-07-23 01:51:24.864218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.039 qpair failed and we were unable to recover it. 
00:30:12.039 [2024-07-23 01:51:24.864431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.039 [2024-07-23 01:51:24.864571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.039 [2024-07-23 01:51:24.864598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.039 qpair failed and we were unable to recover it. 00:30:12.039 [2024-07-23 01:51:24.864777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.039 [2024-07-23 01:51:24.864923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.039 [2024-07-23 01:51:24.864948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.039 qpair failed and we were unable to recover it. 00:30:12.039 [2024-07-23 01:51:24.865113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.039 [2024-07-23 01:51:24.865319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.039 [2024-07-23 01:51:24.865342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.039 qpair failed and we were unable to recover it. 00:30:12.039 [2024-07-23 01:51:24.865512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.039 [2024-07-23 01:51:24.865647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.039 [2024-07-23 01:51:24.865671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.039 qpair failed and we were unable to recover it. 
00:30:12.039 [2024-07-23 01:51:24.865869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.039 [2024-07-23 01:51:24.866061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.039 [2024-07-23 01:51:24.866085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.039 qpair failed and we were unable to recover it. 00:30:12.039 [2024-07-23 01:51:24.866249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.039 [2024-07-23 01:51:24.866383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.039 [2024-07-23 01:51:24.866426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.039 qpair failed and we were unable to recover it. 00:30:12.039 [2024-07-23 01:51:24.866606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.039 [2024-07-23 01:51:24.866789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.039 [2024-07-23 01:51:24.866815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.039 qpair failed and we were unable to recover it. 00:30:12.039 [2024-07-23 01:51:24.867001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.039 [2024-07-23 01:51:24.867183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.039 [2024-07-23 01:51:24.867209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.039 qpair failed and we were unable to recover it. 
00:30:12.039 [2024-07-23 01:51:24.867418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.039 [2024-07-23 01:51:24.867630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.039 [2024-07-23 01:51:24.867654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.039 qpair failed and we were unable to recover it. 00:30:12.039 [2024-07-23 01:51:24.867790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.039 [2024-07-23 01:51:24.867929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.039 [2024-07-23 01:51:24.867968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.039 qpair failed and we were unable to recover it. 00:30:12.039 [2024-07-23 01:51:24.868195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.039 [2024-07-23 01:51:24.868404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.039 [2024-07-23 01:51:24.868428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.039 qpair failed and we were unable to recover it. 00:30:12.039 [2024-07-23 01:51:24.868610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.039 [2024-07-23 01:51:24.868796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.039 [2024-07-23 01:51:24.868823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.039 qpair failed and we were unable to recover it. 
00:30:12.039 [2024-07-23 01:51:24.869008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.039 [2024-07-23 01:51:24.869198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.039 [2024-07-23 01:51:24.869222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.039 qpair failed and we were unable to recover it. 00:30:12.039 [2024-07-23 01:51:24.869390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.039 [2024-07-23 01:51:24.869567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.039 [2024-07-23 01:51:24.869593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.039 qpair failed and we were unable to recover it. 00:30:12.039 [2024-07-23 01:51:24.869763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.039 [2024-07-23 01:51:24.869909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.039 [2024-07-23 01:51:24.869937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.039 qpair failed and we were unable to recover it. 00:30:12.039 [2024-07-23 01:51:24.870151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.039 [2024-07-23 01:51:24.870333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.040 [2024-07-23 01:51:24.870393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.040 qpair failed and we were unable to recover it. 
00:30:12.040 [2024-07-23 01:51:24.870541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.040 [2024-07-23 01:51:24.870754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.040 [2024-07-23 01:51:24.870779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.040 qpair failed and we were unable to recover it. 00:30:12.040 [2024-07-23 01:51:24.870914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.040 [2024-07-23 01:51:24.871073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.040 [2024-07-23 01:51:24.871097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.040 qpair failed and we were unable to recover it. 00:30:12.040 [2024-07-23 01:51:24.871327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.040 [2024-07-23 01:51:24.871536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.040 [2024-07-23 01:51:24.871560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.040 qpair failed and we were unable to recover it. 00:30:12.040 [2024-07-23 01:51:24.871736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.040 [2024-07-23 01:51:24.871894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.040 [2024-07-23 01:51:24.871921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.040 qpair failed and we were unable to recover it. 
00:30:12.040 [2024-07-23 01:51:24.872075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.040 [2024-07-23 01:51:24.872251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.040 [2024-07-23 01:51:24.872312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.040 qpair failed and we were unable to recover it. 00:30:12.040 [2024-07-23 01:51:24.872511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.040 [2024-07-23 01:51:24.872721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.040 [2024-07-23 01:51:24.872746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.040 qpair failed and we were unable to recover it. 00:30:12.040 [2024-07-23 01:51:24.872911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.040 [2024-07-23 01:51:24.873096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.040 [2024-07-23 01:51:24.873120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.040 qpair failed and we were unable to recover it. 00:30:12.040 [2024-07-23 01:51:24.873290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.040 [2024-07-23 01:51:24.873474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.040 [2024-07-23 01:51:24.873497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.040 qpair failed and we were unable to recover it. 
00:30:12.040 [2024-07-23 01:51:24.873719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.040 [2024-07-23 01:51:24.873905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.040 [2024-07-23 01:51:24.873931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.040 qpair failed and we were unable to recover it. 00:30:12.040 [2024-07-23 01:51:24.874101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.040 [2024-07-23 01:51:24.874332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.040 [2024-07-23 01:51:24.874384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.040 qpair failed and we were unable to recover it. 00:30:12.040 [2024-07-23 01:51:24.874568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.040 [2024-07-23 01:51:24.874713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.040 [2024-07-23 01:51:24.874740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.040 qpair failed and we were unable to recover it. 00:30:12.040 [2024-07-23 01:51:24.874938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.040 [2024-07-23 01:51:24.875070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.040 [2024-07-23 01:51:24.875094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.040 qpair failed and we were unable to recover it. 
00:30:12.040 [2024-07-23 01:51:24.875232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.040 [2024-07-23 01:51:24.875419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.040 [2024-07-23 01:51:24.875446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.040 qpair failed and we were unable to recover it. 00:30:12.040 [2024-07-23 01:51:24.875633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.040 [2024-07-23 01:51:24.875774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.040 [2024-07-23 01:51:24.875798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.040 qpair failed and we were unable to recover it. 00:30:12.040 [2024-07-23 01:51:24.875968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.040 [2024-07-23 01:51:24.876134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.040 [2024-07-23 01:51:24.876158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.040 qpair failed and we were unable to recover it. 00:30:12.040 [2024-07-23 01:51:24.876316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.040 [2024-07-23 01:51:24.876525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.040 [2024-07-23 01:51:24.876552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.040 qpair failed and we were unable to recover it. 
00:30:12.040 [2024-07-23 01:51:24.876779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.040 [2024-07-23 01:51:24.876919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.040 [2024-07-23 01:51:24.876961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.040 qpair failed and we were unable to recover it. 00:30:12.040 [2024-07-23 01:51:24.877152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.040 [2024-07-23 01:51:24.877355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.040 [2024-07-23 01:51:24.877405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.040 qpair failed and we were unable to recover it. 00:30:12.040 [2024-07-23 01:51:24.877586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.040 [2024-07-23 01:51:24.877753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.040 [2024-07-23 01:51:24.877778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.040 qpair failed and we were unable to recover it. 00:30:12.040 [2024-07-23 01:51:24.877916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.040 [2024-07-23 01:51:24.878135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.040 [2024-07-23 01:51:24.878159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.040 qpair failed and we were unable to recover it. 
00:30:12.040 [2024-07-23 01:51:24.878317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.040 [2024-07-23 01:51:24.878496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.040 [2024-07-23 01:51:24.878523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.040 qpair failed and we were unable to recover it. 00:30:12.040 [2024-07-23 01:51:24.878686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.040 [2024-07-23 01:51:24.878823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.040 [2024-07-23 01:51:24.878847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.040 qpair failed and we were unable to recover it. 00:30:12.040 [2024-07-23 01:51:24.879031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.041 [2024-07-23 01:51:24.879166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.041 [2024-07-23 01:51:24.879191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.041 qpair failed and we were unable to recover it. 00:30:12.041 [2024-07-23 01:51:24.879386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.041 [2024-07-23 01:51:24.879591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.041 [2024-07-23 01:51:24.879626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.041 qpair failed and we were unable to recover it. 
00:30:12.041 [2024-07-23 01:51:24.879825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.041 [2024-07-23 01:51:24.879966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.041 [2024-07-23 01:51:24.879990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.041 qpair failed and we were unable to recover it. 00:30:12.041 [2024-07-23 01:51:24.880180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.041 [2024-07-23 01:51:24.880336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.041 [2024-07-23 01:51:24.880363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.041 qpair failed and we were unable to recover it. 00:30:12.041 [2024-07-23 01:51:24.880516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.041 [2024-07-23 01:51:24.880686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.041 [2024-07-23 01:51:24.880713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.041 qpair failed and we were unable to recover it. 00:30:12.041 [2024-07-23 01:51:24.880932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.041 [2024-07-23 01:51:24.881099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.041 [2024-07-23 01:51:24.881127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.041 qpair failed and we were unable to recover it. 
00:30:12.041 [2024-07-23 01:51:24.881268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.041 [2024-07-23 01:51:24.881473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.041 [2024-07-23 01:51:24.881500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.041 qpair failed and we were unable to recover it. 00:30:12.041 [2024-07-23 01:51:24.881690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.041 [2024-07-23 01:51:24.881872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.041 [2024-07-23 01:51:24.881900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.041 qpair failed and we were unable to recover it. 00:30:12.041 [2024-07-23 01:51:24.882087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.041 [2024-07-23 01:51:24.882257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.041 [2024-07-23 01:51:24.882298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.041 qpair failed and we were unable to recover it. 00:30:12.041 [2024-07-23 01:51:24.882494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.041 [2024-07-23 01:51:24.882661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.041 [2024-07-23 01:51:24.882686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.041 qpair failed and we were unable to recover it. 
00:30:12.041 [2024-07-23 01:51:24.882842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.041 [2024-07-23 01:51:24.883018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.041 [2024-07-23 01:51:24.883042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.041 qpair failed and we were unable to recover it. 00:30:12.041 [2024-07-23 01:51:24.883213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.041 [2024-07-23 01:51:24.883394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.041 [2024-07-23 01:51:24.883422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.041 qpair failed and we were unable to recover it. 00:30:12.041 [2024-07-23 01:51:24.883598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.041 [2024-07-23 01:51:24.883789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.041 [2024-07-23 01:51:24.883813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.041 qpair failed and we were unable to recover it. 00:30:12.041 [2024-07-23 01:51:24.883994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.041 [2024-07-23 01:51:24.884246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.041 [2024-07-23 01:51:24.884303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.041 qpair failed and we were unable to recover it. 
00:30:12.041 [2024-07-23 01:51:24.884485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.041 [2024-07-23 01:51:24.884662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.041 [2024-07-23 01:51:24.884689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.041 qpair failed and we were unable to recover it. 00:30:12.041 [2024-07-23 01:51:24.884853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.041 [2024-07-23 01:51:24.885004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.041 [2024-07-23 01:51:24.885043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.041 qpair failed and we were unable to recover it. 00:30:12.041 [2024-07-23 01:51:24.885308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.041 [2024-07-23 01:51:24.885551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.041 [2024-07-23 01:51:24.885575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.041 qpair failed and we were unable to recover it. 00:30:12.041 [2024-07-23 01:51:24.885718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.041 [2024-07-23 01:51:24.885870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.041 [2024-07-23 01:51:24.885899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.041 qpair failed and we were unable to recover it. 
00:30:12.041 [2024-07-23 01:51:24.886053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.041 [2024-07-23 01:51:24.886256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.041 [2024-07-23 01:51:24.886317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.041 qpair failed and we were unable to recover it. 00:30:12.041 [2024-07-23 01:51:24.886532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.041 [2024-07-23 01:51:24.886715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.041 [2024-07-23 01:51:24.886744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.041 qpair failed and we were unable to recover it. 00:30:12.041 [2024-07-23 01:51:24.886978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.041 [2024-07-23 01:51:24.887251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.041 [2024-07-23 01:51:24.887275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.041 qpair failed and we were unable to recover it. 00:30:12.041 [2024-07-23 01:51:24.887455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.041 [2024-07-23 01:51:24.887637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.041 [2024-07-23 01:51:24.887665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.041 qpair failed and we were unable to recover it. 
00:30:12.041 [2024-07-23 01:51:24.887843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.041 [2024-07-23 01:51:24.888066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.041 [2024-07-23 01:51:24.888090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.041 qpair failed and we were unable to recover it. 00:30:12.041 [2024-07-23 01:51:24.888247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.041 [2024-07-23 01:51:24.888406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.041 [2024-07-23 01:51:24.888447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.041 qpair failed and we were unable to recover it. 00:30:12.041 [2024-07-23 01:51:24.888627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.041 [2024-07-23 01:51:24.888784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.041 [2024-07-23 01:51:24.888810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.041 qpair failed and we were unable to recover it. 00:30:12.041 [2024-07-23 01:51:24.888981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.041 [2024-07-23 01:51:24.889198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.041 [2024-07-23 01:51:24.889250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.041 qpair failed and we were unable to recover it. 
00:30:12.041 [2024-07-23 01:51:24.889462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.041 [2024-07-23 01:51:24.889652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.041 [2024-07-23 01:51:24.889676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.041 qpair failed and we were unable to recover it. 00:30:12.041 [2024-07-23 01:51:24.889823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.041 [2024-07-23 01:51:24.890001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.041 [2024-07-23 01:51:24.890028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.041 qpair failed and we were unable to recover it. 00:30:12.042 [2024-07-23 01:51:24.890206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.042 [2024-07-23 01:51:24.890421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.042 [2024-07-23 01:51:24.890445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.042 qpair failed and we were unable to recover it. 00:30:12.042 [2024-07-23 01:51:24.890617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.042 [2024-07-23 01:51:24.890834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.042 [2024-07-23 01:51:24.890861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.042 qpair failed and we were unable to recover it. 
00:30:12.042 [2024-07-23 01:51:24.891069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.042 [2024-07-23 01:51:24.891215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.042 [2024-07-23 01:51:24.891241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.042 qpair failed and we were unable to recover it. 00:30:12.042 [2024-07-23 01:51:24.891426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.042 [2024-07-23 01:51:24.891637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.042 [2024-07-23 01:51:24.891679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.042 qpair failed and we were unable to recover it. 00:30:12.042 [2024-07-23 01:51:24.891851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.042 [2024-07-23 01:51:24.892136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.042 [2024-07-23 01:51:24.892187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.042 qpair failed and we were unable to recover it. 00:30:12.042 [2024-07-23 01:51:24.892368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.042 [2024-07-23 01:51:24.892571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.042 [2024-07-23 01:51:24.892598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.042 qpair failed and we were unable to recover it. 
00:30:12.042 [2024-07-23 01:51:24.892764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.042 [2024-07-23 01:51:24.892983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.042 [2024-07-23 01:51:24.893009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.042 qpair failed and we were unable to recover it. 00:30:12.042 [2024-07-23 01:51:24.893199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.042 [2024-07-23 01:51:24.893341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.042 [2024-07-23 01:51:24.893366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.042 qpair failed and we were unable to recover it. 00:30:12.042 [2024-07-23 01:51:24.893500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.042 [2024-07-23 01:51:24.893727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.042 [2024-07-23 01:51:24.893755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.042 qpair failed and we were unable to recover it. 00:30:12.042 [2024-07-23 01:51:24.893980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.042 [2024-07-23 01:51:24.894177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.042 [2024-07-23 01:51:24.894227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.042 qpair failed and we were unable to recover it. 
00:30:12.042 [2024-07-23 01:51:24.894404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.042 [2024-07-23 01:51:24.894591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.042 [2024-07-23 01:51:24.894635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.042 qpair failed and we were unable to recover it. 00:30:12.042 [2024-07-23 01:51:24.894824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.042 [2024-07-23 01:51:24.895025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.042 [2024-07-23 01:51:24.895082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.042 qpair failed and we were unable to recover it. 00:30:12.042 [2024-07-23 01:51:24.895381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.042 [2024-07-23 01:51:24.895560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.042 [2024-07-23 01:51:24.895588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.042 qpair failed and we were unable to recover it. 00:30:12.042 [2024-07-23 01:51:24.895802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.042 [2024-07-23 01:51:24.895958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.042 [2024-07-23 01:51:24.895985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.042 qpair failed and we were unable to recover it. 
00:30:12.042 [2024-07-23 01:51:24.896129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.042 [2024-07-23 01:51:24.896283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.042 [2024-07-23 01:51:24.896309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.042 qpair failed and we were unable to recover it. 00:30:12.042 [2024-07-23 01:51:24.896502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.042 [2024-07-23 01:51:24.896658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.042 [2024-07-23 01:51:24.896701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.042 qpair failed and we were unable to recover it. 00:30:12.042 [2024-07-23 01:51:24.896894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.042 [2024-07-23 01:51:24.897111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.042 [2024-07-23 01:51:24.897138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.042 qpair failed and we were unable to recover it. 00:30:12.042 [2024-07-23 01:51:24.897323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.042 [2024-07-23 01:51:24.897522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.042 [2024-07-23 01:51:24.897546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.042 qpair failed and we were unable to recover it. 
00:30:12.042 [2024-07-23 01:51:24.897714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.042 [2024-07-23 01:51:24.897927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.042 [2024-07-23 01:51:24.897987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.042 qpair failed and we were unable to recover it. 00:30:12.042 [2024-07-23 01:51:24.898176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.042 [2024-07-23 01:51:24.898356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.042 [2024-07-23 01:51:24.898385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.042 qpair failed and we were unable to recover it. 00:30:12.042 [2024-07-23 01:51:24.898567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.042 [2024-07-23 01:51:24.898767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.042 [2024-07-23 01:51:24.898793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.042 qpair failed and we were unable to recover it. 00:30:12.042 [2024-07-23 01:51:24.898985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.042 [2024-07-23 01:51:24.899180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.042 [2024-07-23 01:51:24.899204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.042 qpair failed and we were unable to recover it. 
00:30:12.042 [2024-07-23 01:51:24.899385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.042 [2024-07-23 01:51:24.899581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.042 [2024-07-23 01:51:24.899605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.042 qpair failed and we were unable to recover it. 00:30:12.042 [2024-07-23 01:51:24.899785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.042 [2024-07-23 01:51:24.899949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.042 [2024-07-23 01:51:24.900005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.042 qpair failed and we were unable to recover it. 00:30:12.042 [2024-07-23 01:51:24.900278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.042 [2024-07-23 01:51:24.900429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.042 [2024-07-23 01:51:24.900458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.042 qpair failed and we were unable to recover it. 00:30:12.042 [2024-07-23 01:51:24.900645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.042 [2024-07-23 01:51:24.900829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.042 [2024-07-23 01:51:24.900853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.042 qpair failed and we were unable to recover it. 
00:30:12.042 [2024-07-23 01:51:24.901020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.042 [2024-07-23 01:51:24.901219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.042 [2024-07-23 01:51:24.901272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.042 qpair failed and we were unable to recover it. 00:30:12.042 [2024-07-23 01:51:24.901487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.042 [2024-07-23 01:51:24.901700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.042 [2024-07-23 01:51:24.901727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.043 qpair failed and we were unable to recover it. 00:30:12.043 [2024-07-23 01:51:24.901897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.043 [2024-07-23 01:51:24.902059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.043 [2024-07-23 01:51:24.902106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.043 qpair failed and we were unable to recover it. 00:30:12.043 [2024-07-23 01:51:24.902327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.043 [2024-07-23 01:51:24.902530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.043 [2024-07-23 01:51:24.902557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.043 qpair failed and we were unable to recover it. 
00:30:12.043 [2024-07-23 01:51:24.902779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.043 [2024-07-23 01:51:24.902946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.043 [2024-07-23 01:51:24.903003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.043 qpair failed and we were unable to recover it. 00:30:12.043 [2024-07-23 01:51:24.903191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.043 [2024-07-23 01:51:24.903444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.043 [2024-07-23 01:51:24.903499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.043 qpair failed and we were unable to recover it. 00:30:12.043 [2024-07-23 01:51:24.903692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.043 [2024-07-23 01:51:24.903849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.043 [2024-07-23 01:51:24.903873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.043 qpair failed and we were unable to recover it. 00:30:12.043 [2024-07-23 01:51:24.904100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.043 [2024-07-23 01:51:24.904311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.043 [2024-07-23 01:51:24.904361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.043 qpair failed and we were unable to recover it. 
00:30:12.043 [2024-07-23 01:51:24.904542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.043 [2024-07-23 01:51:24.904715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.043 [2024-07-23 01:51:24.904740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.043 qpair failed and we were unable to recover it. 00:30:12.043 [2024-07-23 01:51:24.904871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.043 [2024-07-23 01:51:24.905037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.043 [2024-07-23 01:51:24.905080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.043 qpair failed and we were unable to recover it. 00:30:12.043 [2024-07-23 01:51:24.905284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.043 [2024-07-23 01:51:24.905470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.043 [2024-07-23 01:51:24.905494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.043 qpair failed and we were unable to recover it. 00:30:12.043 [2024-07-23 01:51:24.905670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.043 [2024-07-23 01:51:24.905855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.043 [2024-07-23 01:51:24.905879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.043 qpair failed and we were unable to recover it. 
00:30:12.043 [2024-07-23 01:51:24.906071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.043 [2024-07-23 01:51:24.906267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.043 [2024-07-23 01:51:24.906291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.043 qpair failed and we were unable to recover it. 00:30:12.043 [2024-07-23 01:51:24.906486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.043 [2024-07-23 01:51:24.906649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.043 [2024-07-23 01:51:24.906690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.043 qpair failed and we were unable to recover it. 00:30:12.043 [2024-07-23 01:51:24.907022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.043 [2024-07-23 01:51:24.907385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.043 [2024-07-23 01:51:24.907432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.043 qpair failed and we were unable to recover it. 00:30:12.043 [2024-07-23 01:51:24.907622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.043 [2024-07-23 01:51:24.907786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.043 [2024-07-23 01:51:24.907810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.043 qpair failed and we were unable to recover it. 
00:30:12.043 [2024-07-23 01:51:24.908046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.043 [2024-07-23 01:51:24.908190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.043 [2024-07-23 01:51:24.908213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.043 qpair failed and we were unable to recover it. 00:30:12.043 [2024-07-23 01:51:24.908403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.043 [2024-07-23 01:51:24.908589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.043 [2024-07-23 01:51:24.908620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.043 qpair failed and we were unable to recover it. 00:30:12.043 [2024-07-23 01:51:24.908797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.043 [2024-07-23 01:51:24.908958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.043 [2024-07-23 01:51:24.908982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.043 qpair failed and we were unable to recover it. 00:30:12.043 [2024-07-23 01:51:24.909212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.043 [2024-07-23 01:51:24.909429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.043 [2024-07-23 01:51:24.909489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.043 qpair failed and we were unable to recover it. 
00:30:12.043 [2024-07-23 01:51:24.909695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.043 [2024-07-23 01:51:24.909853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.043 [2024-07-23 01:51:24.909879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.043 qpair failed and we were unable to recover it. 00:30:12.043 [2024-07-23 01:51:24.910081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.043 [2024-07-23 01:51:24.910241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.043 [2024-07-23 01:51:24.910298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.043 qpair failed and we were unable to recover it. 00:30:12.043 [2024-07-23 01:51:24.910472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.043 [2024-07-23 01:51:24.910743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.043 [2024-07-23 01:51:24.910771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.043 qpair failed and we were unable to recover it. 00:30:12.043 [2024-07-23 01:51:24.910981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.043 [2024-07-23 01:51:24.911204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.043 [2024-07-23 01:51:24.911228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.043 qpair failed and we were unable to recover it. 
00:30:12.043 [2024-07-23 01:51:24.911435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.043 [2024-07-23 01:51:24.911611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.043 [2024-07-23 01:51:24.911700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.043 qpair failed and we were unable to recover it. 00:30:12.043 [2024-07-23 01:51:24.911887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.043 [2024-07-23 01:51:24.912053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.043 [2024-07-23 01:51:24.912077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.043 qpair failed and we were unable to recover it. 00:30:12.043 [2024-07-23 01:51:24.912261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.043 [2024-07-23 01:51:24.912554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.043 [2024-07-23 01:51:24.912602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.043 qpair failed and we were unable to recover it. 00:30:12.043 [2024-07-23 01:51:24.912774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.043 [2024-07-23 01:51:24.912991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.043 [2024-07-23 01:51:24.913042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.043 qpair failed and we were unable to recover it. 
00:30:12.043 [2024-07-23 01:51:24.913244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.043 [2024-07-23 01:51:24.913465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.043 [2024-07-23 01:51:24.913513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.043 qpair failed and we were unable to recover it. 00:30:12.043 [2024-07-23 01:51:24.913697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.043 [2024-07-23 01:51:24.913859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.044 [2024-07-23 01:51:24.913882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.044 qpair failed and we were unable to recover it. 00:30:12.044 [2024-07-23 01:51:24.914111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.044 [2024-07-23 01:51:24.914315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.044 [2024-07-23 01:51:24.914339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.044 qpair failed and we were unable to recover it. 00:30:12.044 [2024-07-23 01:51:24.914507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.044 [2024-07-23 01:51:24.914676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.044 [2024-07-23 01:51:24.914702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.044 qpair failed and we were unable to recover it. 
00:30:12.044 [2024-07-23 01:51:24.914885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.044 [2024-07-23 01:51:24.915059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.044 [2024-07-23 01:51:24.915084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.044 qpair failed and we were unable to recover it. 00:30:12.044 [2024-07-23 01:51:24.915241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.044 [2024-07-23 01:51:24.915383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.044 [2024-07-23 01:51:24.915408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.044 qpair failed and we were unable to recover it. 00:30:12.044 [2024-07-23 01:51:24.915550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.044 [2024-07-23 01:51:24.915708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.044 [2024-07-23 01:51:24.915732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.044 qpair failed and we were unable to recover it. 00:30:12.044 [2024-07-23 01:51:24.915897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.044 [2024-07-23 01:51:24.916058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.044 [2024-07-23 01:51:24.916082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.044 qpair failed and we were unable to recover it. 
00:30:12.044 [2024-07-23 01:51:24.916265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.044 [2024-07-23 01:51:24.916463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.044 [2024-07-23 01:51:24.916486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.044 qpair failed and we were unable to recover it. 00:30:12.044 [2024-07-23 01:51:24.916674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.044 [2024-07-23 01:51:24.916860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.044 [2024-07-23 01:51:24.916887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.044 qpair failed and we were unable to recover it. 00:30:12.044 [2024-07-23 01:51:24.917072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.044 [2024-07-23 01:51:24.917263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.044 [2024-07-23 01:51:24.917286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.044 qpair failed and we were unable to recover it. 00:30:12.044 [2024-07-23 01:51:24.917493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.044 [2024-07-23 01:51:24.917646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.044 [2024-07-23 01:51:24.917673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.044 qpair failed and we were unable to recover it. 
00:30:12.044 [2024-07-23 01:51:24.917854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.044 [2024-07-23 01:51:24.918016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.044 [2024-07-23 01:51:24.918039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.044 qpair failed and we were unable to recover it. 00:30:12.044 [2024-07-23 01:51:24.918207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.044 [2024-07-23 01:51:24.918381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.044 [2024-07-23 01:51:24.918424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.044 qpair failed and we were unable to recover it. 00:30:12.044 [2024-07-23 01:51:24.918581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.044 [2024-07-23 01:51:24.918782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.044 [2024-07-23 01:51:24.918810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.044 qpair failed and we were unable to recover it. 00:30:12.044 [2024-07-23 01:51:24.919018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.044 [2024-07-23 01:51:24.919196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.044 [2024-07-23 01:51:24.919250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.044 qpair failed and we were unable to recover it. 
00:30:12.044 [2024-07-23 01:51:24.919401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.044 [2024-07-23 01:51:24.919555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.044 [2024-07-23 01:51:24.919580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.044 qpair failed and we were unable to recover it. 00:30:12.044 [2024-07-23 01:51:24.919803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.044 [2024-07-23 01:51:24.920046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.044 [2024-07-23 01:51:24.920106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.044 qpair failed and we were unable to recover it. 00:30:12.044 [2024-07-23 01:51:24.920293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.044 [2024-07-23 01:51:24.920458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.044 [2024-07-23 01:51:24.920482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.044 qpair failed and we were unable to recover it. 00:30:12.044 [2024-07-23 01:51:24.920698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.044 [2024-07-23 01:51:24.920870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.044 [2024-07-23 01:51:24.920897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.044 qpair failed and we were unable to recover it. 
00:30:12.044 [2024-07-23 01:51:24.921066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.044 [2024-07-23 01:51:24.921224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.044 [2024-07-23 01:51:24.921249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.044 qpair failed and we were unable to recover it. 00:30:12.044 [2024-07-23 01:51:24.921413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.044 [2024-07-23 01:51:24.921584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.044 [2024-07-23 01:51:24.921607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.044 qpair failed and we were unable to recover it. 00:30:12.044 [2024-07-23 01:51:24.921800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.044 [2024-07-23 01:51:24.922084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.044 [2024-07-23 01:51:24.922133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.044 qpair failed and we were unable to recover it. 00:30:12.044 [2024-07-23 01:51:24.922342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.044 [2024-07-23 01:51:24.922482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.044 [2024-07-23 01:51:24.922506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.044 qpair failed and we were unable to recover it. 
00:30:12.044 [2024-07-23 01:51:24.922677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.044 [2024-07-23 01:51:24.922838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.044 [2024-07-23 01:51:24.922866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.044 qpair failed and we were unable to recover it. 00:30:12.044 [2024-07-23 01:51:24.923029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.044 [2024-07-23 01:51:24.923212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.044 [2024-07-23 01:51:24.923277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.044 qpair failed and we were unable to recover it. 00:30:12.044 [2024-07-23 01:51:24.923514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.044 [2024-07-23 01:51:24.923725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.044 [2024-07-23 01:51:24.923749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.044 qpair failed and we were unable to recover it. 00:30:12.044 [2024-07-23 01:51:24.923936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.044 [2024-07-23 01:51:24.924117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.044 [2024-07-23 01:51:24.924144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.044 qpair failed and we were unable to recover it. 
00:30:12.044 [2024-07-23 01:51:24.924323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.044 [2024-07-23 01:51:24.924528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.044 [2024-07-23 01:51:24.924553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.044 qpair failed and we were unable to recover it. 00:30:12.044 [2024-07-23 01:51:24.924710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.045 [2024-07-23 01:51:24.924916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.045 [2024-07-23 01:51:24.924943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.045 qpair failed and we were unable to recover it. 00:30:12.045 [2024-07-23 01:51:24.925207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.045 [2024-07-23 01:51:24.925368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.045 [2024-07-23 01:51:24.925392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.045 qpair failed and we were unable to recover it. 00:30:12.045 [2024-07-23 01:51:24.925559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.045 [2024-07-23 01:51:24.925749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.045 [2024-07-23 01:51:24.925777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.045 qpair failed and we were unable to recover it. 
00:30:12.045 [2024-07-23 01:51:24.925952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.045 [2024-07-23 01:51:24.926107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.045 [2024-07-23 01:51:24.926133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.045 qpair failed and we were unable to recover it. 00:30:12.045 [2024-07-23 01:51:24.926343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.045 [2024-07-23 01:51:24.926518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.045 [2024-07-23 01:51:24.926544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.045 qpair failed and we were unable to recover it. 00:30:12.045 [2024-07-23 01:51:24.926752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.045 [2024-07-23 01:51:24.926961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.045 [2024-07-23 01:51:24.926988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.045 qpair failed and we were unable to recover it. 00:30:12.045 [2024-07-23 01:51:24.927146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.045 [2024-07-23 01:51:24.927373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.045 [2024-07-23 01:51:24.927426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.045 qpair failed and we were unable to recover it. 
00:30:12.045 [2024-07-23 01:51:24.927646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.045 [2024-07-23 01:51:24.927832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.045 [2024-07-23 01:51:24.927856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.045 qpair failed and we were unable to recover it. 00:30:12.045 [2024-07-23 01:51:24.928021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.045 [2024-07-23 01:51:24.928198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.045 [2024-07-23 01:51:24.928221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.045 qpair failed and we were unable to recover it. 00:30:12.045 [2024-07-23 01:51:24.928381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.045 [2024-07-23 01:51:24.928528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.045 [2024-07-23 01:51:24.928555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.045 qpair failed and we were unable to recover it. 00:30:12.045 [2024-07-23 01:51:24.928715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.045 [2024-07-23 01:51:24.928865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.045 [2024-07-23 01:51:24.928892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.045 qpair failed and we were unable to recover it. 
00:30:12.045 [2024-07-23 01:51:24.929089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.045 [2024-07-23 01:51:24.929249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.045 [2024-07-23 01:51:24.929288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.045 qpair failed and we were unable to recover it. 00:30:12.045 [2024-07-23 01:51:24.929485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.045 [2024-07-23 01:51:24.929683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.045 [2024-07-23 01:51:24.929711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.045 qpair failed and we were unable to recover it. 00:30:12.045 [2024-07-23 01:51:24.929869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.045 [2024-07-23 01:51:24.930067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.045 [2024-07-23 01:51:24.930091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.045 qpair failed and we were unable to recover it. 00:30:12.045 [2024-07-23 01:51:24.930227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.045 [2024-07-23 01:51:24.930370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.045 [2024-07-23 01:51:24.930396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.045 qpair failed and we were unable to recover it. 
00:30:12.045 [2024-07-23 01:51:24.930592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.045 [2024-07-23 01:51:24.930792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.045 [2024-07-23 01:51:24.930820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.045 qpair failed and we were unable to recover it. 00:30:12.045 [2024-07-23 01:51:24.931005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.045 [2024-07-23 01:51:24.931210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.045 [2024-07-23 01:51:24.931236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.045 qpair failed and we were unable to recover it. 00:30:12.045 [2024-07-23 01:51:24.931419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.045 [2024-07-23 01:51:24.931598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.045 [2024-07-23 01:51:24.931632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.045 qpair failed and we were unable to recover it. 00:30:12.045 [2024-07-23 01:51:24.931842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.045 [2024-07-23 01:51:24.932032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.045 [2024-07-23 01:51:24.932056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.045 qpair failed and we were unable to recover it. 
00:30:12.045 [2024-07-23 01:51:24.932263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.045 [2024-07-23 01:51:24.932447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.045 [2024-07-23 01:51:24.932473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.045 qpair failed and we were unable to recover it. 00:30:12.045 [2024-07-23 01:51:24.932658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.045 [2024-07-23 01:51:24.932802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.045 [2024-07-23 01:51:24.932826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.045 qpair failed and we were unable to recover it. 00:30:12.045 [2024-07-23 01:51:24.933018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.045 [2024-07-23 01:51:24.933179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.045 [2024-07-23 01:51:24.933221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.045 qpair failed and we were unable to recover it. 00:30:12.045 [2024-07-23 01:51:24.933378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.045 [2024-07-23 01:51:24.933564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.045 [2024-07-23 01:51:24.933590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.045 qpair failed and we were unable to recover it. 
00:30:12.045 [2024-07-23 01:51:24.933815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.045 [2024-07-23 01:51:24.933996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.045 [2024-07-23 01:51:24.934054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.045 qpair failed and we were unable to recover it. 00:30:12.046 [2024-07-23 01:51:24.934215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.046 [2024-07-23 01:51:24.934419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.046 [2024-07-23 01:51:24.934446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.046 qpair failed and we were unable to recover it. 00:30:12.046 [2024-07-23 01:51:24.934630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.046 [2024-07-23 01:51:24.934797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.046 [2024-07-23 01:51:24.934824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.046 qpair failed and we were unable to recover it. 00:30:12.046 [2024-07-23 01:51:24.935004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.046 [2024-07-23 01:51:24.935221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.046 [2024-07-23 01:51:24.935269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.046 qpair failed and we were unable to recover it. 
00:30:12.046 [2024-07-23 01:51:24.935414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.046 [2024-07-23 01:51:24.935605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.046 [2024-07-23 01:51:24.935635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.046 qpair failed and we were unable to recover it. 00:30:12.046 [2024-07-23 01:51:24.935802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.046 [2024-07-23 01:51:24.935989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.046 [2024-07-23 01:51:24.936015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.046 qpair failed and we were unable to recover it. 00:30:12.046 [2024-07-23 01:51:24.936193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.046 [2024-07-23 01:51:24.936340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.046 [2024-07-23 01:51:24.936366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.046 qpair failed and we were unable to recover it. 00:30:12.046 [2024-07-23 01:51:24.936577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.046 [2024-07-23 01:51:24.936733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.046 [2024-07-23 01:51:24.936760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.046 qpair failed and we were unable to recover it. 
00:30:12.046 [2024-07-23 01:51:24.936918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.046 [2024-07-23 01:51:24.937096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.046 [2024-07-23 01:51:24.937123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.046 qpair failed and we were unable to recover it. 00:30:12.046 [2024-07-23 01:51:24.937301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.046 [2024-07-23 01:51:24.937471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.046 [2024-07-23 01:51:24.937497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.046 qpair failed and we were unable to recover it. 00:30:12.046 [2024-07-23 01:51:24.937734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.046 [2024-07-23 01:51:24.937882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.046 [2024-07-23 01:51:24.937908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.046 qpair failed and we were unable to recover it. 00:30:12.046 [2024-07-23 01:51:24.938059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.046 [2024-07-23 01:51:24.938313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.046 [2024-07-23 01:51:24.938368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.046 qpair failed and we were unable to recover it. 
00:30:12.046 [2024-07-23 01:51:24.938558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.046 [2024-07-23 01:51:24.938698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.046 [2024-07-23 01:51:24.938724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.046 qpair failed and we were unable to recover it. 00:30:12.046 [2024-07-23 01:51:24.938918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.046 [2024-07-23 01:51:24.939076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.046 [2024-07-23 01:51:24.939105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.046 qpair failed and we were unable to recover it. 00:30:12.046 [2024-07-23 01:51:24.939316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.046 [2024-07-23 01:51:24.939440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.046 [2024-07-23 01:51:24.939469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.046 qpair failed and we were unable to recover it. 00:30:12.046 [2024-07-23 01:51:24.939633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.046 [2024-07-23 01:51:24.939793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.046 [2024-07-23 01:51:24.939817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.046 qpair failed and we were unable to recover it. 
00:30:12.046 [2024-07-23 01:51:24.939953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.046 [2024-07-23 01:51:24.940136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.046 [2024-07-23 01:51:24.940163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.046 qpair failed and we were unable to recover it. 00:30:12.046 [2024-07-23 01:51:24.940351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.046 [2024-07-23 01:51:24.940517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.046 [2024-07-23 01:51:24.940563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.046 qpair failed and we were unable to recover it. 00:30:12.046 [2024-07-23 01:51:24.940754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.046 [2024-07-23 01:51:24.940889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.046 [2024-07-23 01:51:24.940928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.046 qpair failed and we were unable to recover it. 00:30:12.046 [2024-07-23 01:51:24.941099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.046 [2024-07-23 01:51:24.941276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.046 [2024-07-23 01:51:24.941303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.046 qpair failed and we were unable to recover it. 
00:30:12.046 [2024-07-23 01:51:24.941512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.046 [2024-07-23 01:51:24.941705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.046 [2024-07-23 01:51:24.941730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.046 qpair failed and we were unable to recover it.
00:30:12.046 [2024-07-23 01:51:24.941865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.046 [2024-07-23 01:51:24.942055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.046 [2024-07-23 01:51:24.942096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.046 qpair failed and we were unable to recover it.
00:30:12.046 [2024-07-23 01:51:24.942325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.046 [2024-07-23 01:51:24.942496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.046 [2024-07-23 01:51:24.942523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.046 qpair failed and we were unable to recover it.
00:30:12.046 [2024-07-23 01:51:24.942717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.046 [2024-07-23 01:51:24.942888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.046 [2024-07-23 01:51:24.942928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.046 qpair failed and we were unable to recover it.
00:30:12.046 [2024-07-23 01:51:24.943110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.046 [2024-07-23 01:51:24.943323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.046 [2024-07-23 01:51:24.943374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.046 qpair failed and we were unable to recover it.
00:30:12.046 [2024-07-23 01:51:24.943563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.046 [2024-07-23 01:51:24.943729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.046 [2024-07-23 01:51:24.943753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.046 qpair failed and we were unable to recover it.
00:30:12.046 [2024-07-23 01:51:24.943887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.046 [2024-07-23 01:51:24.944054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.046 [2024-07-23 01:51:24.944078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.046 qpair failed and we were unable to recover it.
00:30:12.046 [2024-07-23 01:51:24.944270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.046 [2024-07-23 01:51:24.944479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.046 [2024-07-23 01:51:24.944506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.046 qpair failed and we were unable to recover it.
00:30:12.046 [2024-07-23 01:51:24.944688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.046 [2024-07-23 01:51:24.944868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.046 [2024-07-23 01:51:24.944895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.047 qpair failed and we were unable to recover it.
00:30:12.047 [2024-07-23 01:51:24.945054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.047 [2024-07-23 01:51:24.945192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.047 [2024-07-23 01:51:24.945231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.047 qpair failed and we were unable to recover it.
00:30:12.047 [2024-07-23 01:51:24.945444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.047 [2024-07-23 01:51:24.945607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.047 [2024-07-23 01:51:24.945656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.047 qpair failed and we were unable to recover it.
00:30:12.047 [2024-07-23 01:51:24.945815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.047 [2024-07-23 01:51:24.946017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.047 [2024-07-23 01:51:24.946044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.047 qpair failed and we were unable to recover it.
00:30:12.047 [2024-07-23 01:51:24.946225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.047 [2024-07-23 01:51:24.946451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.047 [2024-07-23 01:51:24.946501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.047 qpair failed and we were unable to recover it.
00:30:12.047 [2024-07-23 01:51:24.946692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.047 [2024-07-23 01:51:24.946904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.047 [2024-07-23 01:51:24.946932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.047 qpair failed and we were unable to recover it.
00:30:12.047 [2024-07-23 01:51:24.947186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.047 [2024-07-23 01:51:24.947392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.047 [2024-07-23 01:51:24.947416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.047 qpair failed and we were unable to recover it.
00:30:12.047 [2024-07-23 01:51:24.947561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.047 [2024-07-23 01:51:24.947717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.047 [2024-07-23 01:51:24.947742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.047 qpair failed and we were unable to recover it.
00:30:12.047 [2024-07-23 01:51:24.947955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.047 [2024-07-23 01:51:24.948257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.047 [2024-07-23 01:51:24.948314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.047 qpair failed and we were unable to recover it.
00:30:12.047 [2024-07-23 01:51:24.948534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.047 [2024-07-23 01:51:24.948678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.047 [2024-07-23 01:51:24.948702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.047 qpair failed and we were unable to recover it.
00:30:12.047 [2024-07-23 01:51:24.948912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.047 [2024-07-23 01:51:24.949250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.047 [2024-07-23 01:51:24.949299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.047 qpair failed and we were unable to recover it.
00:30:12.047 [2024-07-23 01:51:24.949480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.047 [2024-07-23 01:51:24.949692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.047 [2024-07-23 01:51:24.949717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.047 qpair failed and we were unable to recover it.
00:30:12.047 [2024-07-23 01:51:24.949894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.047 [2024-07-23 01:51:24.950112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.047 [2024-07-23 01:51:24.950171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.047 qpair failed and we were unable to recover it.
00:30:12.047 [2024-07-23 01:51:24.950343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.047 [2024-07-23 01:51:24.950549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.047 [2024-07-23 01:51:24.950576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.047 qpair failed and we were unable to recover it.
00:30:12.047 [2024-07-23 01:51:24.950762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.047 [2024-07-23 01:51:24.950962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.047 [2024-07-23 01:51:24.950988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.047 qpair failed and we were unable to recover it.
00:30:12.047 [2024-07-23 01:51:24.951192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.047 [2024-07-23 01:51:24.951501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.047 [2024-07-23 01:51:24.951557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.047 qpair failed and we were unable to recover it.
00:30:12.047 [2024-07-23 01:51:24.951748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.047 [2024-07-23 01:51:24.951911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.047 [2024-07-23 01:51:24.951954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.047 qpair failed and we were unable to recover it.
00:30:12.047 [2024-07-23 01:51:24.952145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.047 [2024-07-23 01:51:24.952315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.047 [2024-07-23 01:51:24.952359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.047 qpair failed and we were unable to recover it.
00:30:12.047 [2024-07-23 01:51:24.952597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.047 [2024-07-23 01:51:24.952761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.047 [2024-07-23 01:51:24.952785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.047 qpair failed and we were unable to recover it.
00:30:12.047 [2024-07-23 01:51:24.952924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.047 [2024-07-23 01:51:24.953123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.047 [2024-07-23 01:51:24.953146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.047 qpair failed and we were unable to recover it.
00:30:12.047 [2024-07-23 01:51:24.953302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.047 [2024-07-23 01:51:24.953509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.047 [2024-07-23 01:51:24.953535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.047 qpair failed and we were unable to recover it.
00:30:12.047 [2024-07-23 01:51:24.953698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.047 [2024-07-23 01:51:24.953886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.047 [2024-07-23 01:51:24.953910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.047 qpair failed and we were unable to recover it.
00:30:12.047 [2024-07-23 01:51:24.954184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.047 [2024-07-23 01:51:24.954513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.047 [2024-07-23 01:51:24.954563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.047 qpair failed and we were unable to recover it.
00:30:12.047 [2024-07-23 01:51:24.954781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.047 [2024-07-23 01:51:24.954968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.047 [2024-07-23 01:51:24.954992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.047 qpair failed and we were unable to recover it.
00:30:12.047 [2024-07-23 01:51:24.955172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.047 [2024-07-23 01:51:24.955420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.047 [2024-07-23 01:51:24.955471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.047 qpair failed and we were unable to recover it.
00:30:12.047 [2024-07-23 01:51:24.955638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.047 [2024-07-23 01:51:24.955778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.047 [2024-07-23 01:51:24.955818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.047 qpair failed and we were unable to recover it.
00:30:12.047 [2024-07-23 01:51:24.956001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.047 [2024-07-23 01:51:24.956164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.047 [2024-07-23 01:51:24.956187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.047 qpair failed and we were unable to recover it.
00:30:12.047 [2024-07-23 01:51:24.956400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.047 [2024-07-23 01:51:24.956552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.047 [2024-07-23 01:51:24.956583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.047 qpair failed and we were unable to recover it.
00:30:12.047 [2024-07-23 01:51:24.956772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.047 [2024-07-23 01:51:24.956954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.048 [2024-07-23 01:51:24.956980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.048 qpair failed and we were unable to recover it.
00:30:12.048 [2024-07-23 01:51:24.957157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.048 [2024-07-23 01:51:24.957380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.048 [2024-07-23 01:51:24.957438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.048 qpair failed and we were unable to recover it.
00:30:12.048 [2024-07-23 01:51:24.957594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.048 [2024-07-23 01:51:24.957765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.048 [2024-07-23 01:51:24.957789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.048 qpair failed and we were unable to recover it.
00:30:12.048 [2024-07-23 01:51:24.957956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.048 [2024-07-23 01:51:24.958117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.048 [2024-07-23 01:51:24.958157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.048 qpair failed and we were unable to recover it.
00:30:12.048 [2024-07-23 01:51:24.958362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.048 [2024-07-23 01:51:24.958569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.048 [2024-07-23 01:51:24.958595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.048 qpair failed and we were unable to recover it.
00:30:12.048 [2024-07-23 01:51:24.958768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.048 [2024-07-23 01:51:24.958910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.048 [2024-07-23 01:51:24.958950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.048 qpair failed and we were unable to recover it.
00:30:12.048 [2024-07-23 01:51:24.959239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.048 [2024-07-23 01:51:24.959441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.048 [2024-07-23 01:51:24.959468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.048 qpair failed and we were unable to recover it.
00:30:12.048 [2024-07-23 01:51:24.959647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.048 [2024-07-23 01:51:24.959826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.048 [2024-07-23 01:51:24.959852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.048 qpair failed and we were unable to recover it.
00:30:12.048 [2024-07-23 01:51:24.960033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.048 [2024-07-23 01:51:24.960244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.048 [2024-07-23 01:51:24.960293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.048 qpair failed and we were unable to recover it.
00:30:12.048 [2024-07-23 01:51:24.960480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.048 [2024-07-23 01:51:24.960704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.048 [2024-07-23 01:51:24.960737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.048 qpair failed and we were unable to recover it.
00:30:12.048 [2024-07-23 01:51:24.960906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.048 [2024-07-23 01:51:24.961069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.048 [2024-07-23 01:51:24.961092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.048 qpair failed and we were unable to recover it.
00:30:12.048 [2024-07-23 01:51:24.961309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.048 [2024-07-23 01:51:24.961445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.048 [2024-07-23 01:51:24.961472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.048 qpair failed and we were unable to recover it.
00:30:12.048 [2024-07-23 01:51:24.961650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.048 [2024-07-23 01:51:24.961826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.048 [2024-07-23 01:51:24.961854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.048 qpair failed and we were unable to recover it.
00:30:12.048 [2024-07-23 01:51:24.962066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.048 [2024-07-23 01:51:24.962225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.048 [2024-07-23 01:51:24.962286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.048 qpair failed and we were unable to recover it.
00:30:12.048 [2024-07-23 01:51:24.962432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.048 [2024-07-23 01:51:24.962609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.048 [2024-07-23 01:51:24.962643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.048 qpair failed and we were unable to recover it.
00:30:12.048 [2024-07-23 01:51:24.962820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.048 [2024-07-23 01:51:24.963087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.048 [2024-07-23 01:51:24.963135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.048 qpair failed and we were unable to recover it.
00:30:12.048 [2024-07-23 01:51:24.963345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.048 [2024-07-23 01:51:24.963497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.048 [2024-07-23 01:51:24.963523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.048 qpair failed and we were unable to recover it.
00:30:12.048 [2024-07-23 01:51:24.963705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.048 [2024-07-23 01:51:24.963869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.048 [2024-07-23 01:51:24.963895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.048 qpair failed and we were unable to recover it.
00:30:12.048 [2024-07-23 01:51:24.964080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.048 [2024-07-23 01:51:24.964232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.048 [2024-07-23 01:51:24.964258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.048 qpair failed and we were unable to recover it.
00:30:12.048 [2024-07-23 01:51:24.964440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.048 [2024-07-23 01:51:24.964658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.048 [2024-07-23 01:51:24.964683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.048 qpair failed and we were unable to recover it.
00:30:12.048 [2024-07-23 01:51:24.964847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.048 [2024-07-23 01:51:24.965072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.048 [2024-07-23 01:51:24.965095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.048 qpair failed and we were unable to recover it.
00:30:12.048 [2024-07-23 01:51:24.965257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.048 [2024-07-23 01:51:24.965442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.048 [2024-07-23 01:51:24.965469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.048 qpair failed and we were unable to recover it.
00:30:12.048 [2024-07-23 01:51:24.965636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.048 [2024-07-23 01:51:24.965781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.048 [2024-07-23 01:51:24.965805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.048 qpair failed and we were unable to recover it.
00:30:12.048 [2024-07-23 01:51:24.966026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.048 [2024-07-23 01:51:24.966181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.048 [2024-07-23 01:51:24.966208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.048 qpair failed and we were unable to recover it.
00:30:12.048 [2024-07-23 01:51:24.966415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.048 [2024-07-23 01:51:24.966638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.048 [2024-07-23 01:51:24.966666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.048 qpair failed and we were unable to recover it.
00:30:12.048 [2024-07-23 01:51:24.966851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.048 [2024-07-23 01:51:24.966992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.048 [2024-07-23 01:51:24.967018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.048 qpair failed and we were unable to recover it.
00:30:12.048 [2024-07-23 01:51:24.967185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.048 [2024-07-23 01:51:24.967389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.048 [2024-07-23 01:51:24.967413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.048 qpair failed and we were unable to recover it.
00:30:12.048 [2024-07-23 01:51:24.967580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.048 [2024-07-23 01:51:24.967777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.048 [2024-07-23 01:51:24.967801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.048 qpair failed and we were unable to recover it.
00:30:12.048 [2024-07-23 01:51:24.967979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.049 [2024-07-23 01:51:24.968165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.049 [2024-07-23 01:51:24.968204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.049 qpair failed and we were unable to recover it.
00:30:12.049 [2024-07-23 01:51:24.968372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.049 [2024-07-23 01:51:24.968540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.049 [2024-07-23 01:51:24.968564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.049 qpair failed and we were unable to recover it.
00:30:12.049 [2024-07-23 01:51:24.968814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.049 [2024-07-23 01:51:24.968960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.049 [2024-07-23 01:51:24.968985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.049 qpair failed and we were unable to recover it.
00:30:12.049 [2024-07-23 01:51:24.969164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.049 [2024-07-23 01:51:24.969375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.049 [2024-07-23 01:51:24.969401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.049 qpair failed and we were unable to recover it.
00:30:12.049 [2024-07-23 01:51:24.969571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.049 [2024-07-23 01:51:24.969732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.049 [2024-07-23 01:51:24.969756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.049 qpair failed and we were unable to recover it.
00:30:12.049 [2024-07-23 01:51:24.969918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.049 [2024-07-23 01:51:24.970044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.049 [2024-07-23 01:51:24.970068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.049 qpair failed and we were unable to recover it.
00:30:12.049 [2024-07-23 01:51:24.970366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.049 [2024-07-23 01:51:24.970543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.049 [2024-07-23 01:51:24.970569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.049 qpair failed and we were unable to recover it.
00:30:12.049 [2024-07-23 01:51:24.970779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.049 [2024-07-23 01:51:24.970911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.049 [2024-07-23 01:51:24.970935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.049 qpair failed and we were unable to recover it.
00:30:12.049 [2024-07-23 01:51:24.971140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.049 [2024-07-23 01:51:24.971351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.049 [2024-07-23 01:51:24.971378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.049 qpair failed and we were unable to recover it.
00:30:12.049 [2024-07-23 01:51:24.971576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.049 [2024-07-23 01:51:24.971762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.049 [2024-07-23 01:51:24.971788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.049 qpair failed and we were unable to recover it.
00:30:12.049 [2024-07-23 01:51:24.972007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.049 [2024-07-23 01:51:24.972236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.049 [2024-07-23 01:51:24.972263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.049 qpair failed and we were unable to recover it.
00:30:12.049 [2024-07-23 01:51:24.972435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.049 [2024-07-23 01:51:24.972586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.049 [2024-07-23 01:51:24.972619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.049 qpair failed and we were unable to recover it.
00:30:12.049 [2024-07-23 01:51:24.972829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.049 [2024-07-23 01:51:24.972974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.049 [2024-07-23 01:51:24.972998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.049 qpair failed and we were unable to recover it.
00:30:12.049 [2024-07-23 01:51:24.973155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.049 [2024-07-23 01:51:24.973368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.049 [2024-07-23 01:51:24.973394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.049 qpair failed and we were unable to recover it.
00:30:12.049 [2024-07-23 01:51:24.973609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.049 [2024-07-23 01:51:24.973794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.049 [2024-07-23 01:51:24.973820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.049 qpair failed and we were unable to recover it.
00:30:12.049 [2024-07-23 01:51:24.974002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.049 [2024-07-23 01:51:24.974283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.049 [2024-07-23 01:51:24.974326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.049 qpair failed and we were unable to recover it.
00:30:12.049 [2024-07-23 01:51:24.974531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.049 [2024-07-23 01:51:24.974716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.049 [2024-07-23 01:51:24.974742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.049 qpair failed and we were unable to recover it.
00:30:12.049 [2024-07-23 01:51:24.974930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.049 [2024-07-23 01:51:24.975115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.049 [2024-07-23 01:51:24.975139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.049 qpair failed and we were unable to recover it.
00:30:12.049 [2024-07-23 01:51:24.975317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.049 [2024-07-23 01:51:24.975471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.049 [2024-07-23 01:51:24.975497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.049 qpair failed and we were unable to recover it.
00:30:12.049 [2024-07-23 01:51:24.975655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.049 [2024-07-23 01:51:24.975859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.049 [2024-07-23 01:51:24.975885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.049 qpair failed and we were unable to recover it.
00:30:12.049 [2024-07-23 01:51:24.976076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.049 [2024-07-23 01:51:24.976268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.049 [2024-07-23 01:51:24.976291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.049 qpair failed and we were unable to recover it.
00:30:12.049 [2024-07-23 01:51:24.976479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.049 [2024-07-23 01:51:24.976643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.049 [2024-07-23 01:51:24.976669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.049 qpair failed and we were unable to recover it.
00:30:12.049 [2024-07-23 01:51:24.976862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.049 [2024-07-23 01:51:24.977133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.049 [2024-07-23 01:51:24.977187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.049 qpair failed and we were unable to recover it.
00:30:12.049 [2024-07-23 01:51:24.977347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.049 [2024-07-23 01:51:24.977537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.049 [2024-07-23 01:51:24.977560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.049 qpair failed and we were unable to recover it.
00:30:12.049 [2024-07-23 01:51:24.977723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.049 [2024-07-23 01:51:24.977886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.049 [2024-07-23 01:51:24.977910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.049 qpair failed and we were unable to recover it. 00:30:12.049 [2024-07-23 01:51:24.978084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.050 [2024-07-23 01:51:24.978268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.050 [2024-07-23 01:51:24.978294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.050 qpair failed and we were unable to recover it. 00:30:12.050 [2024-07-23 01:51:24.978477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.050 [2024-07-23 01:51:24.978659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.050 [2024-07-23 01:51:24.978686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.050 qpair failed and we were unable to recover it. 00:30:12.050 [2024-07-23 01:51:24.978868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.050 [2024-07-23 01:51:24.979079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.050 [2024-07-23 01:51:24.979102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.050 qpair failed and we were unable to recover it. 
00:30:12.050 [2024-07-23 01:51:24.979251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.050 [2024-07-23 01:51:24.979463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.050 [2024-07-23 01:51:24.979490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.050 qpair failed and we were unable to recover it. 00:30:12.050 [2024-07-23 01:51:24.979704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.050 [2024-07-23 01:51:24.979864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.050 [2024-07-23 01:51:24.979902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.050 qpair failed and we were unable to recover it. 00:30:12.050 [2024-07-23 01:51:24.980177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.050 [2024-07-23 01:51:24.980369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.050 [2024-07-23 01:51:24.980392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.050 qpair failed and we were unable to recover it. 00:30:12.050 [2024-07-23 01:51:24.980583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.050 [2024-07-23 01:51:24.980775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.050 [2024-07-23 01:51:24.980799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.050 qpair failed and we were unable to recover it. 
00:30:12.050 [2024-07-23 01:51:24.980958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.050 [2024-07-23 01:51:24.981082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.050 [2024-07-23 01:51:24.981106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.050 qpair failed and we were unable to recover it. 00:30:12.050 [2024-07-23 01:51:24.981300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.050 [2024-07-23 01:51:24.981458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.050 [2024-07-23 01:51:24.981487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.050 qpair failed and we were unable to recover it. 00:30:12.050 [2024-07-23 01:51:24.981688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.050 [2024-07-23 01:51:24.981822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.050 [2024-07-23 01:51:24.981846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.050 qpair failed and we were unable to recover it. 00:30:12.050 [2024-07-23 01:51:24.982066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.050 [2024-07-23 01:51:24.982241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.050 [2024-07-23 01:51:24.982268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.050 qpair failed and we were unable to recover it. 
00:30:12.050 [2024-07-23 01:51:24.982474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.050 [2024-07-23 01:51:24.982636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.050 [2024-07-23 01:51:24.982664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.050 qpair failed and we were unable to recover it. 00:30:12.050 [2024-07-23 01:51:24.982849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.050 [2024-07-23 01:51:24.983028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.050 [2024-07-23 01:51:24.983054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.050 qpair failed and we were unable to recover it. 00:30:12.050 [2024-07-23 01:51:24.983242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.050 [2024-07-23 01:51:24.983376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.050 [2024-07-23 01:51:24.983400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.050 qpair failed and we were unable to recover it. 00:30:12.050 [2024-07-23 01:51:24.983563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.050 [2024-07-23 01:51:24.983752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.050 [2024-07-23 01:51:24.983780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.050 qpair failed and we were unable to recover it. 
00:30:12.050 [2024-07-23 01:51:24.983951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.050 [2024-07-23 01:51:24.984131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.050 [2024-07-23 01:51:24.984158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.050 qpair failed and we were unable to recover it. 00:30:12.050 [2024-07-23 01:51:24.984350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.050 [2024-07-23 01:51:24.984510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.050 [2024-07-23 01:51:24.984538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.050 qpair failed and we were unable to recover it. 00:30:12.050 [2024-07-23 01:51:24.984730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.050 [2024-07-23 01:51:24.984890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.050 [2024-07-23 01:51:24.984914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.050 qpair failed and we were unable to recover it. 00:30:12.050 [2024-07-23 01:51:24.985083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.050 [2024-07-23 01:51:24.985244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.050 [2024-07-23 01:51:24.985268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.050 qpair failed and we were unable to recover it. 
00:30:12.050 [2024-07-23 01:51:24.985482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.050 [2024-07-23 01:51:24.985632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.050 [2024-07-23 01:51:24.985658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.050 qpair failed and we were unable to recover it. 00:30:12.050 [2024-07-23 01:51:24.985822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.050 [2024-07-23 01:51:24.985982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.050 [2024-07-23 01:51:24.986023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.050 qpair failed and we were unable to recover it. 00:30:12.050 [2024-07-23 01:51:24.986328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.050 [2024-07-23 01:51:24.986503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.050 [2024-07-23 01:51:24.986530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.050 qpair failed and we were unable to recover it. 00:30:12.050 [2024-07-23 01:51:24.986676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.050 [2024-07-23 01:51:24.986851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.050 [2024-07-23 01:51:24.986877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.050 qpair failed and we were unable to recover it. 
00:30:12.050 [2024-07-23 01:51:24.987068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.050 [2024-07-23 01:51:24.987228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.050 [2024-07-23 01:51:24.987251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.050 qpair failed and we were unable to recover it. 00:30:12.050 [2024-07-23 01:51:24.987413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.050 [2024-07-23 01:51:24.987573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.050 [2024-07-23 01:51:24.987597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.050 qpair failed and we were unable to recover it. 00:30:12.050 [2024-07-23 01:51:24.987762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.050 [2024-07-23 01:51:24.987928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.050 [2024-07-23 01:51:24.987951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.050 qpair failed and we were unable to recover it. 00:30:12.050 [2024-07-23 01:51:24.988121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.050 [2024-07-23 01:51:24.988278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.051 [2024-07-23 01:51:24.988302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.051 qpair failed and we were unable to recover it. 
00:30:12.051 [2024-07-23 01:51:24.988484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.051 [2024-07-23 01:51:24.988717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.051 [2024-07-23 01:51:24.988744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.051 qpair failed and we were unable to recover it. 00:30:12.051 [2024-07-23 01:51:24.988934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.051 [2024-07-23 01:51:24.989096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.051 [2024-07-23 01:51:24.989137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.051 qpair failed and we were unable to recover it. 00:30:12.051 [2024-07-23 01:51:24.989455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.051 [2024-07-23 01:51:24.989687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.051 [2024-07-23 01:51:24.989714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.051 qpair failed and we were unable to recover it. 00:30:12.051 [2024-07-23 01:51:24.989864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.051 [2024-07-23 01:51:24.990069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.051 [2024-07-23 01:51:24.990093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.051 qpair failed and we were unable to recover it. 
00:30:12.051 [2024-07-23 01:51:24.990269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.051 [2024-07-23 01:51:24.990437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.051 [2024-07-23 01:51:24.990463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.051 qpair failed and we were unable to recover it. 00:30:12.051 [2024-07-23 01:51:24.990646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.051 [2024-07-23 01:51:24.990811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.051 [2024-07-23 01:51:24.990834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.051 qpair failed and we were unable to recover it. 00:30:12.051 [2024-07-23 01:51:24.990998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.051 [2024-07-23 01:51:24.991294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.051 [2024-07-23 01:51:24.991321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.051 qpair failed and we were unable to recover it. 00:30:12.051 [2024-07-23 01:51:24.991499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.051 [2024-07-23 01:51:24.991652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.051 [2024-07-23 01:51:24.991680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.051 qpair failed and we were unable to recover it. 
00:30:12.051 [2024-07-23 01:51:24.991872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.051 [2024-07-23 01:51:24.992038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.051 [2024-07-23 01:51:24.992063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.051 qpair failed and we were unable to recover it. 00:30:12.051 [2024-07-23 01:51:24.992195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.051 [2024-07-23 01:51:24.992359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.051 [2024-07-23 01:51:24.992382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.051 qpair failed and we were unable to recover it. 00:30:12.051 [2024-07-23 01:51:24.992547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.051 [2024-07-23 01:51:24.992745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.051 [2024-07-23 01:51:24.992769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.051 qpair failed and we were unable to recover it. 00:30:12.051 [2024-07-23 01:51:24.992936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.051 [2024-07-23 01:51:24.993104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.051 [2024-07-23 01:51:24.993188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.051 qpair failed and we were unable to recover it. 
00:30:12.051 [2024-07-23 01:51:24.993373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.051 [2024-07-23 01:51:24.993528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.051 [2024-07-23 01:51:24.993554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.051 qpair failed and we were unable to recover it. 00:30:12.051 [2024-07-23 01:51:24.993738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.051 [2024-07-23 01:51:24.993910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.051 [2024-07-23 01:51:24.993937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.051 qpair failed and we were unable to recover it. 00:30:12.051 [2024-07-23 01:51:24.994207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.051 [2024-07-23 01:51:24.994415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.051 [2024-07-23 01:51:24.994441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.051 qpair failed and we were unable to recover it. 00:30:12.051 [2024-07-23 01:51:24.994641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.051 [2024-07-23 01:51:24.994838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.051 [2024-07-23 01:51:24.994862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.051 qpair failed and we were unable to recover it. 
00:30:12.051 [2024-07-23 01:51:24.995051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.051 [2024-07-23 01:51:24.995280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.051 [2024-07-23 01:51:24.995330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.051 qpair failed and we were unable to recover it. 00:30:12.051 [2024-07-23 01:51:24.995524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.051 [2024-07-23 01:51:24.995708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.051 [2024-07-23 01:51:24.995736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.051 qpair failed and we were unable to recover it. 00:30:12.051 [2024-07-23 01:51:24.995913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.051 [2024-07-23 01:51:24.996146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.051 [2024-07-23 01:51:24.996171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.051 qpair failed and we were unable to recover it. 00:30:12.051 [2024-07-23 01:51:24.996383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.051 [2024-07-23 01:51:24.996556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.051 [2024-07-23 01:51:24.996582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.051 qpair failed and we were unable to recover it. 
00:30:12.051 [2024-07-23 01:51:24.996751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.051 [2024-07-23 01:51:24.996892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.051 [2024-07-23 01:51:24.996933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.051 qpair failed and we were unable to recover it. 00:30:12.051 [2024-07-23 01:51:24.997113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.051 [2024-07-23 01:51:24.997281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.051 [2024-07-23 01:51:24.997309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.051 qpair failed and we were unable to recover it. 00:30:12.051 [2024-07-23 01:51:24.997522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.051 [2024-07-23 01:51:24.997730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.051 [2024-07-23 01:51:24.997758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.051 qpair failed and we were unable to recover it. 00:30:12.051 [2024-07-23 01:51:24.997923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.051 [2024-07-23 01:51:24.998090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.051 [2024-07-23 01:51:24.998113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.051 qpair failed and we were unable to recover it. 
00:30:12.051 [2024-07-23 01:51:24.998298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.051 [2024-07-23 01:51:24.998502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.051 [2024-07-23 01:51:24.998529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.051 qpair failed and we were unable to recover it. 00:30:12.051 [2024-07-23 01:51:24.998708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.051 [2024-07-23 01:51:24.998911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.051 [2024-07-23 01:51:24.998937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.051 qpair failed and we were unable to recover it. 00:30:12.051 [2024-07-23 01:51:24.999120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.051 [2024-07-23 01:51:24.999402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.051 [2024-07-23 01:51:24.999457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.051 qpair failed and we were unable to recover it. 00:30:12.051 [2024-07-23 01:51:24.999642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.052 [2024-07-23 01:51:24.999859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.052 [2024-07-23 01:51:24.999886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.052 qpair failed and we were unable to recover it. 
00:30:12.052 [2024-07-23 01:51:25.000052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.052 [2024-07-23 01:51:25.000220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.052 [2024-07-23 01:51:25.000244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.052 qpair failed and we were unable to recover it. 00:30:12.052 [2024-07-23 01:51:25.000409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.052 [2024-07-23 01:51:25.000594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.052 [2024-07-23 01:51:25.000628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.052 qpair failed and we were unable to recover it. 00:30:12.052 [2024-07-23 01:51:25.000816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.052 [2024-07-23 01:51:25.001155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.052 [2024-07-23 01:51:25.001209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.052 qpair failed and we were unable to recover it. 00:30:12.052 [2024-07-23 01:51:25.001400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.052 [2024-07-23 01:51:25.001564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.052 [2024-07-23 01:51:25.001588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.052 qpair failed and we were unable to recover it. 
00:30:12.055 [2024-07-23 01:51:25.036452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.055 [2024-07-23 01:51:25.036666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.055 [2024-07-23 01:51:25.036693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.055 qpair failed and we were unable to recover it. 00:30:12.055 [2024-07-23 01:51:25.036855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.055 [2024-07-23 01:51:25.037017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.055 [2024-07-23 01:51:25.037041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.055 qpair failed and we were unable to recover it. 00:30:12.055 [2024-07-23 01:51:25.037247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.055 [2024-07-23 01:51:25.037584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.055 [2024-07-23 01:51:25.037642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.055 qpair failed and we were unable to recover it. 00:30:12.055 [2024-07-23 01:51:25.037849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.055 [2024-07-23 01:51:25.038028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.055 [2024-07-23 01:51:25.038055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.055 qpair failed and we were unable to recover it. 
00:30:12.055 [2024-07-23 01:51:25.038261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.055 [2024-07-23 01:51:25.038512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.055 [2024-07-23 01:51:25.038539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.055 qpair failed and we were unable to recover it. 00:30:12.055 [2024-07-23 01:51:25.038729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.055 [2024-07-23 01:51:25.038866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.055 [2024-07-23 01:51:25.038891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.055 qpair failed and we were unable to recover it. 00:30:12.055 [2024-07-23 01:51:25.039033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.055 [2024-07-23 01:51:25.039249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.055 [2024-07-23 01:51:25.039298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.055 qpair failed and we were unable to recover it. 00:30:12.055 [2024-07-23 01:51:25.039474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.055 [2024-07-23 01:51:25.039660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.055 [2024-07-23 01:51:25.039718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.055 qpair failed and we were unable to recover it. 
00:30:12.055 [2024-07-23 01:51:25.039907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.055 [2024-07-23 01:51:25.040120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.055 [2024-07-23 01:51:25.040144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.055 qpair failed and we were unable to recover it. 00:30:12.055 [2024-07-23 01:51:25.040274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.055 [2024-07-23 01:51:25.040436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.055 [2024-07-23 01:51:25.040462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.055 qpair failed and we were unable to recover it. 00:30:12.055 [2024-07-23 01:51:25.040662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.055 [2024-07-23 01:51:25.040814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.055 [2024-07-23 01:51:25.040839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.055 qpair failed and we were unable to recover it. 00:30:12.055 [2024-07-23 01:51:25.041021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.055 [2024-07-23 01:51:25.041323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.055 [2024-07-23 01:51:25.041373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.055 qpair failed and we were unable to recover it. 
00:30:12.055 [2024-07-23 01:51:25.041537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.055 [2024-07-23 01:51:25.041728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.055 [2024-07-23 01:51:25.041769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.055 qpair failed and we were unable to recover it. 00:30:12.055 [2024-07-23 01:51:25.041943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.055 [2024-07-23 01:51:25.042101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.055 [2024-07-23 01:51:25.042140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.055 qpair failed and we were unable to recover it. 00:30:12.055 [2024-07-23 01:51:25.042338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.055 [2024-07-23 01:51:25.042470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.055 [2024-07-23 01:51:25.042494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.055 qpair failed and we were unable to recover it. 00:30:12.055 [2024-07-23 01:51:25.042685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.055 [2024-07-23 01:51:25.042890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.055 [2024-07-23 01:51:25.042916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.055 qpair failed and we were unable to recover it. 
00:30:12.055 [2024-07-23 01:51:25.043123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.055 [2024-07-23 01:51:25.043300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.055 [2024-07-23 01:51:25.043324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.055 qpair failed and we were unable to recover it. 00:30:12.055 [2024-07-23 01:51:25.043487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.055 [2024-07-23 01:51:25.043677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.055 [2024-07-23 01:51:25.043702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.056 qpair failed and we were unable to recover it. 00:30:12.056 [2024-07-23 01:51:25.043842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.056 [2024-07-23 01:51:25.044037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.056 [2024-07-23 01:51:25.044061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.056 qpair failed and we were unable to recover it. 00:30:12.056 [2024-07-23 01:51:25.044194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.056 [2024-07-23 01:51:25.044385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.056 [2024-07-23 01:51:25.044427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.056 qpair failed and we were unable to recover it. 
00:30:12.056 [2024-07-23 01:51:25.044609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.056 [2024-07-23 01:51:25.044780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.056 [2024-07-23 01:51:25.044807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.056 qpair failed and we were unable to recover it. 00:30:12.056 [2024-07-23 01:51:25.044991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.056 [2024-07-23 01:51:25.045134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.056 [2024-07-23 01:51:25.045158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.056 qpair failed and we were unable to recover it. 00:30:12.056 [2024-07-23 01:51:25.045306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.056 [2024-07-23 01:51:25.045504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.056 [2024-07-23 01:51:25.045528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.056 qpair failed and we were unable to recover it. 00:30:12.056 [2024-07-23 01:51:25.045675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.056 [2024-07-23 01:51:25.045870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.056 [2024-07-23 01:51:25.045897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.056 qpair failed and we were unable to recover it. 
00:30:12.056 [2024-07-23 01:51:25.046046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.056 [2024-07-23 01:51:25.046228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.056 [2024-07-23 01:51:25.046256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.056 qpair failed and we were unable to recover it. 00:30:12.056 [2024-07-23 01:51:25.046462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.056 [2024-07-23 01:51:25.046628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.056 [2024-07-23 01:51:25.046653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.056 qpair failed and we were unable to recover it. 00:30:12.056 [2024-07-23 01:51:25.046829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.056 [2024-07-23 01:51:25.047047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.056 [2024-07-23 01:51:25.047097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.056 qpair failed and we were unable to recover it. 00:30:12.056 [2024-07-23 01:51:25.047287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.056 [2024-07-23 01:51:25.047494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.056 [2024-07-23 01:51:25.047521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.056 qpair failed and we were unable to recover it. 
00:30:12.056 [2024-07-23 01:51:25.047710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.056 [2024-07-23 01:51:25.047917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.056 [2024-07-23 01:51:25.047943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.056 qpair failed and we were unable to recover it. 00:30:12.056 [2024-07-23 01:51:25.048117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.056 [2024-07-23 01:51:25.048259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.056 [2024-07-23 01:51:25.048303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.056 qpair failed and we were unable to recover it. 00:30:12.056 [2024-07-23 01:51:25.048486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.056 [2024-07-23 01:51:25.048675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.056 [2024-07-23 01:51:25.048700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.056 qpair failed and we were unable to recover it. 00:30:12.056 [2024-07-23 01:51:25.048894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.056 [2024-07-23 01:51:25.049063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.056 [2024-07-23 01:51:25.049090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.056 qpair failed and we were unable to recover it. 
00:30:12.056 [2024-07-23 01:51:25.049265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.056 [2024-07-23 01:51:25.049447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.056 [2024-07-23 01:51:25.049471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.056 qpair failed and we were unable to recover it. 00:30:12.056 [2024-07-23 01:51:25.049608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.056 [2024-07-23 01:51:25.049791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.056 [2024-07-23 01:51:25.049817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.056 qpair failed and we were unable to recover it. 00:30:12.056 [2024-07-23 01:51:25.050015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.056 [2024-07-23 01:51:25.050183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.056 [2024-07-23 01:51:25.050225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.056 qpair failed and we were unable to recover it. 00:30:12.056 [2024-07-23 01:51:25.050374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.056 [2024-07-23 01:51:25.050608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.056 [2024-07-23 01:51:25.050636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.056 qpair failed and we were unable to recover it. 
00:30:12.056 [2024-07-23 01:51:25.050819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.056 [2024-07-23 01:51:25.050975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.056 [2024-07-23 01:51:25.051002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.056 qpair failed and we were unable to recover it. 00:30:12.056 [2024-07-23 01:51:25.051192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.056 [2024-07-23 01:51:25.051351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.056 [2024-07-23 01:51:25.051375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.056 qpair failed and we were unable to recover it. 00:30:12.056 [2024-07-23 01:51:25.051525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.056 [2024-07-23 01:51:25.051745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.056 [2024-07-23 01:51:25.051772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.056 qpair failed and we were unable to recover it. 00:30:12.056 [2024-07-23 01:51:25.051945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.056 [2024-07-23 01:51:25.052099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.056 [2024-07-23 01:51:25.052126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.056 qpair failed and we were unable to recover it. 
00:30:12.056 [2024-07-23 01:51:25.052293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.056 [2024-07-23 01:51:25.052459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.056 [2024-07-23 01:51:25.052483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.056 qpair failed and we were unable to recover it. 00:30:12.056 [2024-07-23 01:51:25.052646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.056 [2024-07-23 01:51:25.052806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.056 [2024-07-23 01:51:25.052830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.056 qpair failed and we were unable to recover it. 00:30:12.056 [2024-07-23 01:51:25.052987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.056 [2024-07-23 01:51:25.053125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.056 [2024-07-23 01:51:25.053149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.056 qpair failed and we were unable to recover it. 00:30:12.056 [2024-07-23 01:51:25.053349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.056 [2024-07-23 01:51:25.053523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.056 [2024-07-23 01:51:25.053550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.056 qpair failed and we were unable to recover it. 
00:30:12.056 [2024-07-23 01:51:25.053763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.056 [2024-07-23 01:51:25.053958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.056 [2024-07-23 01:51:25.053982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.056 qpair failed and we were unable to recover it. 00:30:12.056 [2024-07-23 01:51:25.054173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.056 [2024-07-23 01:51:25.054419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.057 [2024-07-23 01:51:25.054446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.057 qpair failed and we were unable to recover it. 00:30:12.057 [2024-07-23 01:51:25.054629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.057 [2024-07-23 01:51:25.054816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.057 [2024-07-23 01:51:25.054841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.057 qpair failed and we were unable to recover it. 00:30:12.057 [2024-07-23 01:51:25.055027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.057 [2024-07-23 01:51:25.055244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.057 [2024-07-23 01:51:25.055269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.057 qpair failed and we were unable to recover it. 
00:30:12.057 [2024-07-23 01:51:25.055404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.057 [2024-07-23 01:51:25.055564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.057 [2024-07-23 01:51:25.055595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.057 qpair failed and we were unable to recover it. 00:30:12.057 [2024-07-23 01:51:25.055785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.057 [2024-07-23 01:51:25.055947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.057 [2024-07-23 01:51:25.055987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.057 qpair failed and we were unable to recover it. 00:30:12.057 [2024-07-23 01:51:25.056277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.057 [2024-07-23 01:51:25.056471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.057 [2024-07-23 01:51:25.056498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.057 qpair failed and we were unable to recover it. 00:30:12.057 [2024-07-23 01:51:25.056693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.057 [2024-07-23 01:51:25.056864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.057 [2024-07-23 01:51:25.056888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.057 qpair failed and we were unable to recover it. 
00:30:12.057 [2024-07-23 01:51:25.057080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.057 [2024-07-23 01:51:25.057312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.057 [2024-07-23 01:51:25.057362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.057 qpair failed and we were unable to recover it. 00:30:12.057 [2024-07-23 01:51:25.057547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.057 [2024-07-23 01:51:25.057686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.057 [2024-07-23 01:51:25.057727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.057 qpair failed and we were unable to recover it. 00:30:12.057 [2024-07-23 01:51:25.057913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.057 [2024-07-23 01:51:25.058092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.057 [2024-07-23 01:51:25.058119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.057 qpair failed and we were unable to recover it. 00:30:12.057 [2024-07-23 01:51:25.058300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.057 [2024-07-23 01:51:25.058486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.057 [2024-07-23 01:51:25.058510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.057 qpair failed and we were unable to recover it. 
00:30:12.057 [2024-07-23 01:51:25.058692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.057 [2024-07-23 01:51:25.058877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.057 [2024-07-23 01:51:25.058905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.057 qpair failed and we were unable to recover it.
00:30:12.060 [... the same "connect() failed, errno = 111" / nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420" / "qpair failed and we were unable to recover it." sequence repeats for each retry through 2024-07-23 01:51:25.094539 ...]
00:30:12.060 [2024-07-23 01:51:25.094728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.060 [2024-07-23 01:51:25.094884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.060 [2024-07-23 01:51:25.094913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.060 qpair failed and we were unable to recover it. 00:30:12.060 [2024-07-23 01:51:25.095100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.060 [2024-07-23 01:51:25.095327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.060 [2024-07-23 01:51:25.095381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.060 qpair failed and we were unable to recover it. 00:30:12.060 [2024-07-23 01:51:25.095592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.060 [2024-07-23 01:51:25.095724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.060 [2024-07-23 01:51:25.095749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.060 qpair failed and we were unable to recover it. 00:30:12.060 [2024-07-23 01:51:25.095916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.060 [2024-07-23 01:51:25.096124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.060 [2024-07-23 01:51:25.096148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.060 qpair failed and we were unable to recover it. 
00:30:12.060 [2024-07-23 01:51:25.096307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.060 [2024-07-23 01:51:25.096498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.060 [2024-07-23 01:51:25.096524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.060 qpair failed and we were unable to recover it. 00:30:12.060 [2024-07-23 01:51:25.096744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.060 [2024-07-23 01:51:25.096952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.060 [2024-07-23 01:51:25.096979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.060 qpair failed and we were unable to recover it. 00:30:12.060 [2024-07-23 01:51:25.097161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.060 [2024-07-23 01:51:25.097348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.060 [2024-07-23 01:51:25.097389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.060 qpair failed and we were unable to recover it. 00:30:12.060 [2024-07-23 01:51:25.097540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.060 [2024-07-23 01:51:25.097757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.060 [2024-07-23 01:51:25.097782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.060 qpair failed and we were unable to recover it. 
00:30:12.060 [2024-07-23 01:51:25.097967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.060 [2024-07-23 01:51:25.098177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.061 [2024-07-23 01:51:25.098201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.061 qpair failed and we were unable to recover it. 00:30:12.061 [2024-07-23 01:51:25.098365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.061 [2024-07-23 01:51:25.098543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.061 [2024-07-23 01:51:25.098570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.061 qpair failed and we were unable to recover it. 00:30:12.061 [2024-07-23 01:51:25.098758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.061 [2024-07-23 01:51:25.098954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.061 [2024-07-23 01:51:25.098979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.061 qpair failed and we were unable to recover it. 00:30:12.061 [2024-07-23 01:51:25.099140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.061 [2024-07-23 01:51:25.099295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.061 [2024-07-23 01:51:25.099357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.061 qpair failed and we were unable to recover it. 
00:30:12.061 [2024-07-23 01:51:25.099574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.061 [2024-07-23 01:51:25.099741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.061 [2024-07-23 01:51:25.099785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.061 qpair failed and we were unable to recover it. 00:30:12.061 [2024-07-23 01:51:25.100002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.061 [2024-07-23 01:51:25.100177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.061 [2024-07-23 01:51:25.100204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.061 qpair failed and we were unable to recover it. 00:30:12.061 [2024-07-23 01:51:25.100526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.061 [2024-07-23 01:51:25.100756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.061 [2024-07-23 01:51:25.100781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.061 qpair failed and we were unable to recover it. 00:30:12.061 [2024-07-23 01:51:25.100939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.061 [2024-07-23 01:51:25.101168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.061 [2024-07-23 01:51:25.101221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.061 qpair failed and we were unable to recover it. 
00:30:12.061 [2024-07-23 01:51:25.101381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.061 [2024-07-23 01:51:25.101571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.061 [2024-07-23 01:51:25.101595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.061 qpair failed and we were unable to recover it. 00:30:12.061 [2024-07-23 01:51:25.101773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.061 [2024-07-23 01:51:25.101917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.061 [2024-07-23 01:51:25.101957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.061 qpair failed and we were unable to recover it. 00:30:12.061 [2024-07-23 01:51:25.102136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.061 [2024-07-23 01:51:25.102367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.061 [2024-07-23 01:51:25.102391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.061 qpair failed and we were unable to recover it. 00:30:12.061 [2024-07-23 01:51:25.102610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.061 [2024-07-23 01:51:25.102783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.061 [2024-07-23 01:51:25.102807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.061 qpair failed and we were unable to recover it. 
00:30:12.061 [2024-07-23 01:51:25.102981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.061 [2024-07-23 01:51:25.103151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.061 [2024-07-23 01:51:25.103180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.061 qpair failed and we were unable to recover it. 00:30:12.061 [2024-07-23 01:51:25.103364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.061 [2024-07-23 01:51:25.103497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.061 [2024-07-23 01:51:25.103521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.061 qpair failed and we were unable to recover it. 00:30:12.061 [2024-07-23 01:51:25.103694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.061 [2024-07-23 01:51:25.103825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.061 [2024-07-23 01:51:25.103849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.061 qpair failed and we were unable to recover it. 00:30:12.061 [2024-07-23 01:51:25.103988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.061 [2024-07-23 01:51:25.104125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.061 [2024-07-23 01:51:25.104149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.061 qpair failed and we were unable to recover it. 
00:30:12.061 [2024-07-23 01:51:25.104358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.061 [2024-07-23 01:51:25.104537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.061 [2024-07-23 01:51:25.104563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.061 qpair failed and we were unable to recover it. 00:30:12.061 [2024-07-23 01:51:25.104754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.061 [2024-07-23 01:51:25.104883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.061 [2024-07-23 01:51:25.104907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.061 qpair failed and we were unable to recover it. 00:30:12.061 [2024-07-23 01:51:25.105098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.061 [2024-07-23 01:51:25.105334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.061 [2024-07-23 01:51:25.105358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.061 qpair failed and we were unable to recover it. 00:30:12.061 [2024-07-23 01:51:25.105571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.061 [2024-07-23 01:51:25.105750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.061 [2024-07-23 01:51:25.105775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.061 qpair failed and we were unable to recover it. 
00:30:12.061 [2024-07-23 01:51:25.105936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.061 [2024-07-23 01:51:25.106161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.061 [2024-07-23 01:51:25.106185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.061 qpair failed and we were unable to recover it. 00:30:12.061 [2024-07-23 01:51:25.106324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.061 [2024-07-23 01:51:25.106478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.061 [2024-07-23 01:51:25.106519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.061 qpair failed and we were unable to recover it. 00:30:12.061 [2024-07-23 01:51:25.106717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.061 [2024-07-23 01:51:25.106885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.061 [2024-07-23 01:51:25.106925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.061 qpair failed and we were unable to recover it. 00:30:12.061 [2024-07-23 01:51:25.107078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.061 [2024-07-23 01:51:25.107259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.061 [2024-07-23 01:51:25.107283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.061 qpair failed and we were unable to recover it. 
00:30:12.061 [2024-07-23 01:51:25.107471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.061 [2024-07-23 01:51:25.107710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.061 [2024-07-23 01:51:25.107737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.061 qpair failed and we were unable to recover it. 00:30:12.061 [2024-07-23 01:51:25.107950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.061 [2024-07-23 01:51:25.108231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.061 [2024-07-23 01:51:25.108283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.061 qpair failed and we were unable to recover it. 00:30:12.061 [2024-07-23 01:51:25.108488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.061 [2024-07-23 01:51:25.108673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.061 [2024-07-23 01:51:25.108701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.061 qpair failed and we were unable to recover it. 00:30:12.061 [2024-07-23 01:51:25.108886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.061 [2024-07-23 01:51:25.109082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.061 [2024-07-23 01:51:25.109105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.061 qpair failed and we were unable to recover it. 
00:30:12.061 [2024-07-23 01:51:25.109269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.061 [2024-07-23 01:51:25.109459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.062 [2024-07-23 01:51:25.109485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.062 qpair failed and we were unable to recover it. 00:30:12.062 [2024-07-23 01:51:25.109677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.062 [2024-07-23 01:51:25.109828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.062 [2024-07-23 01:51:25.109854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.062 qpair failed and we were unable to recover it. 00:30:12.062 [2024-07-23 01:51:25.110087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.062 [2024-07-23 01:51:25.110261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.062 [2024-07-23 01:51:25.110287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.062 qpair failed and we were unable to recover it. 00:30:12.062 [2024-07-23 01:51:25.110434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.062 [2024-07-23 01:51:25.110626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.062 [2024-07-23 01:51:25.110653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.062 qpair failed and we were unable to recover it. 
00:30:12.062 [2024-07-23 01:51:25.110843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.062 [2024-07-23 01:51:25.110986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.062 [2024-07-23 01:51:25.111011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.062 qpair failed and we were unable to recover it. 00:30:12.062 [2024-07-23 01:51:25.111213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.062 [2024-07-23 01:51:25.111373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.062 [2024-07-23 01:51:25.111414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.062 qpair failed and we were unable to recover it. 00:30:12.062 [2024-07-23 01:51:25.111592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.062 [2024-07-23 01:51:25.111771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.062 [2024-07-23 01:51:25.111798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.062 qpair failed and we were unable to recover it. 00:30:12.062 [2024-07-23 01:51:25.111946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.062 [2024-07-23 01:51:25.112201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.062 [2024-07-23 01:51:25.112251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.062 qpair failed and we were unable to recover it. 
00:30:12.062 [2024-07-23 01:51:25.112441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.062 [2024-07-23 01:51:25.112581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.062 [2024-07-23 01:51:25.112604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.062 qpair failed and we were unable to recover it. 00:30:12.062 [2024-07-23 01:51:25.112749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.062 [2024-07-23 01:51:25.112937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.062 [2024-07-23 01:51:25.112960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.062 qpair failed and we were unable to recover it. 00:30:12.062 [2024-07-23 01:51:25.113156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.062 [2024-07-23 01:51:25.113496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.062 [2024-07-23 01:51:25.113546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.062 qpair failed and we were unable to recover it. 00:30:12.062 [2024-07-23 01:51:25.113736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.062 [2024-07-23 01:51:25.113890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.062 [2024-07-23 01:51:25.113924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.062 qpair failed and we were unable to recover it. 
00:30:12.062 [2024-07-23 01:51:25.114135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.062 [2024-07-23 01:51:25.114299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.062 [2024-07-23 01:51:25.114339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.062 qpair failed and we were unable to recover it. 00:30:12.062 [2024-07-23 01:51:25.114499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.062 [2024-07-23 01:51:25.114641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.062 [2024-07-23 01:51:25.114666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.062 qpair failed and we were unable to recover it. 00:30:12.062 [2024-07-23 01:51:25.114948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.062 [2024-07-23 01:51:25.115220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.062 [2024-07-23 01:51:25.115270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.062 qpair failed and we were unable to recover it. 00:30:12.062 [2024-07-23 01:51:25.115485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.062 [2024-07-23 01:51:25.115632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.062 [2024-07-23 01:51:25.115660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.062 qpair failed and we were unable to recover it. 
00:30:12.062 [2024-07-23 01:51:25.115832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.062 [2024-07-23 01:51:25.115993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.062 [2024-07-23 01:51:25.116016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.062 qpair failed and we were unable to recover it. 00:30:12.062 [2024-07-23 01:51:25.116178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.062 [2024-07-23 01:51:25.116367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.062 [2024-07-23 01:51:25.116429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.062 qpair failed and we were unable to recover it. 00:30:12.062 [2024-07-23 01:51:25.116655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.062 [2024-07-23 01:51:25.116820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.062 [2024-07-23 01:51:25.116850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.062 qpair failed and we were unable to recover it. 00:30:12.062 [2024-07-23 01:51:25.117022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.062 [2024-07-23 01:51:25.117211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.062 [2024-07-23 01:51:25.117238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.062 qpair failed and we were unable to recover it. 
00:30:12.062 [2024-07-23 01:51:25.117414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.062 [2024-07-23 01:51:25.117572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.062 [2024-07-23 01:51:25.117596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.062 qpair failed and we were unable to recover it.
00:30:12.062 [2024-07-23 01:51:25.117796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.062 [2024-07-23 01:51:25.118019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.062 [2024-07-23 01:51:25.118046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.062 qpair failed and we were unable to recover it.
00:30:12.062 [2024-07-23 01:51:25.118209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.062 [2024-07-23 01:51:25.118377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.062 [2024-07-23 01:51:25.118400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.062 qpair failed and we were unable to recover it.
00:30:12.062 [2024-07-23 01:51:25.118560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.062 [2024-07-23 01:51:25.118759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.062 [2024-07-23 01:51:25.118787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.062 qpair failed and we were unable to recover it.
00:30:12.062 [2024-07-23 01:51:25.118974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.062 [2024-07-23 01:51:25.119155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.062 [2024-07-23 01:51:25.119179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.062 qpair failed and we were unable to recover it.
00:30:12.062 [2024-07-23 01:51:25.119322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.062 [2024-07-23 01:51:25.119457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.062 [2024-07-23 01:51:25.119496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.062 qpair failed and we were unable to recover it.
00:30:12.062 [2024-07-23 01:51:25.119690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.062 [2024-07-23 01:51:25.119856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.062 [2024-07-23 01:51:25.119881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.062 qpair failed and we were unable to recover it.
00:30:12.062 [2024-07-23 01:51:25.120064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.062 [2024-07-23 01:51:25.120291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.062 [2024-07-23 01:51:25.120317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.062 qpair failed and we were unable to recover it.
00:30:12.062 [2024-07-23 01:51:25.120475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.334 [2024-07-23 01:51:25.120669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.334 [2024-07-23 01:51:25.120694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.334 qpair failed and we were unable to recover it.
00:30:12.334 [2024-07-23 01:51:25.120862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.334 [2024-07-23 01:51:25.121035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.334 [2024-07-23 01:51:25.121058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.334 qpair failed and we were unable to recover it.
00:30:12.334 [2024-07-23 01:51:25.121206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.334 [2024-07-23 01:51:25.121378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.334 [2024-07-23 01:51:25.121418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.334 qpair failed and we were unable to recover it.
00:30:12.334 [2024-07-23 01:51:25.121564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.334 [2024-07-23 01:51:25.121735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.334 [2024-07-23 01:51:25.121761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.334 qpair failed and we were unable to recover it.
00:30:12.334 [2024-07-23 01:51:25.121902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.334 [2024-07-23 01:51:25.122063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.334 [2024-07-23 01:51:25.122087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.334 qpair failed and we were unable to recover it.
00:30:12.334 [2024-07-23 01:51:25.122283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.334 [2024-07-23 01:51:25.122442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.334 [2024-07-23 01:51:25.122483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.334 qpair failed and we were unable to recover it.
00:30:12.334 [2024-07-23 01:51:25.122678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.334 [2024-07-23 01:51:25.122819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.334 [2024-07-23 01:51:25.122843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.334 qpair failed and we were unable to recover it.
00:30:12.334 [2024-07-23 01:51:25.122998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.334 [2024-07-23 01:51:25.123162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.334 [2024-07-23 01:51:25.123188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.334 qpair failed and we were unable to recover it.
00:30:12.334 [2024-07-23 01:51:25.123359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.334 [2024-07-23 01:51:25.123537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.334 [2024-07-23 01:51:25.123564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.334 qpair failed and we were unable to recover it.
00:30:12.334 [2024-07-23 01:51:25.123761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.334 [2024-07-23 01:51:25.123958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.334 [2024-07-23 01:51:25.123985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.334 qpair failed and we were unable to recover it.
00:30:12.334 [2024-07-23 01:51:25.124135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.334 [2024-07-23 01:51:25.124381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.334 [2024-07-23 01:51:25.124432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.334 qpair failed and we were unable to recover it.
00:30:12.334 [2024-07-23 01:51:25.124618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.334 [2024-07-23 01:51:25.124810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.334 [2024-07-23 01:51:25.124835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.334 qpair failed and we were unable to recover it.
00:30:12.334 [2024-07-23 01:51:25.125029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.334 [2024-07-23 01:51:25.125182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.334 [2024-07-23 01:51:25.125209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.334 qpair failed and we were unable to recover it.
00:30:12.334 [2024-07-23 01:51:25.125374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.334 [2024-07-23 01:51:25.125557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.334 [2024-07-23 01:51:25.125597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.334 qpair failed and we were unable to recover it.
00:30:12.334 [2024-07-23 01:51:25.125785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.334 [2024-07-23 01:51:25.125942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.335 [2024-07-23 01:51:25.126004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.335 qpair failed and we were unable to recover it.
00:30:12.335 [2024-07-23 01:51:25.126206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.335 [2024-07-23 01:51:25.126366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.335 [2024-07-23 01:51:25.126390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.335 qpair failed and we were unable to recover it.
00:30:12.335 [2024-07-23 01:51:25.126558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.335 [2024-07-23 01:51:25.126746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.335 [2024-07-23 01:51:25.126770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.335 qpair failed and we were unable to recover it.
00:30:12.335 [2024-07-23 01:51:25.126937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.335 [2024-07-23 01:51:25.127099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.335 [2024-07-23 01:51:25.127124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.335 qpair failed and we were unable to recover it.
00:30:12.335 [2024-07-23 01:51:25.127319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.335 [2024-07-23 01:51:25.127461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.335 [2024-07-23 01:51:25.127501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.335 qpair failed and we were unable to recover it.
00:30:12.335 [2024-07-23 01:51:25.127702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.335 [2024-07-23 01:51:25.127835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.335 [2024-07-23 01:51:25.127861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.335 qpair failed and we were unable to recover it.
00:30:12.335 [2024-07-23 01:51:25.128047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.335 [2024-07-23 01:51:25.128181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.335 [2024-07-23 01:51:25.128208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.335 qpair failed and we were unable to recover it.
00:30:12.335 [2024-07-23 01:51:25.128793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.335 [2024-07-23 01:51:25.129020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.335 [2024-07-23 01:51:25.129050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.335 qpair failed and we were unable to recover it.
00:30:12.335 [2024-07-23 01:51:25.129265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.335 [2024-07-23 01:51:25.129469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.335 [2024-07-23 01:51:25.129505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.335 qpair failed and we were unable to recover it.
00:30:12.335 [2024-07-23 01:51:25.129677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.335 [2024-07-23 01:51:25.129842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.335 [2024-07-23 01:51:25.129866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.335 qpair failed and we were unable to recover it.
00:30:12.335 [2024-07-23 01:51:25.130063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.335 [2024-07-23 01:51:25.130236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.335 [2024-07-23 01:51:25.130265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.335 qpair failed and we were unable to recover it.
00:30:12.335 [2024-07-23 01:51:25.130406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.335 [2024-07-23 01:51:25.130579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.335 [2024-07-23 01:51:25.130604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.335 qpair failed and we were unable to recover it.
00:30:12.335 [2024-07-23 01:51:25.130785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.335 [2024-07-23 01:51:25.130979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.335 [2024-07-23 01:51:25.131004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.335 qpair failed and we were unable to recover it.
00:30:12.335 [2024-07-23 01:51:25.131168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.335 [2024-07-23 01:51:25.131329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.335 [2024-07-23 01:51:25.131371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.335 qpair failed and we were unable to recover it.
00:30:12.335 [2024-07-23 01:51:25.131540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.335 [2024-07-23 01:51:25.131716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.335 [2024-07-23 01:51:25.131741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.335 qpair failed and we were unable to recover it.
00:30:12.335 [2024-07-23 01:51:25.131933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.335 [2024-07-23 01:51:25.132114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.335 [2024-07-23 01:51:25.132140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.335 qpair failed and we were unable to recover it.
00:30:12.335 [2024-07-23 01:51:25.132326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.335 [2024-07-23 01:51:25.132462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.335 [2024-07-23 01:51:25.132486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.335 qpair failed and we were unable to recover it.
00:30:12.335 [2024-07-23 01:51:25.132650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.335 [2024-07-23 01:51:25.132813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.335 [2024-07-23 01:51:25.132838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.335 qpair failed and we were unable to recover it.
00:30:12.335 [2024-07-23 01:51:25.133022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.335 [2024-07-23 01:51:25.133262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.335 [2024-07-23 01:51:25.133290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.335 qpair failed and we were unable to recover it.
00:30:12.335 [2024-07-23 01:51:25.133452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.335 [2024-07-23 01:51:25.133638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.335 [2024-07-23 01:51:25.133666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.335 qpair failed and we were unable to recover it.
00:30:12.335 [2024-07-23 01:51:25.133810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.335 [2024-07-23 01:51:25.133992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.335 [2024-07-23 01:51:25.134020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.335 qpair failed and we were unable to recover it.
00:30:12.335 [2024-07-23 01:51:25.134182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.335 [2024-07-23 01:51:25.134348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.335 [2024-07-23 01:51:25.134382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.335 qpair failed and we were unable to recover it.
00:30:12.335 [2024-07-23 01:51:25.134580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.335 [2024-07-23 01:51:25.134753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.335 [2024-07-23 01:51:25.134780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.335 qpair failed and we were unable to recover it.
00:30:12.335 [2024-07-23 01:51:25.134956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.335 [2024-07-23 01:51:25.135173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.335 [2024-07-23 01:51:25.135200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.335 qpair failed and we were unable to recover it.
00:30:12.335 [2024-07-23 01:51:25.135406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.335 [2024-07-23 01:51:25.135585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.335 [2024-07-23 01:51:25.135610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.335 qpair failed and we were unable to recover it.
00:30:12.335 [2024-07-23 01:51:25.135836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.335 [2024-07-23 01:51:25.136031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.335 [2024-07-23 01:51:25.136057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.335 qpair failed and we were unable to recover it.
00:30:12.335 [2024-07-23 01:51:25.136187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.335 [2024-07-23 01:51:25.136369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.335 [2024-07-23 01:51:25.136410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.335 qpair failed and we were unable to recover it.
00:30:12.335 [2024-07-23 01:51:25.136602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.335 [2024-07-23 01:51:25.136753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.335 [2024-07-23 01:51:25.136778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.335 qpair failed and we were unable to recover it.
00:30:12.336 [2024-07-23 01:51:25.136922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.336 [2024-07-23 01:51:25.137060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.336 [2024-07-23 01:51:25.137084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.336 qpair failed and we were unable to recover it.
00:30:12.336 [2024-07-23 01:51:25.137293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.336 [2024-07-23 01:51:25.137447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.336 [2024-07-23 01:51:25.137470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.336 qpair failed and we were unable to recover it.
00:30:12.336 [2024-07-23 01:51:25.137633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.336 [2024-07-23 01:51:25.137802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.336 [2024-07-23 01:51:25.137827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.336 qpair failed and we were unable to recover it.
00:30:12.336 [2024-07-23 01:51:25.137979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.336 [2024-07-23 01:51:25.138111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.336 [2024-07-23 01:51:25.138135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.336 qpair failed and we were unable to recover it.
00:30:12.336 [2024-07-23 01:51:25.138304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.336 [2024-07-23 01:51:25.138474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.336 [2024-07-23 01:51:25.138498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.336 qpair failed and we were unable to recover it.
00:30:12.336 [2024-07-23 01:51:25.138670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.336 [2024-07-23 01:51:25.138811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.336 [2024-07-23 01:51:25.138835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.336 qpair failed and we were unable to recover it.
00:30:12.336 [2024-07-23 01:51:25.139007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.336 [2024-07-23 01:51:25.139144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.336 [2024-07-23 01:51:25.139168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.336 qpair failed and we were unable to recover it.
00:30:12.336 [2024-07-23 01:51:25.139361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.336 [2024-07-23 01:51:25.139548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.336 [2024-07-23 01:51:25.139583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.336 qpair failed and we were unable to recover it.
00:30:12.336 [2024-07-23 01:51:25.139745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.336 [2024-07-23 01:51:25.139877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.336 [2024-07-23 01:51:25.139901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.336 qpair failed and we were unable to recover it.
00:30:12.336 [2024-07-23 01:51:25.140072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.336 [2024-07-23 01:51:25.140234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.336 [2024-07-23 01:51:25.140270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.336 qpair failed and we were unable to recover it.
00:30:12.336 [2024-07-23 01:51:25.140484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.336 [2024-07-23 01:51:25.140666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.336 [2024-07-23 01:51:25.140691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.336 qpair failed and we were unable to recover it.
00:30:12.336 [2024-07-23 01:51:25.140832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.336 [2024-07-23 01:51:25.141026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.336 [2024-07-23 01:51:25.141050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.336 qpair failed and we were unable to recover it.
00:30:12.336 [2024-07-23 01:51:25.141197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.336 [2024-07-23 01:51:25.141406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.336 [2024-07-23 01:51:25.141432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.336 qpair failed and we were unable to recover it.
00:30:12.336 [2024-07-23 01:51:25.141628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.336 [2024-07-23 01:51:25.141766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.336 [2024-07-23 01:51:25.141790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.336 qpair failed and we were unable to recover it.
00:30:12.336 [2024-07-23 01:51:25.141932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.336 [2024-07-23 01:51:25.142074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.336 [2024-07-23 01:51:25.142100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.336 qpair failed and we were unable to recover it.
00:30:12.336 [2024-07-23 01:51:25.142264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.336 [2024-07-23 01:51:25.142457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.336 [2024-07-23 01:51:25.142482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.336 qpair failed and we were unable to recover it.
00:30:12.336 [2024-07-23 01:51:25.142626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.336 [2024-07-23 01:51:25.142795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.336 [2024-07-23 01:51:25.142819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.336 qpair failed and we were unable to recover it.
00:30:12.336 [2024-07-23 01:51:25.142960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.336 [2024-07-23 01:51:25.143124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.336 [2024-07-23 01:51:25.143147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.336 qpair failed and we were unable to recover it.
00:30:12.336 [2024-07-23 01:51:25.143341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.336 [2024-07-23 01:51:25.143511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.336 [2024-07-23 01:51:25.143538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.336 qpair failed and we were unable to recover it.
00:30:12.336 [2024-07-23 01:51:25.143737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.336 [2024-07-23 01:51:25.143921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.336 [2024-07-23 01:51:25.143950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.336 qpair failed and we were unable to recover it.
00:30:12.336 [2024-07-23 01:51:25.144277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.336 [2024-07-23 01:51:25.144513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.336 [2024-07-23 01:51:25.144540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.336 qpair failed and we were unable to recover it.
00:30:12.336 [2024-07-23 01:51:25.144708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.336 [2024-07-23 01:51:25.144876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.336 [2024-07-23 01:51:25.144900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.336 qpair failed and we were unable to recover it.
00:30:12.336 [2024-07-23 01:51:25.145061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.336 [2024-07-23 01:51:25.145248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.336 [2024-07-23 01:51:25.145274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.336 qpair failed and we were unable to recover it.
00:30:12.336 [2024-07-23 01:51:25.145446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.336 [2024-07-23 01:51:25.145633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.336 [2024-07-23 01:51:25.145660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.336 qpair failed and we were unable to recover it.
00:30:12.336 [2024-07-23 01:51:25.145836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.336 [2024-07-23 01:51:25.145977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.336 [2024-07-23 01:51:25.146001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.336 qpair failed and we were unable to recover it.
00:30:12.336 [2024-07-23 01:51:25.146215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.336 [2024-07-23 01:51:25.146452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.336 [2024-07-23 01:51:25.146496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.336 qpair failed and we were unable to recover it.
00:30:12.336 [2024-07-23 01:51:25.146688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.336 [2024-07-23 01:51:25.146851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.336 [2024-07-23 01:51:25.146875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.336 qpair failed and we were unable to recover it.
00:30:12.336 [2024-07-23 01:51:25.147016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.336 [2024-07-23 01:51:25.147160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.337 [2024-07-23 01:51:25.147186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.337 qpair failed and we were unable to recover it.
00:30:12.337 [2024-07-23 01:51:25.147320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.337 [2024-07-23 01:51:25.147487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.337 [2024-07-23 01:51:25.147528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.337 qpair failed and we were unable to recover it.
00:30:12.337 [2024-07-23 01:51:25.147726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.337 [2024-07-23 01:51:25.147890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.337 [2024-07-23 01:51:25.147931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.337 qpair failed and we were unable to recover it.
00:30:12.337 [2024-07-23 01:51:25.148113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.337 [2024-07-23 01:51:25.148303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.337 [2024-07-23 01:51:25.148353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.337 qpair failed and we were unable to recover it.
00:30:12.337 [2024-07-23 01:51:25.148539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.337 [2024-07-23 01:51:25.148714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.337 [2024-07-23 01:51:25.148738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.337 qpair failed and we were unable to recover it.
00:30:12.337 [2024-07-23 01:51:25.148909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.337 [2024-07-23 01:51:25.149123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.337 [2024-07-23 01:51:25.149149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.337 qpair failed and we were unable to recover it.
00:30:12.337 [2024-07-23 01:51:25.149301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.337 [2024-07-23 01:51:25.149540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.337 [2024-07-23 01:51:25.149573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.337 qpair failed and we were unable to recover it.
00:30:12.337 [2024-07-23 01:51:25.149747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.337 [2024-07-23 01:51:25.149890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.337 [2024-07-23 01:51:25.149916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.337 qpair failed and we were unable to recover it.
00:30:12.337 [2024-07-23 01:51:25.150075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.337 [2024-07-23 01:51:25.150235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.337 [2024-07-23 01:51:25.150259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.337 qpair failed and we were unable to recover it.
00:30:12.337 [2024-07-23 01:51:25.150451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.337 [2024-07-23 01:51:25.150632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.337 [2024-07-23 01:51:25.150680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.337 qpair failed and we were unable to recover it.
00:30:12.337 [2024-07-23 01:51:25.150826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.337 [2024-07-23 01:51:25.150993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.337 [2024-07-23 01:51:25.151017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.337 qpair failed and we were unable to recover it.
00:30:12.337 [2024-07-23 01:51:25.151179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.337 [2024-07-23 01:51:25.151345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.337 [2024-07-23 01:51:25.151369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.337 qpair failed and we were unable to recover it. 00:30:12.337 [2024-07-23 01:51:25.151528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.337 [2024-07-23 01:51:25.151673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.337 [2024-07-23 01:51:25.151699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.337 qpair failed and we were unable to recover it. 00:30:12.337 [2024-07-23 01:51:25.151863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.337 [2024-07-23 01:51:25.152030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.337 [2024-07-23 01:51:25.152069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.337 qpair failed and we were unable to recover it. 00:30:12.337 [2024-07-23 01:51:25.152254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.337 [2024-07-23 01:51:25.152393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.337 [2024-07-23 01:51:25.152417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.337 qpair failed and we were unable to recover it. 
00:30:12.337 [2024-07-23 01:51:25.152583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.337 [2024-07-23 01:51:25.152773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.337 [2024-07-23 01:51:25.152798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.337 qpair failed and we were unable to recover it. 00:30:12.337 [2024-07-23 01:51:25.152938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.337 [2024-07-23 01:51:25.153099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.337 [2024-07-23 01:51:25.153124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.337 qpair failed and we were unable to recover it. 00:30:12.337 [2024-07-23 01:51:25.153264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.337 [2024-07-23 01:51:25.153450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.337 [2024-07-23 01:51:25.153491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.337 qpair failed and we were unable to recover it. 00:30:12.337 [2024-07-23 01:51:25.153682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.337 [2024-07-23 01:51:25.153850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.337 [2024-07-23 01:51:25.153874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.337 qpair failed and we were unable to recover it. 
00:30:12.337 [2024-07-23 01:51:25.154024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.337 [2024-07-23 01:51:25.154229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.337 [2024-07-23 01:51:25.154255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.337 qpair failed and we were unable to recover it. 00:30:12.337 [2024-07-23 01:51:25.154454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.337 [2024-07-23 01:51:25.154637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.337 [2024-07-23 01:51:25.154690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.337 qpair failed and we were unable to recover it. 00:30:12.337 [2024-07-23 01:51:25.154861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.337 [2024-07-23 01:51:25.155029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.337 [2024-07-23 01:51:25.155056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.337 qpair failed and we were unable to recover it. 00:30:12.337 [2024-07-23 01:51:25.155241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.337 [2024-07-23 01:51:25.155429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.337 [2024-07-23 01:51:25.155454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.337 qpair failed and we were unable to recover it. 
00:30:12.337 [2024-07-23 01:51:25.155596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.337 [2024-07-23 01:51:25.155799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.337 [2024-07-23 01:51:25.155827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.337 qpair failed and we were unable to recover it. 00:30:12.337 [2024-07-23 01:51:25.156004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.337 [2024-07-23 01:51:25.156209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.337 [2024-07-23 01:51:25.156236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.337 qpair failed and we were unable to recover it. 00:30:12.337 [2024-07-23 01:51:25.156450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.337 [2024-07-23 01:51:25.156637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.337 [2024-07-23 01:51:25.156673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.337 qpair failed and we were unable to recover it. 00:30:12.337 [2024-07-23 01:51:25.156868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.337 [2024-07-23 01:51:25.157043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.337 [2024-07-23 01:51:25.157087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.337 qpair failed and we were unable to recover it. 
00:30:12.337 [2024-07-23 01:51:25.157248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.337 [2024-07-23 01:51:25.157426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.337 [2024-07-23 01:51:25.157453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.337 qpair failed and we were unable to recover it. 00:30:12.337 [2024-07-23 01:51:25.157652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.337 [2024-07-23 01:51:25.157812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.338 [2024-07-23 01:51:25.157837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.338 qpair failed and we were unable to recover it. 00:30:12.338 [2024-07-23 01:51:25.158001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.338 [2024-07-23 01:51:25.158137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.338 [2024-07-23 01:51:25.158160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.338 qpair failed and we were unable to recover it. 00:30:12.338 [2024-07-23 01:51:25.158322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.338 [2024-07-23 01:51:25.158500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.338 [2024-07-23 01:51:25.158526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.338 qpair failed and we were unable to recover it. 
00:30:12.338 [2024-07-23 01:51:25.158667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.338 [2024-07-23 01:51:25.158890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.338 [2024-07-23 01:51:25.158923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.338 qpair failed and we were unable to recover it. 00:30:12.338 [2024-07-23 01:51:25.159138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.338 [2024-07-23 01:51:25.159300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.338 [2024-07-23 01:51:25.159324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.338 qpair failed and we were unable to recover it. 00:30:12.338 [2024-07-23 01:51:25.159502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.338 [2024-07-23 01:51:25.159683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.338 [2024-07-23 01:51:25.159708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.338 qpair failed and we were unable to recover it. 00:30:12.338 [2024-07-23 01:51:25.159871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.338 [2024-07-23 01:51:25.160067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.338 [2024-07-23 01:51:25.160094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.338 qpair failed and we were unable to recover it. 
00:30:12.338 [2024-07-23 01:51:25.160240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.338 [2024-07-23 01:51:25.160407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.338 [2024-07-23 01:51:25.160432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.338 qpair failed and we were unable to recover it. 00:30:12.338 [2024-07-23 01:51:25.160594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.338 [2024-07-23 01:51:25.160744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.338 [2024-07-23 01:51:25.160769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.338 qpair failed and we were unable to recover it. 00:30:12.338 [2024-07-23 01:51:25.160911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.338 [2024-07-23 01:51:25.161110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.338 [2024-07-23 01:51:25.161134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.338 qpair failed and we were unable to recover it. 00:30:12.338 [2024-07-23 01:51:25.161300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.338 [2024-07-23 01:51:25.161483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.338 [2024-07-23 01:51:25.161510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.338 qpair failed and we were unable to recover it. 
00:30:12.338 [2024-07-23 01:51:25.161687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.338 [2024-07-23 01:51:25.161834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.338 [2024-07-23 01:51:25.161861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.338 qpair failed and we were unable to recover it. 00:30:12.338 [2024-07-23 01:51:25.162029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.338 [2024-07-23 01:51:25.162223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.338 [2024-07-23 01:51:25.162263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.338 qpair failed and we were unable to recover it. 00:30:12.338 [2024-07-23 01:51:25.162442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.338 [2024-07-23 01:51:25.162657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.338 [2024-07-23 01:51:25.162685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.338 qpair failed and we were unable to recover it. 00:30:12.338 [2024-07-23 01:51:25.162899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.338 [2024-07-23 01:51:25.163044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.338 [2024-07-23 01:51:25.163068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.338 qpair failed and we were unable to recover it. 
00:30:12.338 [2024-07-23 01:51:25.163270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.338 [2024-07-23 01:51:25.163475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.338 [2024-07-23 01:51:25.163499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.338 qpair failed and we were unable to recover it. 00:30:12.338 [2024-07-23 01:51:25.163642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.338 [2024-07-23 01:51:25.163811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.338 [2024-07-23 01:51:25.163834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.338 qpair failed and we were unable to recover it. 00:30:12.338 [2024-07-23 01:51:25.163997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.338 [2024-07-23 01:51:25.164270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.338 [2024-07-23 01:51:25.164321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.338 qpair failed and we were unable to recover it. 00:30:12.338 [2024-07-23 01:51:25.164511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.338 [2024-07-23 01:51:25.164695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.338 [2024-07-23 01:51:25.164724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.338 qpair failed and we were unable to recover it. 
00:30:12.338 [2024-07-23 01:51:25.164879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.338 [2024-07-23 01:51:25.165033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.338 [2024-07-23 01:51:25.165062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.338 qpair failed and we were unable to recover it. 00:30:12.338 [2024-07-23 01:51:25.165242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.338 [2024-07-23 01:51:25.165382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.338 [2024-07-23 01:51:25.165425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.338 qpair failed and we were unable to recover it. 00:30:12.338 [2024-07-23 01:51:25.165588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.338 [2024-07-23 01:51:25.165754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.338 [2024-07-23 01:51:25.165794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.338 qpair failed and we were unable to recover it. 00:30:12.338 [2024-07-23 01:51:25.165996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.338 [2024-07-23 01:51:25.166161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.338 [2024-07-23 01:51:25.166186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.338 qpair failed and we were unable to recover it. 
00:30:12.338 [2024-07-23 01:51:25.166398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.338 [2024-07-23 01:51:25.166580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.338 [2024-07-23 01:51:25.166606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.338 qpair failed and we were unable to recover it. 00:30:12.338 [2024-07-23 01:51:25.166811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.338 [2024-07-23 01:51:25.166948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.338 [2024-07-23 01:51:25.166972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.338 qpair failed and we were unable to recover it. 00:30:12.338 [2024-07-23 01:51:25.167116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.338 [2024-07-23 01:51:25.167279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.338 [2024-07-23 01:51:25.167306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.338 qpair failed and we were unable to recover it. 00:30:12.338 [2024-07-23 01:51:25.167501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.338 [2024-07-23 01:51:25.167658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.338 [2024-07-23 01:51:25.167685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.338 qpair failed and we were unable to recover it. 
00:30:12.338 [2024-07-23 01:51:25.167859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.338 [2024-07-23 01:51:25.168034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.338 [2024-07-23 01:51:25.168058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.338 qpair failed and we were unable to recover it. 00:30:12.338 [2024-07-23 01:51:25.168195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.339 [2024-07-23 01:51:25.168363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.339 [2024-07-23 01:51:25.168387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.339 qpair failed and we were unable to recover it. 00:30:12.339 [2024-07-23 01:51:25.168542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.339 [2024-07-23 01:51:25.168675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.339 [2024-07-23 01:51:25.168704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.339 qpair failed and we were unable to recover it. 00:30:12.339 [2024-07-23 01:51:25.168874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.339 [2024-07-23 01:51:25.169052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.339 [2024-07-23 01:51:25.169085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.339 qpair failed and we were unable to recover it. 
00:30:12.339 [2024-07-23 01:51:25.169264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.339 [2024-07-23 01:51:25.169444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.339 [2024-07-23 01:51:25.169470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.339 qpair failed and we were unable to recover it. 00:30:12.339 [2024-07-23 01:51:25.169655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.339 [2024-07-23 01:51:25.169798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.339 [2024-07-23 01:51:25.169822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.339 qpair failed and we were unable to recover it. 00:30:12.339 [2024-07-23 01:51:25.169985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.339 [2024-07-23 01:51:25.170164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.339 [2024-07-23 01:51:25.170190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.339 qpair failed and we were unable to recover it. 00:30:12.339 [2024-07-23 01:51:25.170395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.339 [2024-07-23 01:51:25.170575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.339 [2024-07-23 01:51:25.170602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.339 qpair failed and we were unable to recover it. 
00:30:12.339 [2024-07-23 01:51:25.170820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.339 [2024-07-23 01:51:25.171007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.339 [2024-07-23 01:51:25.171031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.339 qpair failed and we were unable to recover it. 00:30:12.339 [2024-07-23 01:51:25.171201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.339 [2024-07-23 01:51:25.171365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.339 [2024-07-23 01:51:25.171389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.339 qpair failed and we were unable to recover it. 00:30:12.339 [2024-07-23 01:51:25.171557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.339 [2024-07-23 01:51:25.171745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.339 [2024-07-23 01:51:25.171769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.339 qpair failed and we were unable to recover it. 00:30:12.339 [2024-07-23 01:51:25.171913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.339 [2024-07-23 01:51:25.172093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.339 [2024-07-23 01:51:25.172120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.339 qpair failed and we were unable to recover it. 
00:30:12.339 [2024-07-23 01:51:25.172307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.339 [2024-07-23 01:51:25.172442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.339 [2024-07-23 01:51:25.172466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.339 qpair failed and we were unable to recover it. 00:30:12.339 [2024-07-23 01:51:25.172636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.339 [2024-07-23 01:51:25.172848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.339 [2024-07-23 01:51:25.172875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.339 qpair failed and we were unable to recover it. 00:30:12.339 [2024-07-23 01:51:25.173171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.339 [2024-07-23 01:51:25.173459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.339 [2024-07-23 01:51:25.173483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.339 qpair failed and we were unable to recover it. 00:30:12.339 [2024-07-23 01:51:25.173630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.339 [2024-07-23 01:51:25.173769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.339 [2024-07-23 01:51:25.173794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.339 qpair failed and we were unable to recover it. 
00:30:12.342 [2024-07-23 01:51:25.208078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.342 [2024-07-23 01:51:25.208317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.342 [2024-07-23 01:51:25.208363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.342 qpair failed and we were unable to recover it. 00:30:12.342 [2024-07-23 01:51:25.208547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.342 [2024-07-23 01:51:25.208714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.342 [2024-07-23 01:51:25.208739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.342 qpair failed and we were unable to recover it. 00:30:12.342 [2024-07-23 01:51:25.208911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.342 [2024-07-23 01:51:25.209101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.342 [2024-07-23 01:51:25.209141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.342 qpair failed and we were unable to recover it. 00:30:12.342 [2024-07-23 01:51:25.209317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.342 [2024-07-23 01:51:25.209529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.342 [2024-07-23 01:51:25.209554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.342 qpair failed and we were unable to recover it. 
00:30:12.342 [2024-07-23 01:51:25.209695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.342 [2024-07-23 01:51:25.209882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.342 [2024-07-23 01:51:25.209925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.342 qpair failed and we were unable to recover it. 00:30:12.342 [2024-07-23 01:51:25.210089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.342 [2024-07-23 01:51:25.210304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.342 [2024-07-23 01:51:25.210367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.342 qpair failed and we were unable to recover it. 00:30:12.342 [2024-07-23 01:51:25.210587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.342 [2024-07-23 01:51:25.210776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.342 [2024-07-23 01:51:25.210805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.342 qpair failed and we were unable to recover it. 00:30:12.342 [2024-07-23 01:51:25.210996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.343 [2024-07-23 01:51:25.211159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.343 [2024-07-23 01:51:25.211184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.343 qpair failed and we were unable to recover it. 
00:30:12.343 [2024-07-23 01:51:25.211390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.343 [2024-07-23 01:51:25.211598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.343 [2024-07-23 01:51:25.211628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.343 qpair failed and we were unable to recover it. 00:30:12.343 [2024-07-23 01:51:25.211822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.343 [2024-07-23 01:51:25.211999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.343 [2024-07-23 01:51:25.212043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.343 qpair failed and we were unable to recover it. 00:30:12.343 [2024-07-23 01:51:25.212229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.343 [2024-07-23 01:51:25.212383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.343 [2024-07-23 01:51:25.212410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.343 qpair failed and we were unable to recover it. 00:30:12.343 [2024-07-23 01:51:25.212564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.343 [2024-07-23 01:51:25.212759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.343 [2024-07-23 01:51:25.212784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.343 qpair failed and we were unable to recover it. 
00:30:12.343 [2024-07-23 01:51:25.212945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.343 [2024-07-23 01:51:25.213159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.343 [2024-07-23 01:51:25.213186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.343 qpair failed and we were unable to recover it. 00:30:12.343 [2024-07-23 01:51:25.213348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.343 [2024-07-23 01:51:25.213532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.343 [2024-07-23 01:51:25.213561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.343 qpair failed and we were unable to recover it. 00:30:12.343 [2024-07-23 01:51:25.213758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.343 [2024-07-23 01:51:25.213898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.343 [2024-07-23 01:51:25.213923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.343 qpair failed and we were unable to recover it. 00:30:12.343 [2024-07-23 01:51:25.214091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.343 [2024-07-23 01:51:25.214245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.343 [2024-07-23 01:51:25.214272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.343 qpair failed and we were unable to recover it. 
00:30:12.343 [2024-07-23 01:51:25.214463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.343 [2024-07-23 01:51:25.214606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.343 [2024-07-23 01:51:25.214636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.343 qpair failed and we were unable to recover it. 00:30:12.343 [2024-07-23 01:51:25.214772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.343 [2024-07-23 01:51:25.214911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.343 [2024-07-23 01:51:25.214937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.343 qpair failed and we were unable to recover it. 00:30:12.343 [2024-07-23 01:51:25.215114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.343 [2024-07-23 01:51:25.215294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.343 [2024-07-23 01:51:25.215321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.343 qpair failed and we were unable to recover it. 00:30:12.343 [2024-07-23 01:51:25.215512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.343 [2024-07-23 01:51:25.215680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.343 [2024-07-23 01:51:25.215723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.343 qpair failed and we were unable to recover it. 
00:30:12.343 [2024-07-23 01:51:25.215897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.343 [2024-07-23 01:51:25.216087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.343 [2024-07-23 01:51:25.216133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.343 qpair failed and we were unable to recover it. 00:30:12.343 [2024-07-23 01:51:25.216350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.343 [2024-07-23 01:51:25.216554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.343 [2024-07-23 01:51:25.216581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.343 qpair failed and we were unable to recover it. 00:30:12.343 [2024-07-23 01:51:25.216764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.343 [2024-07-23 01:51:25.216947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.343 [2024-07-23 01:51:25.216992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.343 qpair failed and we were unable to recover it. 00:30:12.343 [2024-07-23 01:51:25.217154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.343 [2024-07-23 01:51:25.217333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.343 [2024-07-23 01:51:25.217358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.343 qpair failed and we were unable to recover it. 
00:30:12.343 [2024-07-23 01:51:25.217539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.343 [2024-07-23 01:51:25.217722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.343 [2024-07-23 01:51:25.217750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.343 qpair failed and we were unable to recover it. 00:30:12.343 [2024-07-23 01:51:25.217938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.343 [2024-07-23 01:51:25.218149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.343 [2024-07-23 01:51:25.218177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.343 qpair failed and we were unable to recover it. 00:30:12.343 [2024-07-23 01:51:25.218401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.343 [2024-07-23 01:51:25.218590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.343 [2024-07-23 01:51:25.218626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.343 qpair failed and we were unable to recover it. 00:30:12.343 [2024-07-23 01:51:25.218817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.343 [2024-07-23 01:51:25.218979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.343 [2024-07-23 01:51:25.219004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.343 qpair failed and we were unable to recover it. 
00:30:12.343 [2024-07-23 01:51:25.219163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.343 [2024-07-23 01:51:25.219369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.343 [2024-07-23 01:51:25.219395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.343 qpair failed and we were unable to recover it. 00:30:12.343 [2024-07-23 01:51:25.219560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.343 [2024-07-23 01:51:25.219757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.343 [2024-07-23 01:51:25.219785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.343 qpair failed and we were unable to recover it. 00:30:12.343 [2024-07-23 01:51:25.219959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.343 [2024-07-23 01:51:25.220225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.343 [2024-07-23 01:51:25.220250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.343 qpair failed and we were unable to recover it. 00:30:12.343 [2024-07-23 01:51:25.220432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.343 [2024-07-23 01:51:25.220619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.343 [2024-07-23 01:51:25.220646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.343 qpair failed and we were unable to recover it. 
00:30:12.343 [2024-07-23 01:51:25.220808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.343 [2024-07-23 01:51:25.220990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.343 [2024-07-23 01:51:25.221017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.343 qpair failed and we were unable to recover it. 00:30:12.343 [2024-07-23 01:51:25.221206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.343 [2024-07-23 01:51:25.221423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.343 [2024-07-23 01:51:25.221450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.344 qpair failed and we were unable to recover it. 00:30:12.344 [2024-07-23 01:51:25.221651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.344 [2024-07-23 01:51:25.221816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.344 [2024-07-23 01:51:25.221840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.344 qpair failed and we were unable to recover it. 00:30:12.344 [2024-07-23 01:51:25.222009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.344 [2024-07-23 01:51:25.222198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.344 [2024-07-23 01:51:25.222222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.344 qpair failed and we were unable to recover it. 
00:30:12.344 [2024-07-23 01:51:25.222429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.344 [2024-07-23 01:51:25.222610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.344 [2024-07-23 01:51:25.222643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.344 qpair failed and we were unable to recover it. 00:30:12.344 [2024-07-23 01:51:25.222812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.344 [2024-07-23 01:51:25.222989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.344 [2024-07-23 01:51:25.223015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.344 qpair failed and we were unable to recover it. 00:30:12.344 [2024-07-23 01:51:25.223189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.344 [2024-07-23 01:51:25.223396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.344 [2024-07-23 01:51:25.223442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.344 qpair failed and we were unable to recover it. 00:30:12.344 [2024-07-23 01:51:25.223627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.344 [2024-07-23 01:51:25.223818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.344 [2024-07-23 01:51:25.223842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.344 qpair failed and we were unable to recover it. 
00:30:12.344 [2024-07-23 01:51:25.224011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.344 [2024-07-23 01:51:25.224191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.344 [2024-07-23 01:51:25.224218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.344 qpair failed and we were unable to recover it. 00:30:12.344 [2024-07-23 01:51:25.224442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.344 [2024-07-23 01:51:25.224602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.344 [2024-07-23 01:51:25.224635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.344 qpair failed and we were unable to recover it. 00:30:12.344 [2024-07-23 01:51:25.224818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.344 [2024-07-23 01:51:25.225038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.344 [2024-07-23 01:51:25.225082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.344 qpair failed and we were unable to recover it. 00:30:12.344 [2024-07-23 01:51:25.225265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.344 [2024-07-23 01:51:25.225453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.344 [2024-07-23 01:51:25.225477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.344 qpair failed and we were unable to recover it. 
00:30:12.344 [2024-07-23 01:51:25.225695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.344 [2024-07-23 01:51:25.225845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.344 [2024-07-23 01:51:25.225872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.344 qpair failed and we were unable to recover it. 00:30:12.344 [2024-07-23 01:51:25.226069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.344 [2024-07-23 01:51:25.226251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.344 [2024-07-23 01:51:25.226283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.344 qpair failed and we were unable to recover it. 00:30:12.344 [2024-07-23 01:51:25.226467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.344 [2024-07-23 01:51:25.226692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.344 [2024-07-23 01:51:25.226717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.344 qpair failed and we were unable to recover it. 00:30:12.344 [2024-07-23 01:51:25.226851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.344 [2024-07-23 01:51:25.226994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.344 [2024-07-23 01:51:25.227019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.344 qpair failed and we were unable to recover it. 
00:30:12.344 [2024-07-23 01:51:25.227182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.344 [2024-07-23 01:51:25.227366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.344 [2024-07-23 01:51:25.227393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.344 qpair failed and we were unable to recover it. 00:30:12.344 [2024-07-23 01:51:25.227557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.344 [2024-07-23 01:51:25.227733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.344 [2024-07-23 01:51:25.227759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.344 qpair failed and we were unable to recover it. 00:30:12.344 [2024-07-23 01:51:25.227896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.344 [2024-07-23 01:51:25.228063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.344 [2024-07-23 01:51:25.228087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.344 qpair failed and we were unable to recover it. 00:30:12.344 [2024-07-23 01:51:25.228262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.344 [2024-07-23 01:51:25.228393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.344 [2024-07-23 01:51:25.228418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.344 qpair failed and we were unable to recover it. 
00:30:12.344 [2024-07-23 01:51:25.228584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.344 [2024-07-23 01:51:25.228772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.344 [2024-07-23 01:51:25.228796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.344 qpair failed and we were unable to recover it. 00:30:12.344 [2024-07-23 01:51:25.228962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.344 [2024-07-23 01:51:25.229096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.344 [2024-07-23 01:51:25.229139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.344 qpair failed and we were unable to recover it. 00:30:12.344 [2024-07-23 01:51:25.229351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.344 [2024-07-23 01:51:25.229582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.344 [2024-07-23 01:51:25.229607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.344 qpair failed and we were unable to recover it. 00:30:12.344 [2024-07-23 01:51:25.229793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.344 [2024-07-23 01:51:25.229950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.344 [2024-07-23 01:51:25.229979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.344 qpair failed and we were unable to recover it. 
00:30:12.344 [2024-07-23 01:51:25.230142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.344 [2024-07-23 01:51:25.230334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.344 [2024-07-23 01:51:25.230359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.344 qpair failed and we were unable to recover it. 00:30:12.344 [2024-07-23 01:51:25.230552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.344 [2024-07-23 01:51:25.230756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.344 [2024-07-23 01:51:25.230780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.344 qpair failed and we were unable to recover it. 00:30:12.344 [2024-07-23 01:51:25.230970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.344 [2024-07-23 01:51:25.231176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.345 [2024-07-23 01:51:25.231203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.345 qpair failed and we were unable to recover it. 00:30:12.345 [2024-07-23 01:51:25.231381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.345 [2024-07-23 01:51:25.231598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.345 [2024-07-23 01:51:25.231627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.345 qpair failed and we were unable to recover it. 
[... the same three-line error sequence (posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously with advancing timestamps through [2024-07-23 01:51:25.265752] ...]
00:30:12.348 [2024-07-23 01:51:25.265942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.348 [2024-07-23 01:51:25.266135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.348 [2024-07-23 01:51:25.266159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.348 qpair failed and we were unable to recover it. 00:30:12.348 [2024-07-23 01:51:25.266301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.348 [2024-07-23 01:51:25.266477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.348 [2024-07-23 01:51:25.266504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.348 qpair failed and we were unable to recover it. 00:30:12.348 [2024-07-23 01:51:25.266697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.348 [2024-07-23 01:51:25.266863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.348 [2024-07-23 01:51:25.266888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.348 qpair failed and we were unable to recover it. 00:30:12.348 [2024-07-23 01:51:25.267106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.348 [2024-07-23 01:51:25.267287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.348 [2024-07-23 01:51:25.267315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.348 qpair failed and we were unable to recover it. 
00:30:12.348 [2024-07-23 01:51:25.267492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.348 [2024-07-23 01:51:25.267692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.348 [2024-07-23 01:51:25.267738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.348 qpair failed and we were unable to recover it. 00:30:12.348 [2024-07-23 01:51:25.267924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.348 [2024-07-23 01:51:25.268060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.348 [2024-07-23 01:51:25.268098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.348 qpair failed and we were unable to recover it. 00:30:12.348 [2024-07-23 01:51:25.268301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.348 [2024-07-23 01:51:25.268488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.348 [2024-07-23 01:51:25.268516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.348 qpair failed and we were unable to recover it. 00:30:12.348 [2024-07-23 01:51:25.268712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.348 [2024-07-23 01:51:25.268861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.348 [2024-07-23 01:51:25.268885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.348 qpair failed and we were unable to recover it. 
00:30:12.348 [2024-07-23 01:51:25.269077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.348 [2024-07-23 01:51:25.269357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.348 [2024-07-23 01:51:25.269412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.348 qpair failed and we were unable to recover it. 00:30:12.348 [2024-07-23 01:51:25.269593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.348 [2024-07-23 01:51:25.269820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.348 [2024-07-23 01:51:25.269847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.348 qpair failed and we were unable to recover it. 00:30:12.348 [2024-07-23 01:51:25.270062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.348 [2024-07-23 01:51:25.270316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.348 [2024-07-23 01:51:25.270366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.348 qpair failed and we were unable to recover it. 00:30:12.348 [2024-07-23 01:51:25.270576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.348 [2024-07-23 01:51:25.270743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.348 [2024-07-23 01:51:25.270771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.348 qpair failed and we were unable to recover it. 
00:30:12.348 [2024-07-23 01:51:25.270953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.348 [2024-07-23 01:51:25.271146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.348 [2024-07-23 01:51:25.271170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.348 qpair failed and we were unable to recover it. 00:30:12.348 [2024-07-23 01:51:25.271332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.348 [2024-07-23 01:51:25.271488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.348 [2024-07-23 01:51:25.271514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.348 qpair failed and we were unable to recover it. 00:30:12.348 [2024-07-23 01:51:25.271685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.348 [2024-07-23 01:51:25.271870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.348 [2024-07-23 01:51:25.271894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.348 qpair failed and we were unable to recover it. 00:30:12.348 [2024-07-23 01:51:25.272052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.348 [2024-07-23 01:51:25.272191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.348 [2024-07-23 01:51:25.272232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.348 qpair failed and we were unable to recover it. 
00:30:12.348 [2024-07-23 01:51:25.272422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.348 [2024-07-23 01:51:25.272566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.348 [2024-07-23 01:51:25.272593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.348 qpair failed and we were unable to recover it. 00:30:12.348 [2024-07-23 01:51:25.272759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.348 [2024-07-23 01:51:25.272922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.348 [2024-07-23 01:51:25.272962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.348 qpair failed and we were unable to recover it. 00:30:12.348 [2024-07-23 01:51:25.273148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.348 [2024-07-23 01:51:25.273387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.348 [2024-07-23 01:51:25.273414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.348 qpair failed and we were unable to recover it. 00:30:12.348 [2024-07-23 01:51:25.273594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.348 [2024-07-23 01:51:25.273817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.348 [2024-07-23 01:51:25.273842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.348 qpair failed and we were unable to recover it. 
00:30:12.348 [2024-07-23 01:51:25.274011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.348 [2024-07-23 01:51:25.274178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.348 [2024-07-23 01:51:25.274218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.348 qpair failed and we were unable to recover it. 00:30:12.348 [2024-07-23 01:51:25.274406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.348 [2024-07-23 01:51:25.274589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.349 [2024-07-23 01:51:25.274622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.349 qpair failed and we were unable to recover it. 00:30:12.349 [2024-07-23 01:51:25.274786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.349 [2024-07-23 01:51:25.274987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.349 [2024-07-23 01:51:25.275032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.349 qpair failed and we were unable to recover it. 00:30:12.349 [2024-07-23 01:51:25.275220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.349 [2024-07-23 01:51:25.275421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.349 [2024-07-23 01:51:25.275465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.349 qpair failed and we were unable to recover it. 
00:30:12.349 [2024-07-23 01:51:25.275665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.349 [2024-07-23 01:51:25.275913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.349 [2024-07-23 01:51:25.275968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.349 qpair failed and we were unable to recover it. 00:30:12.349 [2024-07-23 01:51:25.276127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.349 [2024-07-23 01:51:25.276289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.349 [2024-07-23 01:51:25.276329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.349 qpair failed and we were unable to recover it. 00:30:12.349 [2024-07-23 01:51:25.276543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.349 [2024-07-23 01:51:25.276769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.349 [2024-07-23 01:51:25.276796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.349 qpair failed and we were unable to recover it. 00:30:12.349 [2024-07-23 01:51:25.276950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.349 [2024-07-23 01:51:25.277131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.349 [2024-07-23 01:51:25.277158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.349 qpair failed and we were unable to recover it. 
00:30:12.349 [2024-07-23 01:51:25.277341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.349 [2024-07-23 01:51:25.277530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.349 [2024-07-23 01:51:25.277553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.349 qpair failed and we were unable to recover it. 00:30:12.349 [2024-07-23 01:51:25.277753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.349 [2024-07-23 01:51:25.277948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.349 [2024-07-23 01:51:25.277975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.349 qpair failed and we were unable to recover it. 00:30:12.349 [2024-07-23 01:51:25.278227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.349 [2024-07-23 01:51:25.278389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.349 [2024-07-23 01:51:25.278413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.349 qpair failed and we were unable to recover it. 00:30:12.349 [2024-07-23 01:51:25.278575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.349 [2024-07-23 01:51:25.278714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.349 [2024-07-23 01:51:25.278738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.349 qpair failed and we were unable to recover it. 
00:30:12.349 [2024-07-23 01:51:25.278894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.349 [2024-07-23 01:51:25.279099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.349 [2024-07-23 01:51:25.279125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.349 qpair failed and we were unable to recover it. 00:30:12.349 [2024-07-23 01:51:25.279313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.349 [2024-07-23 01:51:25.279453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.349 [2024-07-23 01:51:25.279494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.349 qpair failed and we were unable to recover it. 00:30:12.349 [2024-07-23 01:51:25.279688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.349 [2024-07-23 01:51:25.279853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.349 [2024-07-23 01:51:25.279877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.349 qpair failed and we were unable to recover it. 00:30:12.349 [2024-07-23 01:51:25.280096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.349 [2024-07-23 01:51:25.280315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.349 [2024-07-23 01:51:25.280340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.349 qpair failed and we were unable to recover it. 
00:30:12.349 [2024-07-23 01:51:25.280499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.349 [2024-07-23 01:51:25.280670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.349 [2024-07-23 01:51:25.280695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.349 qpair failed and we were unable to recover it. 00:30:12.349 [2024-07-23 01:51:25.280864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.349 [2024-07-23 01:51:25.281127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.349 [2024-07-23 01:51:25.281180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.349 qpair failed and we were unable to recover it. 00:30:12.349 [2024-07-23 01:51:25.281389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.349 [2024-07-23 01:51:25.281567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.349 [2024-07-23 01:51:25.281594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.349 qpair failed and we were unable to recover it. 00:30:12.349 [2024-07-23 01:51:25.281777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.349 [2024-07-23 01:51:25.281962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.349 [2024-07-23 01:51:25.281989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.349 qpair failed and we were unable to recover it. 
00:30:12.349 [2024-07-23 01:51:25.282144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.349 [2024-07-23 01:51:25.282323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.349 [2024-07-23 01:51:25.282352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.349 qpair failed and we were unable to recover it. 00:30:12.349 [2024-07-23 01:51:25.282535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.349 [2024-07-23 01:51:25.282702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.349 [2024-07-23 01:51:25.282727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.349 qpair failed and we were unable to recover it. 00:30:12.349 [2024-07-23 01:51:25.282890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.349 [2024-07-23 01:51:25.283084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.349 [2024-07-23 01:51:25.283108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.349 qpair failed and we were unable to recover it. 00:30:12.349 [2024-07-23 01:51:25.283277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.349 [2024-07-23 01:51:25.283474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.349 [2024-07-23 01:51:25.283501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.349 qpair failed and we were unable to recover it. 
00:30:12.349 [2024-07-23 01:51:25.283666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.349 [2024-07-23 01:51:25.283838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.349 [2024-07-23 01:51:25.283867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.349 qpair failed and we were unable to recover it. 00:30:12.349 [2024-07-23 01:51:25.284035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.349 [2024-07-23 01:51:25.284228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.349 [2024-07-23 01:51:25.284253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.349 qpair failed and we were unable to recover it. 00:30:12.349 [2024-07-23 01:51:25.284431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.349 [2024-07-23 01:51:25.284590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.349 [2024-07-23 01:51:25.284621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.349 qpair failed and we were unable to recover it. 00:30:12.349 [2024-07-23 01:51:25.284779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.349 [2024-07-23 01:51:25.284962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.349 [2024-07-23 01:51:25.285010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.349 qpair failed and we were unable to recover it. 
00:30:12.349 [2024-07-23 01:51:25.285213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.349 [2024-07-23 01:51:25.285411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.349 [2024-07-23 01:51:25.285435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.349 qpair failed and we were unable to recover it. 00:30:12.349 [2024-07-23 01:51:25.285600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.350 [2024-07-23 01:51:25.285808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.350 [2024-07-23 01:51:25.285836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.350 qpair failed and we were unable to recover it. 00:30:12.350 [2024-07-23 01:51:25.286163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.350 [2024-07-23 01:51:25.286476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.350 [2024-07-23 01:51:25.286525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.350 qpair failed and we were unable to recover it. 00:30:12.350 [2024-07-23 01:51:25.286716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.350 [2024-07-23 01:51:25.286907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.350 [2024-07-23 01:51:25.286959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.350 qpair failed and we were unable to recover it. 
00:30:12.350 [2024-07-23 01:51:25.287163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.350 [2024-07-23 01:51:25.287411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.350 [2024-07-23 01:51:25.287471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.350 qpair failed and we were unable to recover it. 00:30:12.350 [2024-07-23 01:51:25.287679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.350 [2024-07-23 01:51:25.287912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.350 [2024-07-23 01:51:25.287939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.350 qpair failed and we were unable to recover it. 00:30:12.350 [2024-07-23 01:51:25.288161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.350 [2024-07-23 01:51:25.288437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.350 [2024-07-23 01:51:25.288481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.350 qpair failed and we were unable to recover it. 00:30:12.350 [2024-07-23 01:51:25.288671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.350 [2024-07-23 01:51:25.288865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.350 [2024-07-23 01:51:25.288896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.350 qpair failed and we were unable to recover it. 
00:30:12.350 [2024-07-23 01:51:25.289075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.350 [2024-07-23 01:51:25.289243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.350 [2024-07-23 01:51:25.289283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.350 qpair failed and we were unable to recover it.
[... the same three-line sequence — connect() failed, errno = 111 (ECONNREFUSED) from posix.c:1032:posix_sock_create, sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 from nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock, then "qpair failed and we were unable to recover it." — repeats for every reconnect attempt from 01:51:25.289467 through 01:51:25.322770 ...]
00:30:12.353 [2024-07-23 01:51:25.322924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.353 [2024-07-23 01:51:25.323147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.353 [2024-07-23 01:51:25.323192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.353 qpair failed and we were unable to recover it.
00:30:12.353 [2024-07-23 01:51:25.323396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.353 [2024-07-23 01:51:25.323545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.353 [2024-07-23 01:51:25.323572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.353 qpair failed and we were unable to recover it. 00:30:12.353 [2024-07-23 01:51:25.323797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.353 [2024-07-23 01:51:25.323932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.353 [2024-07-23 01:51:25.323956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.353 qpair failed and we were unable to recover it. 00:30:12.353 [2024-07-23 01:51:25.324127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.353 [2024-07-23 01:51:25.324312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.353 [2024-07-23 01:51:25.324336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.353 qpair failed and we were unable to recover it. 00:30:12.353 [2024-07-23 01:51:25.324537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.353 [2024-07-23 01:51:25.324724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.353 [2024-07-23 01:51:25.324751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.353 qpair failed and we were unable to recover it. 
00:30:12.353 [2024-07-23 01:51:25.324937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.353 [2024-07-23 01:51:25.325173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.353 [2024-07-23 01:51:25.325219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.353 qpair failed and we were unable to recover it. 00:30:12.353 [2024-07-23 01:51:25.325407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.353 [2024-07-23 01:51:25.325593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.353 [2024-07-23 01:51:25.325628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.353 qpair failed and we were unable to recover it. 00:30:12.353 [2024-07-23 01:51:25.325836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.353 [2024-07-23 01:51:25.326063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.353 [2024-07-23 01:51:25.326110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.353 qpair failed and we were unable to recover it. 00:30:12.353 [2024-07-23 01:51:25.326318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.353 [2024-07-23 01:51:25.326503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.353 [2024-07-23 01:51:25.326529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.353 qpair failed and we were unable to recover it. 
00:30:12.353 [2024-07-23 01:51:25.326717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.353 [2024-07-23 01:51:25.326918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.353 [2024-07-23 01:51:25.326950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.353 qpair failed and we were unable to recover it. 00:30:12.353 [2024-07-23 01:51:25.327156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.353 [2024-07-23 01:51:25.327310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.353 [2024-07-23 01:51:25.327335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.353 qpair failed and we were unable to recover it. 00:30:12.353 [2024-07-23 01:51:25.327518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.353 [2024-07-23 01:51:25.327711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.353 [2024-07-23 01:51:25.327736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.353 qpair failed and we were unable to recover it. 00:30:12.353 [2024-07-23 01:51:25.327928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.353 [2024-07-23 01:51:25.328096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.353 [2024-07-23 01:51:25.328123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.353 qpair failed and we were unable to recover it. 
00:30:12.353 [2024-07-23 01:51:25.328279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.353 [2024-07-23 01:51:25.328454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.353 [2024-07-23 01:51:25.328481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.353 qpair failed and we were unable to recover it. 00:30:12.354 [2024-07-23 01:51:25.328694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.354 [2024-07-23 01:51:25.328883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.354 [2024-07-23 01:51:25.328912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.354 qpair failed and we were unable to recover it. 00:30:12.354 [2024-07-23 01:51:25.329166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.354 [2024-07-23 01:51:25.329412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.354 [2024-07-23 01:51:25.329436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.354 qpair failed and we were unable to recover it. 00:30:12.354 [2024-07-23 01:51:25.329596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.354 [2024-07-23 01:51:25.329763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.354 [2024-07-23 01:51:25.329791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.354 qpair failed and we were unable to recover it. 
00:30:12.354 [2024-07-23 01:51:25.329968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.354 [2024-07-23 01:51:25.330130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.354 [2024-07-23 01:51:25.330154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.354 qpair failed and we were unable to recover it. 00:30:12.354 [2024-07-23 01:51:25.330315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.354 [2024-07-23 01:51:25.330475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.354 [2024-07-23 01:51:25.330499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.354 qpair failed and we were unable to recover it. 00:30:12.354 [2024-07-23 01:51:25.330669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.354 [2024-07-23 01:51:25.330838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.354 [2024-07-23 01:51:25.330867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.354 qpair failed and we were unable to recover it. 00:30:12.354 [2024-07-23 01:51:25.331052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.354 [2024-07-23 01:51:25.331242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.354 [2024-07-23 01:51:25.331306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.354 qpair failed and we were unable to recover it. 
00:30:12.354 [2024-07-23 01:51:25.331488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.354 [2024-07-23 01:51:25.331683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.354 [2024-07-23 01:51:25.331708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.354 qpair failed and we were unable to recover it. 00:30:12.354 [2024-07-23 01:51:25.331852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.354 [2024-07-23 01:51:25.332060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.354 [2024-07-23 01:51:25.332127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.354 qpair failed and we were unable to recover it. 00:30:12.354 [2024-07-23 01:51:25.332342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.354 [2024-07-23 01:51:25.332527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.354 [2024-07-23 01:51:25.332554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.354 qpair failed and we were unable to recover it. 00:30:12.354 [2024-07-23 01:51:25.332744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.354 [2024-07-23 01:51:25.332885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.354 [2024-07-23 01:51:25.332909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.354 qpair failed and we were unable to recover it. 
00:30:12.354 [2024-07-23 01:51:25.333093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.354 [2024-07-23 01:51:25.333316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.354 [2024-07-23 01:51:25.333366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.354 qpair failed and we were unable to recover it. 00:30:12.354 [2024-07-23 01:51:25.333564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.354 [2024-07-23 01:51:25.333750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.354 [2024-07-23 01:51:25.333775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.354 qpair failed and we were unable to recover it. 00:30:12.354 [2024-07-23 01:51:25.333918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.354 [2024-07-23 01:51:25.334117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.354 [2024-07-23 01:51:25.334142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.354 qpair failed and we were unable to recover it. 00:30:12.354 [2024-07-23 01:51:25.334325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.354 [2024-07-23 01:51:25.334503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.354 [2024-07-23 01:51:25.334529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.354 qpair failed and we were unable to recover it. 
00:30:12.354 [2024-07-23 01:51:25.334704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.354 [2024-07-23 01:51:25.334885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.354 [2024-07-23 01:51:25.334911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.354 qpair failed and we were unable to recover it. 00:30:12.354 [2024-07-23 01:51:25.335101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.354 [2024-07-23 01:51:25.335307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.354 [2024-07-23 01:51:25.335373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.354 qpair failed and we were unable to recover it. 00:30:12.354 [2024-07-23 01:51:25.335559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.354 [2024-07-23 01:51:25.335730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.354 [2024-07-23 01:51:25.335771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.354 qpair failed and we were unable to recover it. 00:30:12.354 [2024-07-23 01:51:25.335929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.354 [2024-07-23 01:51:25.336107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.354 [2024-07-23 01:51:25.336134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.354 qpair failed and we were unable to recover it. 
00:30:12.354 [2024-07-23 01:51:25.336325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.354 [2024-07-23 01:51:25.336461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.354 [2024-07-23 01:51:25.336486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.354 qpair failed and we were unable to recover it. 00:30:12.354 [2024-07-23 01:51:25.336628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.354 [2024-07-23 01:51:25.336841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.354 [2024-07-23 01:51:25.336869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.354 qpair failed and we were unable to recover it. 00:30:12.354 [2024-07-23 01:51:25.337076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.354 [2024-07-23 01:51:25.337337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.354 [2024-07-23 01:51:25.337361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.354 qpair failed and we were unable to recover it. 00:30:12.354 [2024-07-23 01:51:25.337548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.354 [2024-07-23 01:51:25.337702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.354 [2024-07-23 01:51:25.337729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.354 qpair failed and we were unable to recover it. 
00:30:12.354 [2024-07-23 01:51:25.337918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.354 [2024-07-23 01:51:25.338079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.354 [2024-07-23 01:51:25.338105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.354 qpair failed and we were unable to recover it. 00:30:12.354 [2024-07-23 01:51:25.338308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.354 [2024-07-23 01:51:25.338493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.354 [2024-07-23 01:51:25.338520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.354 qpair failed and we were unable to recover it. 00:30:12.354 [2024-07-23 01:51:25.338695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.354 [2024-07-23 01:51:25.338837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.354 [2024-07-23 01:51:25.338862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.354 qpair failed and we were unable to recover it. 00:30:12.354 [2024-07-23 01:51:25.339039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.354 [2024-07-23 01:51:25.339205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.354 [2024-07-23 01:51:25.339230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.354 qpair failed and we were unable to recover it. 
00:30:12.354 [2024-07-23 01:51:25.339445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.355 [2024-07-23 01:51:25.339608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.355 [2024-07-23 01:51:25.339649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.355 qpair failed and we were unable to recover it. 00:30:12.355 [2024-07-23 01:51:25.339826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.355 [2024-07-23 01:51:25.340011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.355 [2024-07-23 01:51:25.340056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.355 qpair failed and we were unable to recover it. 00:30:12.355 [2024-07-23 01:51:25.340361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.355 [2024-07-23 01:51:25.340563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.355 [2024-07-23 01:51:25.340589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.355 qpair failed and we were unable to recover it. 00:30:12.355 [2024-07-23 01:51:25.340805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.355 [2024-07-23 01:51:25.341096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.355 [2024-07-23 01:51:25.341155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.355 qpair failed and we were unable to recover it. 
00:30:12.355 [2024-07-23 01:51:25.341321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.355 [2024-07-23 01:51:25.341453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.355 [2024-07-23 01:51:25.341477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.355 qpair failed and we were unable to recover it. 00:30:12.355 [2024-07-23 01:51:25.341676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.355 [2024-07-23 01:51:25.341863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.355 [2024-07-23 01:51:25.341889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.355 qpair failed and we were unable to recover it. 00:30:12.355 [2024-07-23 01:51:25.342111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.355 [2024-07-23 01:51:25.342384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.355 [2024-07-23 01:51:25.342435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.355 qpair failed and we were unable to recover it. 00:30:12.355 [2024-07-23 01:51:25.342649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.355 [2024-07-23 01:51:25.342842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.355 [2024-07-23 01:51:25.342870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.355 qpair failed and we were unable to recover it. 
00:30:12.355 [2024-07-23 01:51:25.343064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.355 [2024-07-23 01:51:25.343198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.355 [2024-07-23 01:51:25.343224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.355 qpair failed and we were unable to recover it. 00:30:12.355 [2024-07-23 01:51:25.343419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.355 [2024-07-23 01:51:25.343552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.355 [2024-07-23 01:51:25.343576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.355 qpair failed and we were unable to recover it. 00:30:12.355 [2024-07-23 01:51:25.343772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.355 [2024-07-23 01:51:25.343934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.355 [2024-07-23 01:51:25.343959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.355 qpair failed and we were unable to recover it. 00:30:12.355 [2024-07-23 01:51:25.344126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.355 [2024-07-23 01:51:25.344289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.355 [2024-07-23 01:51:25.344314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.355 qpair failed and we were unable to recover it. 
00:30:12.355 [2024-07-23 01:51:25.344507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.355 [2024-07-23 01:51:25.344716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.355 [2024-07-23 01:51:25.344745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.355 qpair failed and we were unable to recover it. 00:30:12.355 [2024-07-23 01:51:25.344908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.355 [2024-07-23 01:51:25.345099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.355 [2024-07-23 01:51:25.345140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.355 qpair failed and we were unable to recover it. 00:30:12.355 [2024-07-23 01:51:25.345422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.355 [2024-07-23 01:51:25.345658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.355 [2024-07-23 01:51:25.345683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.355 qpair failed and we were unable to recover it. 00:30:12.355 [2024-07-23 01:51:25.345853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.355 [2024-07-23 01:51:25.345995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.355 [2024-07-23 01:51:25.346019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.355 qpair failed and we were unable to recover it. 
00:30:12.355 [2024-07-23 01:51:25.346210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.355 [2024-07-23 01:51:25.346347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.355 [2024-07-23 01:51:25.346370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.355 qpair failed and we were unable to recover it. 00:30:12.355 [2024-07-23 01:51:25.346545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.355 [2024-07-23 01:51:25.346679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.355 [2024-07-23 01:51:25.346703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.355 qpair failed and we were unable to recover it. 00:30:12.355 [2024-07-23 01:51:25.346867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.355 [2024-07-23 01:51:25.347060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.355 [2024-07-23 01:51:25.347103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.355 qpair failed and we were unable to recover it. 00:30:12.355 [2024-07-23 01:51:25.347282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.355 [2024-07-23 01:51:25.347440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.355 [2024-07-23 01:51:25.347467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.355 qpair failed and we were unable to recover it. 
00:30:12.355 [2024-07-23 01:51:25.347670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.355 [2024-07-23 01:51:25.347829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.355 [2024-07-23 01:51:25.347854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.355 qpair failed and we were unable to recover it. 00:30:12.355 [2024-07-23 01:51:25.348017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.355 [2024-07-23 01:51:25.348207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.355 [2024-07-23 01:51:25.348231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.355 qpair failed and we were unable to recover it. 00:30:12.355 [2024-07-23 01:51:25.348396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.355 [2024-07-23 01:51:25.348573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.355 [2024-07-23 01:51:25.348600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.355 qpair failed and we were unable to recover it. 00:30:12.355 [2024-07-23 01:51:25.348768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.355 [2024-07-23 01:51:25.348957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.355 [2024-07-23 01:51:25.348986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.355 qpair failed and we were unable to recover it. 
00:30:12.355 [2024-07-23 01:51:25.349148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.355 [2024-07-23 01:51:25.349357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.355 [2024-07-23 01:51:25.349384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.355 qpair failed and we were unable to recover it. 00:30:12.355 [2024-07-23 01:51:25.349556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.355 [2024-07-23 01:51:25.349746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.355 [2024-07-23 01:51:25.349771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.355 qpair failed and we were unable to recover it. 00:30:12.355 [2024-07-23 01:51:25.349917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.355 [2024-07-23 01:51:25.350121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.355 [2024-07-23 01:51:25.350148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.355 qpair failed and we were unable to recover it. 00:30:12.355 [2024-07-23 01:51:25.350324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.355 [2024-07-23 01:51:25.350532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.355 [2024-07-23 01:51:25.350558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.355 qpair failed and we were unable to recover it. 
00:30:12.356 [2024-07-23 01:51:25.350740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.356 [2024-07-23 01:51:25.350926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.356 [2024-07-23 01:51:25.350953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.356 qpair failed and we were unable to recover it. 00:30:12.356 [2024-07-23 01:51:25.351116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.356 [2024-07-23 01:51:25.351306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.356 [2024-07-23 01:51:25.351332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.356 qpair failed and we were unable to recover it. 00:30:12.356 [2024-07-23 01:51:25.351507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.356 [2024-07-23 01:51:25.351694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.356 [2024-07-23 01:51:25.351722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.356 qpair failed and we were unable to recover it. 00:30:12.356 [2024-07-23 01:51:25.351912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.356 [2024-07-23 01:51:25.352049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.356 [2024-07-23 01:51:25.352074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.356 qpair failed and we were unable to recover it. 
00:30:12.356 [2024-07-23 01:51:25.352238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.356 [2024-07-23 01:51:25.352387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.356 [2024-07-23 01:51:25.352415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.356 qpair failed and we were unable to recover it. 00:30:12.356 [2024-07-23 01:51:25.352624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.356 [2024-07-23 01:51:25.352804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.356 [2024-07-23 01:51:25.352831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.356 qpair failed and we were unable to recover it. 00:30:12.356 [2024-07-23 01:51:25.353042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.356 [2024-07-23 01:51:25.353210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.356 [2024-07-23 01:51:25.353235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.356 qpair failed and we were unable to recover it. 00:30:12.356 [2024-07-23 01:51:25.353432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.356 [2024-07-23 01:51:25.353610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.356 [2024-07-23 01:51:25.353645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.356 qpair failed and we were unable to recover it. 
00:30:12.356 [2024-07-23 01:51:25.353826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.356 [2024-07-23 01:51:25.354098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.356 [2024-07-23 01:51:25.354147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.356 qpair failed and we were unable to recover it. 00:30:12.356 [2024-07-23 01:51:25.354316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.356 [2024-07-23 01:51:25.354522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.356 [2024-07-23 01:51:25.354548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.356 qpair failed and we were unable to recover it. 00:30:12.356 [2024-07-23 01:51:25.354702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.356 [2024-07-23 01:51:25.354879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.356 [2024-07-23 01:51:25.354906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.356 qpair failed and we were unable to recover it. 00:30:12.356 [2024-07-23 01:51:25.355087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.356 [2024-07-23 01:51:25.355302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.356 [2024-07-23 01:51:25.355353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.356 qpair failed and we were unable to recover it. 
00:30:12.356 [2024-07-23 01:51:25.355538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.356 [2024-07-23 01:51:25.355695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.356 [2024-07-23 01:51:25.355724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.356 qpair failed and we were unable to recover it. 00:30:12.356 [2024-07-23 01:51:25.355937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.356 [2024-07-23 01:51:25.356099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.356 [2024-07-23 01:51:25.356123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.356 qpair failed and we were unable to recover it. 00:30:12.356 [2024-07-23 01:51:25.356294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.356 [2024-07-23 01:51:25.356499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.356 [2024-07-23 01:51:25.356523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.356 qpair failed and we were unable to recover it. 00:30:12.356 [2024-07-23 01:51:25.356675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.356 [2024-07-23 01:51:25.356844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.356 [2024-07-23 01:51:25.356871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.356 qpair failed and we were unable to recover it. 
00:30:12.356 [2024-07-23 01:51:25.357046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.356 [2024-07-23 01:51:25.357195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.356 [2024-07-23 01:51:25.357228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.356 qpair failed and we were unable to recover it. 00:30:12.356 [2024-07-23 01:51:25.357436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.356 [2024-07-23 01:51:25.357604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.356 [2024-07-23 01:51:25.357635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.356 qpair failed and we were unable to recover it. 00:30:12.356 [2024-07-23 01:51:25.357768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.356 [2024-07-23 01:51:25.357938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.356 [2024-07-23 01:51:25.357964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.356 qpair failed and we were unable to recover it. 00:30:12.356 [2024-07-23 01:51:25.358161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.356 [2024-07-23 01:51:25.358347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.356 [2024-07-23 01:51:25.358371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.356 qpair failed and we were unable to recover it. 
00:30:12.356 [2024-07-23 01:51:25.358535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.356 [2024-07-23 01:51:25.358702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.356 [2024-07-23 01:51:25.358727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.356 qpair failed and we were unable to recover it. 00:30:12.356 [2024-07-23 01:51:25.358869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.356 [2024-07-23 01:51:25.359036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.356 [2024-07-23 01:51:25.359061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.356 qpair failed and we were unable to recover it. 00:30:12.356 [2024-07-23 01:51:25.359255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.356 [2024-07-23 01:51:25.359395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.356 [2024-07-23 01:51:25.359437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.356 qpair failed and we were unable to recover it. 00:30:12.356 [2024-07-23 01:51:25.359589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.356 [2024-07-23 01:51:25.359772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.356 [2024-07-23 01:51:25.359800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.356 qpair failed and we were unable to recover it. 
00:30:12.356 [2024-07-23 01:51:25.359971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.356 [2024-07-23 01:51:25.360135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.356 [2024-07-23 01:51:25.360159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.356 qpair failed and we were unable to recover it. 00:30:12.356 [2024-07-23 01:51:25.360360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.356 [2024-07-23 01:51:25.360550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.356 [2024-07-23 01:51:25.360575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.356 qpair failed and we were unable to recover it. 00:30:12.356 [2024-07-23 01:51:25.360716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.356 [2024-07-23 01:51:25.360880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.356 [2024-07-23 01:51:25.360924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.356 qpair failed and we were unable to recover it. 00:30:12.356 [2024-07-23 01:51:25.361104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.356 [2024-07-23 01:51:25.361262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.356 [2024-07-23 01:51:25.361289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.356 qpair failed and we were unable to recover it. 
00:30:12.357 [2024-07-23 01:51:25.361474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.357 [2024-07-23 01:51:25.361705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.357 [2024-07-23 01:51:25.361730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.357 qpair failed and we were unable to recover it. 00:30:12.357 [2024-07-23 01:51:25.361876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.357 [2024-07-23 01:51:25.362044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.357 [2024-07-23 01:51:25.362070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.357 qpair failed and we were unable to recover it. 00:30:12.357 [2024-07-23 01:51:25.362260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.357 [2024-07-23 01:51:25.362450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.357 [2024-07-23 01:51:25.362475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.357 qpair failed and we were unable to recover it. 00:30:12.357 [2024-07-23 01:51:25.362642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.357 [2024-07-23 01:51:25.362804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.357 [2024-07-23 01:51:25.362829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.357 qpair failed and we were unable to recover it. 
00:30:12.357 [2024-07-23 01:51:25.363018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.357 [2024-07-23 01:51:25.363224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.357 [2024-07-23 01:51:25.363251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.357 qpair failed and we were unable to recover it. 00:30:12.357 [2024-07-23 01:51:25.363435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.357 [2024-07-23 01:51:25.363568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.357 [2024-07-23 01:51:25.363592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.357 qpair failed and we were unable to recover it. 00:30:12.357 [2024-07-23 01:51:25.363814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.357 [2024-07-23 01:51:25.363983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.357 [2024-07-23 01:51:25.364007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.357 qpair failed and we were unable to recover it. 00:30:12.357 [2024-07-23 01:51:25.364139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.357 [2024-07-23 01:51:25.364277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.357 [2024-07-23 01:51:25.364301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.357 qpair failed and we were unable to recover it. 
00:30:12.357 [2024-07-23 01:51:25.364444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.357 [2024-07-23 01:51:25.364637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.357 [2024-07-23 01:51:25.364668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.357 qpair failed and we were unable to recover it. 00:30:12.357 [2024-07-23 01:51:25.364843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.357 [2024-07-23 01:51:25.365025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.357 [2024-07-23 01:51:25.365048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.357 qpair failed and we were unable to recover it. 00:30:12.357 [2024-07-23 01:51:25.365236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.357 [2024-07-23 01:51:25.365406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.357 [2024-07-23 01:51:25.365430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.357 qpair failed and we were unable to recover it. 00:30:12.357 [2024-07-23 01:51:25.365597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.357 [2024-07-23 01:51:25.365781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.357 [2024-07-23 01:51:25.365805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.357 qpair failed and we were unable to recover it. 
00:30:12.357 [2024-07-23 01:51:25.365950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.357 [2024-07-23 01:51:25.366162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.357 [2024-07-23 01:51:25.366206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.357 qpair failed and we were unable to recover it. 00:30:12.357 [2024-07-23 01:51:25.366370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.357 [2024-07-23 01:51:25.366501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.357 [2024-07-23 01:51:25.366526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.357 qpair failed and we were unable to recover it. 00:30:12.357 [2024-07-23 01:51:25.366690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.357 [2024-07-23 01:51:25.366852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.357 [2024-07-23 01:51:25.366876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.357 qpair failed and we were unable to recover it. 00:30:12.357 [2024-07-23 01:51:25.367044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.357 [2024-07-23 01:51:25.367187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.357 [2024-07-23 01:51:25.367229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.357 qpair failed and we were unable to recover it. 
00:30:12.357 [2024-07-23 01:51:25.367381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.357 [2024-07-23 01:51:25.367574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.357 [2024-07-23 01:51:25.367600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.357 qpair failed and we were unable to recover it. 00:30:12.357 [2024-07-23 01:51:25.367821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.357 [2024-07-23 01:51:25.367958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.357 [2024-07-23 01:51:25.367998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.357 qpair failed and we were unable to recover it. 00:30:12.357 [2024-07-23 01:51:25.368251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.357 [2024-07-23 01:51:25.368444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.357 [2024-07-23 01:51:25.368483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.357 qpair failed and we were unable to recover it. 00:30:12.357 [2024-07-23 01:51:25.368679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.357 [2024-07-23 01:51:25.368822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.357 [2024-07-23 01:51:25.368846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.357 qpair failed and we were unable to recover it. 
00:30:12.357 [2024-07-23 01:51:25.369023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.357 [2024-07-23 01:51:25.369268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.357 [2024-07-23 01:51:25.369322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.357 qpair failed and we were unable to recover it. 00:30:12.357 [2024-07-23 01:51:25.369517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.357 [2024-07-23 01:51:25.369679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.357 [2024-07-23 01:51:25.369704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.357 qpair failed and we were unable to recover it. 00:30:12.357 [2024-07-23 01:51:25.369867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.357 [2024-07-23 01:51:25.370043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.357 [2024-07-23 01:51:25.370067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.357 qpair failed and we were unable to recover it. 00:30:12.357 [2024-07-23 01:51:25.370229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.357 [2024-07-23 01:51:25.370412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.357 [2024-07-23 01:51:25.370440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.357 qpair failed and we were unable to recover it. 
00:30:12.357 [2024-07-23 01:51:25.370627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.357 [2024-07-23 01:51:25.370788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.357 [2024-07-23 01:51:25.370812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.357 qpair failed and we were unable to recover it. 00:30:12.357 [2024-07-23 01:51:25.370984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.357 [2024-07-23 01:51:25.371169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.357 [2024-07-23 01:51:25.371195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.357 qpair failed and we were unable to recover it. 00:30:12.357 [2024-07-23 01:51:25.371375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.357 [2024-07-23 01:51:25.371560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.357 [2024-07-23 01:51:25.371589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.357 qpair failed and we were unable to recover it. 00:30:12.357 [2024-07-23 01:51:25.371796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.357 [2024-07-23 01:51:25.371987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.358 [2024-07-23 01:51:25.372031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.358 qpair failed and we were unable to recover it. 
00:30:12.358 [2024-07-23 01:51:25.372248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.358 [2024-07-23 01:51:25.372418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.358 [2024-07-23 01:51:25.372445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.358 qpair failed and we were unable to recover it. 00:30:12.358 [2024-07-23 01:51:25.372635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.358 [2024-07-23 01:51:25.372793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.358 [2024-07-23 01:51:25.372818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.358 qpair failed and we were unable to recover it. 00:30:12.358 [2024-07-23 01:51:25.372990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.358 [2024-07-23 01:51:25.373136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.358 [2024-07-23 01:51:25.373160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.358 qpair failed and we were unable to recover it. 00:30:12.358 [2024-07-23 01:51:25.373323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.358 [2024-07-23 01:51:25.373513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.358 [2024-07-23 01:51:25.373540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.358 qpair failed and we were unable to recover it. 
00:30:12.358 [2024-07-23 01:51:25.373736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.358 [2024-07-23 01:51:25.373900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.358 [2024-07-23 01:51:25.373924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.358 qpair failed and we were unable to recover it. 00:30:12.358 [2024-07-23 01:51:25.374069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.358 [2024-07-23 01:51:25.374198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.358 [2024-07-23 01:51:25.374221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.358 qpair failed and we were unable to recover it. 00:30:12.358 [2024-07-23 01:51:25.374436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.358 [2024-07-23 01:51:25.374624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.358 [2024-07-23 01:51:25.374666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.358 qpair failed and we were unable to recover it. 00:30:12.358 [2024-07-23 01:51:25.374814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.358 [2024-07-23 01:51:25.374979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.358 [2024-07-23 01:51:25.375003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.358 qpair failed and we were unable to recover it. 
00:30:12.358 [2024-07-23 01:51:25.375137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.358 [2024-07-23 01:51:25.375358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.358 [2024-07-23 01:51:25.375409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.358 qpair failed and we were unable to recover it. 00:30:12.358 [2024-07-23 01:51:25.375601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.358 [2024-07-23 01:51:25.375785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.358 [2024-07-23 01:51:25.375810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.358 qpair failed and we were unable to recover it. 00:30:12.358 [2024-07-23 01:51:25.375952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.358 [2024-07-23 01:51:25.376161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.358 [2024-07-23 01:51:25.376188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.358 qpair failed and we were unable to recover it. 00:30:12.358 [2024-07-23 01:51:25.376394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.358 [2024-07-23 01:51:25.376575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.358 [2024-07-23 01:51:25.376606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.358 qpair failed and we were unable to recover it. 
00:30:12.358 [2024-07-23 01:51:25.376817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.358 [2024-07-23 01:51:25.376978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.358 [2024-07-23 01:51:25.377002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.358 qpair failed and we were unable to recover it.
00:30:12.358 [2024-07-23 01:51:25.377175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.358 [2024-07-23 01:51:25.377344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.358 [2024-07-23 01:51:25.377368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.358 qpair failed and we were unable to recover it.
00:30:12.358 [2024-07-23 01:51:25.377529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.358 [2024-07-23 01:51:25.377691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.358 [2024-07-23 01:51:25.377716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.358 qpair failed and we were unable to recover it.
00:30:12.358 [2024-07-23 01:51:25.377860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.358 [2024-07-23 01:51:25.378059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.358 [2024-07-23 01:51:25.378086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.358 qpair failed and we were unable to recover it.
00:30:12.358 [2024-07-23 01:51:25.378234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.358 [2024-07-23 01:51:25.378416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.358 [2024-07-23 01:51:25.378443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.358 qpair failed and we were unable to recover it.
00:30:12.358 [2024-07-23 01:51:25.378624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.358 [2024-07-23 01:51:25.378798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.358 [2024-07-23 01:51:25.378822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.358 qpair failed and we were unable to recover it.
00:30:12.358 [2024-07-23 01:51:25.378959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.358 [2024-07-23 01:51:25.379091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.358 [2024-07-23 01:51:25.379115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.358 qpair failed and we were unable to recover it.
00:30:12.358 [2024-07-23 01:51:25.379289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.358 [2024-07-23 01:51:25.379515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.358 [2024-07-23 01:51:25.379540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.358 qpair failed and we were unable to recover it.
00:30:12.358 [2024-07-23 01:51:25.379712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.358 [2024-07-23 01:51:25.379878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.358 [2024-07-23 01:51:25.379903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.358 qpair failed and we were unable to recover it.
00:30:12.358 [2024-07-23 01:51:25.380061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.358 [2024-07-23 01:51:25.380194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.358 [2024-07-23 01:51:25.380221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.358 qpair failed and we were unable to recover it.
00:30:12.358 [2024-07-23 01:51:25.380407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.358 [2024-07-23 01:51:25.380562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.358 [2024-07-23 01:51:25.380591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.358 qpair failed and we were unable to recover it.
00:30:12.359 [2024-07-23 01:51:25.380781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.359 [2024-07-23 01:51:25.380997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.359 [2024-07-23 01:51:25.381024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.359 qpair failed and we were unable to recover it.
00:30:12.359 [2024-07-23 01:51:25.381201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.359 [2024-07-23 01:51:25.381373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.359 [2024-07-23 01:51:25.381400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.359 qpair failed and we were unable to recover it.
00:30:12.359 [2024-07-23 01:51:25.381552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.359 [2024-07-23 01:51:25.381720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.359 [2024-07-23 01:51:25.381745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.359 qpair failed and we were unable to recover it.
00:30:12.359 [2024-07-23 01:51:25.381977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.359 [2024-07-23 01:51:25.382211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.359 [2024-07-23 01:51:25.382235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.359 qpair failed and we were unable to recover it.
00:30:12.359 [2024-07-23 01:51:25.382422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.359 [2024-07-23 01:51:25.382597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.359 [2024-07-23 01:51:25.382628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.359 qpair failed and we were unable to recover it.
00:30:12.359 [2024-07-23 01:51:25.382801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.359 [2024-07-23 01:51:25.382978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.359 [2024-07-23 01:51:25.383019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.359 qpair failed and we were unable to recover it.
00:30:12.359 [2024-07-23 01:51:25.383231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.359 [2024-07-23 01:51:25.383414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.359 [2024-07-23 01:51:25.383441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.359 qpair failed and we were unable to recover it.
00:30:12.359 [2024-07-23 01:51:25.383597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.359 [2024-07-23 01:51:25.383803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.359 [2024-07-23 01:51:25.383827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.359 qpair failed and we were unable to recover it.
00:30:12.359 [2024-07-23 01:51:25.383975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.359 [2024-07-23 01:51:25.384141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.359 [2024-07-23 01:51:25.384165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.359 qpair failed and we were unable to recover it.
00:30:12.359 [2024-07-23 01:51:25.384362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.359 [2024-07-23 01:51:25.384565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.359 [2024-07-23 01:51:25.384589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.359 qpair failed and we were unable to recover it.
00:30:12.359 [2024-07-23 01:51:25.384776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.359 [2024-07-23 01:51:25.384906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.359 [2024-07-23 01:51:25.384930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.359 qpair failed and we were unable to recover it.
00:30:12.359 [2024-07-23 01:51:25.385097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.359 [2024-07-23 01:51:25.385392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.359 [2024-07-23 01:51:25.385443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.359 qpair failed and we were unable to recover it.
00:30:12.359 [2024-07-23 01:51:25.385628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.359 [2024-07-23 01:51:25.385808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.359 [2024-07-23 01:51:25.385832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.359 qpair failed and we were unable to recover it.
00:30:12.359 [2024-07-23 01:51:25.385985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.359 [2024-07-23 01:51:25.386199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.359 [2024-07-23 01:51:25.386227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.359 qpair failed and we were unable to recover it.
00:30:12.359 [2024-07-23 01:51:25.386389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.359 [2024-07-23 01:51:25.386546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.359 [2024-07-23 01:51:25.386570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.359 qpair failed and we were unable to recover it.
00:30:12.359 [2024-07-23 01:51:25.386754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.359 [2024-07-23 01:51:25.386910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.359 [2024-07-23 01:51:25.386938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.359 qpair failed and we were unable to recover it.
00:30:12.359 [2024-07-23 01:51:25.387129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.359 [2024-07-23 01:51:25.387287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.359 [2024-07-23 01:51:25.387311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.359 qpair failed and we were unable to recover it.
00:30:12.359 [2024-07-23 01:51:25.387522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.359 [2024-07-23 01:51:25.387681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.359 [2024-07-23 01:51:25.387706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.359 qpair failed and we were unable to recover it.
00:30:12.359 [2024-07-23 01:51:25.387851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.359 [2024-07-23 01:51:25.388042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.359 [2024-07-23 01:51:25.388066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.359 qpair failed and we were unable to recover it.
00:30:12.359 [2024-07-23 01:51:25.388262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.359 [2024-07-23 01:51:25.388470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.359 [2024-07-23 01:51:25.388497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.359 qpair failed and we were unable to recover it.
00:30:12.359 [2024-07-23 01:51:25.388693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.359 [2024-07-23 01:51:25.388836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.359 [2024-07-23 01:51:25.388860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.359 qpair failed and we were unable to recover it.
00:30:12.359 [2024-07-23 01:51:25.389024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.359 [2024-07-23 01:51:25.389159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.359 [2024-07-23 01:51:25.389199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.359 qpair failed and we were unable to recover it.
00:30:12.359 [2024-07-23 01:51:25.389405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.359 [2024-07-23 01:51:25.389544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.359 [2024-07-23 01:51:25.389568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.359 qpair failed and we were unable to recover it.
00:30:12.359 [2024-07-23 01:51:25.389715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.359 [2024-07-23 01:51:25.389925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.359 [2024-07-23 01:51:25.389952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.359 qpair failed and we were unable to recover it.
00:30:12.359 [2024-07-23 01:51:25.390156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.359 [2024-07-23 01:51:25.390344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.359 [2024-07-23 01:51:25.390368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.359 qpair failed and we were unable to recover it.
00:30:12.359 [2024-07-23 01:51:25.390506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.359 [2024-07-23 01:51:25.390642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.359 [2024-07-23 01:51:25.390667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.359 qpair failed and we were unable to recover it.
00:30:12.359 [2024-07-23 01:51:25.390793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.359 [2024-07-23 01:51:25.390962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.359 [2024-07-23 01:51:25.390986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.359 qpair failed and we were unable to recover it.
00:30:12.359 [2024-07-23 01:51:25.391184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.359 [2024-07-23 01:51:25.391340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.359 [2024-07-23 01:51:25.391366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.360 qpair failed and we were unable to recover it.
00:30:12.360 [2024-07-23 01:51:25.391560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.360 [2024-07-23 01:51:25.391730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.360 [2024-07-23 01:51:25.391756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.360 qpair failed and we were unable to recover it.
00:30:12.360 [2024-07-23 01:51:25.391924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.360 [2024-07-23 01:51:25.392089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.360 [2024-07-23 01:51:25.392131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.360 qpair failed and we were unable to recover it.
00:30:12.360 [2024-07-23 01:51:25.392289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.360 [2024-07-23 01:51:25.392496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.360 [2024-07-23 01:51:25.392523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.360 qpair failed and we were unable to recover it.
00:30:12.360 [2024-07-23 01:51:25.392696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.360 [2024-07-23 01:51:25.392838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.360 [2024-07-23 01:51:25.392863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.360 qpair failed and we were unable to recover it.
00:30:12.360 [2024-07-23 01:51:25.393030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.360 [2024-07-23 01:51:25.393244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.360 [2024-07-23 01:51:25.393271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.360 qpair failed and we were unable to recover it.
00:30:12.360 [2024-07-23 01:51:25.393498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.360 [2024-07-23 01:51:25.393716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.360 [2024-07-23 01:51:25.393741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.360 qpair failed and we were unable to recover it.
00:30:12.360 [2024-07-23 01:51:25.393873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.360 [2024-07-23 01:51:25.394040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.360 [2024-07-23 01:51:25.394064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.360 qpair failed and we were unable to recover it.
00:30:12.360 [2024-07-23 01:51:25.394318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.360 [2024-07-23 01:51:25.394522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.360 [2024-07-23 01:51:25.394549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.360 qpair failed and we were unable to recover it.
00:30:12.360 [2024-07-23 01:51:25.394717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.360 [2024-07-23 01:51:25.394860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.360 [2024-07-23 01:51:25.394885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.360 qpair failed and we were unable to recover it.
00:30:12.360 [2024-07-23 01:51:25.395019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.360 [2024-07-23 01:51:25.395159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.360 [2024-07-23 01:51:25.395183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.360 qpair failed and we were unable to recover it.
00:30:12.360 [2024-07-23 01:51:25.395367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.360 [2024-07-23 01:51:25.395571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.360 [2024-07-23 01:51:25.395597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.360 qpair failed and we were unable to recover it.
00:30:12.360 [2024-07-23 01:51:25.395791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.360 [2024-07-23 01:51:25.395933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.360 [2024-07-23 01:51:25.395961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.360 qpair failed and we were unable to recover it.
00:30:12.360 [2024-07-23 01:51:25.396125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.360 [2024-07-23 01:51:25.396313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.360 [2024-07-23 01:51:25.396336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.360 qpair failed and we were unable to recover it.
00:30:12.360 [2024-07-23 01:51:25.396520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.360 [2024-07-23 01:51:25.396714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.360 [2024-07-23 01:51:25.396738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.360 qpair failed and we were unable to recover it.
00:30:12.360 [2024-07-23 01:51:25.396874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.360 [2024-07-23 01:51:25.397058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.360 [2024-07-23 01:51:25.397082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.360 qpair failed and we were unable to recover it.
00:30:12.360 [2024-07-23 01:51:25.397281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.360 [2024-07-23 01:51:25.397424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.360 [2024-07-23 01:51:25.397449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.360 qpair failed and we were unable to recover it.
00:30:12.360 [2024-07-23 01:51:25.397620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.360 [2024-07-23 01:51:25.397811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.360 [2024-07-23 01:51:25.397836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.360 qpair failed and we were unable to recover it.
00:30:12.360 [2024-07-23 01:51:25.397997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.360 [2024-07-23 01:51:25.398167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.360 [2024-07-23 01:51:25.398227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.360 qpair failed and we were unable to recover it.
00:30:12.360 [2024-07-23 01:51:25.398451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.360 [2024-07-23 01:51:25.398636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.360 [2024-07-23 01:51:25.398663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.360 qpair failed and we were unable to recover it.
00:30:12.360 [2024-07-23 01:51:25.398839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.360 [2024-07-23 01:51:25.398979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.360 [2024-07-23 01:51:25.399003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.360 qpair failed and we were unable to recover it.
00:30:12.360 [2024-07-23 01:51:25.399140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.360 [2024-07-23 01:51:25.399309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.360 [2024-07-23 01:51:25.399338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.360 qpair failed and we were unable to recover it.
00:30:12.360 [2024-07-23 01:51:25.399512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.360 [2024-07-23 01:51:25.399695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.360 [2024-07-23 01:51:25.399721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.360 qpair failed and we were unable to recover it.
00:30:12.360 [2024-07-23 01:51:25.399891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.360 [2024-07-23 01:51:25.400026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.360 [2024-07-23 01:51:25.400066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.360 qpair failed and we were unable to recover it.
00:30:12.360 [2024-07-23 01:51:25.400246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.360 [2024-07-23 01:51:25.400390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.360 [2024-07-23 01:51:25.400418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.360 qpair failed and we were unable to recover it.
00:30:12.360 [2024-07-23 01:51:25.400627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.360 [2024-07-23 01:51:25.400805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.360 [2024-07-23 01:51:25.400829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.360 qpair failed and we were unable to recover it.
00:30:12.360 [2024-07-23 01:51:25.400968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.360 [2024-07-23 01:51:25.401155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.360 [2024-07-23 01:51:25.401180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.360 qpair failed and we were unable to recover it.
00:30:12.360 [2024-07-23 01:51:25.401326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.360 [2024-07-23 01:51:25.401533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.360 [2024-07-23 01:51:25.401560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.360 qpair failed and we were unable to recover it.
00:30:12.360 [2024-07-23 01:51:25.401773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.360 [2024-07-23 01:51:25.401941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.361 [2024-07-23 01:51:25.401965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.361 qpair failed and we were unable to recover it.
00:30:12.361 [2024-07-23 01:51:25.402097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.361 [2024-07-23 01:51:25.402289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.361 [2024-07-23 01:51:25.402315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.361 qpair failed and we were unable to recover it.
00:30:12.361 [2024-07-23 01:51:25.402494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.361 [2024-07-23 01:51:25.402653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.361 [2024-07-23 01:51:25.402682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.361 qpair failed and we were unable to recover it.
00:30:12.361 [2024-07-23 01:51:25.402836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.361 [2024-07-23 01:51:25.403001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.361 [2024-07-23 01:51:25.403042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.361 qpair failed and we were unable to recover it.
00:30:12.361 [2024-07-23 01:51:25.403269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.361 [2024-07-23 01:51:25.403484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.361 [2024-07-23 01:51:25.403508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.361 qpair failed and we were unable to recover it.
00:30:12.361 [2024-07-23 01:51:25.403687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.361 [2024-07-23 01:51:25.403851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.361 [2024-07-23 01:51:25.403876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.361 qpair failed and we were unable to recover it.
00:30:12.361 [2024-07-23 01:51:25.404088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.361 [2024-07-23 01:51:25.404318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.361 [2024-07-23 01:51:25.404368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.361 qpair failed and we were unable to recover it. 00:30:12.361 [2024-07-23 01:51:25.404535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.361 [2024-07-23 01:51:25.404698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.361 [2024-07-23 01:51:25.404723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.361 qpair failed and we were unable to recover it. 00:30:12.361 [2024-07-23 01:51:25.404865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.361 [2024-07-23 01:51:25.405047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.361 [2024-07-23 01:51:25.405090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.361 qpair failed and we were unable to recover it. 00:30:12.361 [2024-07-23 01:51:25.405274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.361 [2024-07-23 01:51:25.405463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.361 [2024-07-23 01:51:25.405487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.361 qpair failed and we were unable to recover it. 
00:30:12.361 [2024-07-23 01:51:25.405677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.361 [2024-07-23 01:51:25.405867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.361 [2024-07-23 01:51:25.405902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.361 qpair failed and we were unable to recover it. 00:30:12.361 [2024-07-23 01:51:25.406067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.361 [2024-07-23 01:51:25.406220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.361 [2024-07-23 01:51:25.406245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.361 qpair failed and we were unable to recover it. 00:30:12.361 [2024-07-23 01:51:25.406385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.361 [2024-07-23 01:51:25.406575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.361 [2024-07-23 01:51:25.406599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.361 qpair failed and we were unable to recover it. 00:30:12.361 [2024-07-23 01:51:25.406789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.361 [2024-07-23 01:51:25.406971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.361 [2024-07-23 01:51:25.406998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.361 qpair failed and we were unable to recover it. 
00:30:12.361 [2024-07-23 01:51:25.407161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.361 [2024-07-23 01:51:25.407294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.361 [2024-07-23 01:51:25.407318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.361 qpair failed and we were unable to recover it. 00:30:12.361 [2024-07-23 01:51:25.407535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.361 [2024-07-23 01:51:25.407742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.361 [2024-07-23 01:51:25.407767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.361 qpair failed and we were unable to recover it. 00:30:12.361 [2024-07-23 01:51:25.407932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.361 [2024-07-23 01:51:25.408114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.361 [2024-07-23 01:51:25.408140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.361 qpair failed and we were unable to recover it. 00:30:12.361 [2024-07-23 01:51:25.408350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.361 [2024-07-23 01:51:25.408510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.361 [2024-07-23 01:51:25.408536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.361 qpair failed and we were unable to recover it. 
00:30:12.361 [2024-07-23 01:51:25.408680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.361 [2024-07-23 01:51:25.408884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.361 [2024-07-23 01:51:25.408911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.361 qpair failed and we were unable to recover it. 00:30:12.361 [2024-07-23 01:51:25.409100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.361 [2024-07-23 01:51:25.409228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.361 [2024-07-23 01:51:25.409253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.361 qpair failed and we were unable to recover it. 00:30:12.361 [2024-07-23 01:51:25.409413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.361 [2024-07-23 01:51:25.409609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.361 [2024-07-23 01:51:25.409640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.361 qpair failed and we were unable to recover it. 00:30:12.361 [2024-07-23 01:51:25.409802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.361 [2024-07-23 01:51:25.409980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.361 [2024-07-23 01:51:25.410012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.361 qpair failed and we were unable to recover it. 
00:30:12.361 [2024-07-23 01:51:25.410220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.361 [2024-07-23 01:51:25.410427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.361 [2024-07-23 01:51:25.410453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.361 qpair failed and we were unable to recover it. 00:30:12.361 [2024-07-23 01:51:25.410670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.361 [2024-07-23 01:51:25.410855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.361 [2024-07-23 01:51:25.410882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.361 qpair failed and we were unable to recover it. 00:30:12.361 [2024-07-23 01:51:25.411134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.361 [2024-07-23 01:51:25.411412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.361 [2024-07-23 01:51:25.411459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.361 qpair failed and we were unable to recover it. 00:30:12.361 [2024-07-23 01:51:25.411618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.361 [2024-07-23 01:51:25.411803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.361 [2024-07-23 01:51:25.411831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.361 qpair failed and we were unable to recover it. 
00:30:12.361 [2024-07-23 01:51:25.411999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.361 [2024-07-23 01:51:25.412164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.361 [2024-07-23 01:51:25.412188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.361 qpair failed and we were unable to recover it. 00:30:12.361 [2024-07-23 01:51:25.412378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.361 [2024-07-23 01:51:25.412535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.361 [2024-07-23 01:51:25.412562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.361 qpair failed and we were unable to recover it. 00:30:12.361 [2024-07-23 01:51:25.412797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.361 [2024-07-23 01:51:25.412961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.362 [2024-07-23 01:51:25.413030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.362 qpair failed and we were unable to recover it. 00:30:12.362 [2024-07-23 01:51:25.413214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.362 [2024-07-23 01:51:25.413366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.362 [2024-07-23 01:51:25.413393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.362 qpair failed and we were unable to recover it. 
00:30:12.362 [2024-07-23 01:51:25.413582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.362 [2024-07-23 01:51:25.413756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.362 [2024-07-23 01:51:25.413796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.362 qpair failed and we were unable to recover it. 00:30:12.362 [2024-07-23 01:51:25.413956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.362 [2024-07-23 01:51:25.414120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.362 [2024-07-23 01:51:25.414144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.362 qpair failed and we were unable to recover it. 00:30:12.362 [2024-07-23 01:51:25.414356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.362 [2024-07-23 01:51:25.414543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.362 [2024-07-23 01:51:25.414570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.362 qpair failed and we were unable to recover it. 00:30:12.362 [2024-07-23 01:51:25.414774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.362 [2024-07-23 01:51:25.414920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.362 [2024-07-23 01:51:25.414947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.362 qpair failed and we were unable to recover it. 
00:30:12.362 [2024-07-23 01:51:25.415140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.362 [2024-07-23 01:51:25.415276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.362 [2024-07-23 01:51:25.415300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.362 qpair failed and we were unable to recover it. 00:30:12.362 [2024-07-23 01:51:25.415501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.362 [2024-07-23 01:51:25.415698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.362 [2024-07-23 01:51:25.415730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.362 qpair failed and we were unable to recover it. 00:30:12.362 [2024-07-23 01:51:25.415891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.362 [2024-07-23 01:51:25.416039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.362 [2024-07-23 01:51:25.416065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.362 qpair failed and we were unable to recover it. 00:30:12.362 [2024-07-23 01:51:25.416269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.362 [2024-07-23 01:51:25.416462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.362 [2024-07-23 01:51:25.416487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.362 qpair failed and we were unable to recover it. 
00:30:12.362 [2024-07-23 01:51:25.416654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.362 [2024-07-23 01:51:25.416841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.362 [2024-07-23 01:51:25.416868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.362 qpair failed and we were unable to recover it. 00:30:12.362 [2024-07-23 01:51:25.417077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.362 [2024-07-23 01:51:25.417208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.362 [2024-07-23 01:51:25.417250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.362 qpair failed and we were unable to recover it. 00:30:12.362 [2024-07-23 01:51:25.417443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.362 [2024-07-23 01:51:25.417569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.362 [2024-07-23 01:51:25.417593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.362 qpair failed and we were unable to recover it. 00:30:12.362 [2024-07-23 01:51:25.417727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.362 [2024-07-23 01:51:25.417868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.362 [2024-07-23 01:51:25.417892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.362 qpair failed and we were unable to recover it. 
00:30:12.362 [2024-07-23 01:51:25.418060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.362 [2024-07-23 01:51:25.418218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.362 [2024-07-23 01:51:25.418242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.362 qpair failed and we were unable to recover it. 00:30:12.362 [2024-07-23 01:51:25.418439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.362 [2024-07-23 01:51:25.418643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.362 [2024-07-23 01:51:25.418671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.362 qpair failed and we were unable to recover it. 00:30:12.362 [2024-07-23 01:51:25.418873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.362 [2024-07-23 01:51:25.419018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.362 [2024-07-23 01:51:25.419045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.362 qpair failed and we were unable to recover it. 00:30:12.362 [2024-07-23 01:51:25.419229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.634 [2024-07-23 01:51:25.419456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.634 [2024-07-23 01:51:25.419507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.634 qpair failed and we were unable to recover it. 
00:30:12.634 [2024-07-23 01:51:25.419655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.634 [2024-07-23 01:51:25.419799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.634 [2024-07-23 01:51:25.419826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.634 qpair failed and we were unable to recover it. 00:30:12.634 [2024-07-23 01:51:25.420011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.634 [2024-07-23 01:51:25.420192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.634 [2024-07-23 01:51:25.420219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.634 qpair failed and we were unable to recover it. 00:30:12.634 [2024-07-23 01:51:25.420407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.634 [2024-07-23 01:51:25.420570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.634 [2024-07-23 01:51:25.420594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.634 qpair failed and we were unable to recover it. 00:30:12.634 [2024-07-23 01:51:25.420777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.634 [2024-07-23 01:51:25.420945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.634 [2024-07-23 01:51:25.420972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.634 qpair failed and we were unable to recover it. 
00:30:12.634 [2024-07-23 01:51:25.421163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.634 [2024-07-23 01:51:25.421328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.634 [2024-07-23 01:51:25.421352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.634 qpair failed and we were unable to recover it. 00:30:12.634 [2024-07-23 01:51:25.421513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.634 [2024-07-23 01:51:25.421703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.634 [2024-07-23 01:51:25.421745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.635 qpair failed and we were unable to recover it. 00:30:12.635 [2024-07-23 01:51:25.421902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.635 [2024-07-23 01:51:25.422080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.635 [2024-07-23 01:51:25.422106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.635 qpair failed and we were unable to recover it. 00:30:12.635 [2024-07-23 01:51:25.422261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.635 [2024-07-23 01:51:25.422448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.635 [2024-07-23 01:51:25.422475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.635 qpair failed and we were unable to recover it. 
00:30:12.635 [2024-07-23 01:51:25.422669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.635 [2024-07-23 01:51:25.422859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.635 [2024-07-23 01:51:25.422884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.635 qpair failed and we were unable to recover it. 00:30:12.635 [2024-07-23 01:51:25.423077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.635 [2024-07-23 01:51:25.423214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.635 [2024-07-23 01:51:25.423238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.635 qpair failed and we were unable to recover it. 00:30:12.635 [2024-07-23 01:51:25.423419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.635 [2024-07-23 01:51:25.423628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.635 [2024-07-23 01:51:25.423653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.635 qpair failed and we were unable to recover it. 00:30:12.635 [2024-07-23 01:51:25.423834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.635 [2024-07-23 01:51:25.424004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.635 [2024-07-23 01:51:25.424028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.635 qpair failed and we were unable to recover it. 
00:30:12.635 [2024-07-23 01:51:25.424192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.635 [2024-07-23 01:51:25.424328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.635 [2024-07-23 01:51:25.424351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.635 qpair failed and we were unable to recover it. 00:30:12.635 [2024-07-23 01:51:25.424541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.635 [2024-07-23 01:51:25.424733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.635 [2024-07-23 01:51:25.424758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.635 qpair failed and we were unable to recover it. 00:30:12.635 [2024-07-23 01:51:25.424941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.635 [2024-07-23 01:51:25.425254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.635 [2024-07-23 01:51:25.425305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.635 qpair failed and we were unable to recover it. 00:30:12.635 [2024-07-23 01:51:25.425515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.635 [2024-07-23 01:51:25.425677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.635 [2024-07-23 01:51:25.425704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.635 qpair failed and we were unable to recover it. 
00:30:12.635 [2024-07-23 01:51:25.425855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.635 [2024-07-23 01:51:25.426044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.635 [2024-07-23 01:51:25.426069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.635 qpair failed and we were unable to recover it. 00:30:12.635 [2024-07-23 01:51:25.426256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.635 [2024-07-23 01:51:25.426431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.635 [2024-07-23 01:51:25.426457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.635 qpair failed and we were unable to recover it. 00:30:12.635 [2024-07-23 01:51:25.426634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.635 [2024-07-23 01:51:25.426839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.635 [2024-07-23 01:51:25.426880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.635 qpair failed and we were unable to recover it. 00:30:12.635 [2024-07-23 01:51:25.427024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.635 [2024-07-23 01:51:25.427217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.635 [2024-07-23 01:51:25.427241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.635 qpair failed and we were unable to recover it. 
00:30:12.635 [2024-07-23 01:51:25.427429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.635 [2024-07-23 01:51:25.427595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.635 [2024-07-23 01:51:25.427629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.635 qpair failed and we were unable to recover it. 00:30:12.635 [2024-07-23 01:51:25.427811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.635 [2024-07-23 01:51:25.427952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.635 [2024-07-23 01:51:25.427999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.635 qpair failed and we were unable to recover it. 00:30:12.635 [2024-07-23 01:51:25.428170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.635 [2024-07-23 01:51:25.428436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.635 [2024-07-23 01:51:25.428463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.635 qpair failed and we were unable to recover it. 00:30:12.635 [2024-07-23 01:51:25.428691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.635 [2024-07-23 01:51:25.428898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.635 [2024-07-23 01:51:25.428926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.635 qpair failed and we were unable to recover it. 
00:30:12.635 [2024-07-23 01:51:25.429111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.635 [2024-07-23 01:51:25.429291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.635 [2024-07-23 01:51:25.429317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.635 qpair failed and we were unable to recover it. 00:30:12.635 [2024-07-23 01:51:25.429505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.635 [2024-07-23 01:51:25.429689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.635 [2024-07-23 01:51:25.429730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.635 qpair failed and we were unable to recover it. 00:30:12.635 [2024-07-23 01:51:25.429948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.635 [2024-07-23 01:51:25.430258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.635 [2024-07-23 01:51:25.430313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.635 qpair failed and we were unable to recover it. 00:30:12.635 [2024-07-23 01:51:25.430534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.635 [2024-07-23 01:51:25.430698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.635 [2024-07-23 01:51:25.430722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.635 qpair failed and we were unable to recover it. 
00:30:12.638 [2024-07-23 01:51:25.462446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.638 [2024-07-23 01:51:25.462665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.638 [2024-07-23 01:51:25.462693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.638 qpair failed and we were unable to recover it. 00:30:12.638 [2024-07-23 01:51:25.462876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.638 [2024-07-23 01:51:25.463067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.638 [2024-07-23 01:51:25.463091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.638 qpair failed and we were unable to recover it. 00:30:12.638 [2024-07-23 01:51:25.463290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.638 [2024-07-23 01:51:25.463420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.638 [2024-07-23 01:51:25.463444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.638 qpair failed and we were unable to recover it. 00:30:12.638 [2024-07-23 01:51:25.463653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.638 [2024-07-23 01:51:25.463834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.639 [2024-07-23 01:51:25.463862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.639 qpair failed and we were unable to recover it. 
00:30:12.639 [2024-07-23 01:51:25.464084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.639 [2024-07-23 01:51:25.464215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.639 [2024-07-23 01:51:25.464239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.639 qpair failed and we were unable to recover it. 00:30:12.639 [2024-07-23 01:51:25.464402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.639 [2024-07-23 01:51:25.464620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.639 [2024-07-23 01:51:25.464648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.639 qpair failed and we were unable to recover it. 00:30:12.639 [2024-07-23 01:51:25.464830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.639 [2024-07-23 01:51:25.465007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.639 [2024-07-23 01:51:25.465036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.639 qpair failed and we were unable to recover it. 00:30:12.639 [2024-07-23 01:51:25.465195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.639 [2024-07-23 01:51:25.465344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.639 [2024-07-23 01:51:25.465373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.639 qpair failed and we were unable to recover it. 
00:30:12.639 [2024-07-23 01:51:25.465551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.639 [2024-07-23 01:51:25.465741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.639 [2024-07-23 01:51:25.465766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.639 qpair failed and we were unable to recover it. 00:30:12.639 [2024-07-23 01:51:25.465901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.639 [2024-07-23 01:51:25.466066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.639 [2024-07-23 01:51:25.466091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.639 qpair failed and we were unable to recover it. 00:30:12.639 [2024-07-23 01:51:25.466379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.639 [2024-07-23 01:51:25.466577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.639 [2024-07-23 01:51:25.466604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.639 qpair failed and we were unable to recover it. 00:30:12.639 [2024-07-23 01:51:25.466782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.639 [2024-07-23 01:51:25.466952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.639 [2024-07-23 01:51:25.466992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.639 qpair failed and we were unable to recover it. 
00:30:12.639 [2024-07-23 01:51:25.467181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.639 [2024-07-23 01:51:25.467327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.639 [2024-07-23 01:51:25.467351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.639 qpair failed and we were unable to recover it. 00:30:12.639 [2024-07-23 01:51:25.467537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.639 [2024-07-23 01:51:25.467697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.639 [2024-07-23 01:51:25.467722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.639 qpair failed and we were unable to recover it. 00:30:12.639 [2024-07-23 01:51:25.467862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.639 [2024-07-23 01:51:25.468091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.639 [2024-07-23 01:51:25.468138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.639 qpair failed and we were unable to recover it. 00:30:12.639 [2024-07-23 01:51:25.468350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.639 [2024-07-23 01:51:25.468503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.639 [2024-07-23 01:51:25.468532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.639 qpair failed and we were unable to recover it. 
00:30:12.639 [2024-07-23 01:51:25.468756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.639 [2024-07-23 01:51:25.468892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.639 [2024-07-23 01:51:25.468916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.639 qpair failed and we were unable to recover it. 00:30:12.639 [2024-07-23 01:51:25.469083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.639 [2024-07-23 01:51:25.469236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.639 [2024-07-23 01:51:25.469260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.639 qpair failed and we were unable to recover it. 00:30:12.639 [2024-07-23 01:51:25.469454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.639 [2024-07-23 01:51:25.469670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.639 [2024-07-23 01:51:25.469697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.639 qpair failed and we were unable to recover it. 00:30:12.639 [2024-07-23 01:51:25.469878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.639 [2024-07-23 01:51:25.470058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.639 [2024-07-23 01:51:25.470082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.639 qpair failed and we were unable to recover it. 
00:30:12.639 [2024-07-23 01:51:25.470248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.639 [2024-07-23 01:51:25.470412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.639 [2024-07-23 01:51:25.470436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.639 qpair failed and we were unable to recover it. 00:30:12.639 [2024-07-23 01:51:25.470639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.639 [2024-07-23 01:51:25.470815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.639 [2024-07-23 01:51:25.470842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.639 qpair failed and we were unable to recover it. 00:30:12.639 [2024-07-23 01:51:25.471073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.639 [2024-07-23 01:51:25.471249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.639 [2024-07-23 01:51:25.471273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.639 qpair failed and we were unable to recover it. 00:30:12.639 [2024-07-23 01:51:25.471439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.639 [2024-07-23 01:51:25.471604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.639 [2024-07-23 01:51:25.471635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.639 qpair failed and we were unable to recover it. 
00:30:12.639 [2024-07-23 01:51:25.471818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.639 [2024-07-23 01:51:25.471999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.639 [2024-07-23 01:51:25.472027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.639 qpair failed and we were unable to recover it. 00:30:12.639 [2024-07-23 01:51:25.472186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.639 [2024-07-23 01:51:25.472350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.639 [2024-07-23 01:51:25.472374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.639 qpair failed and we were unable to recover it. 00:30:12.639 [2024-07-23 01:51:25.472510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.640 [2024-07-23 01:51:25.472697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.640 [2024-07-23 01:51:25.472723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.640 qpair failed and we were unable to recover it. 00:30:12.640 [2024-07-23 01:51:25.472904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.640 [2024-07-23 01:51:25.473063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.640 [2024-07-23 01:51:25.473087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.640 qpair failed and we were unable to recover it. 
00:30:12.640 [2024-07-23 01:51:25.473320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.640 [2024-07-23 01:51:25.473482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.640 [2024-07-23 01:51:25.473507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.640 qpair failed and we were unable to recover it. 00:30:12.640 [2024-07-23 01:51:25.473664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.640 [2024-07-23 01:51:25.473800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.640 [2024-07-23 01:51:25.473825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.640 qpair failed and we were unable to recover it. 00:30:12.640 [2024-07-23 01:51:25.473988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.640 [2024-07-23 01:51:25.474150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.640 [2024-07-23 01:51:25.474176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.640 qpair failed and we were unable to recover it. 00:30:12.640 [2024-07-23 01:51:25.474359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.640 [2024-07-23 01:51:25.474562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.640 [2024-07-23 01:51:25.474589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.640 qpair failed and we were unable to recover it. 
00:30:12.640 [2024-07-23 01:51:25.474754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.640 [2024-07-23 01:51:25.474914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.640 [2024-07-23 01:51:25.474973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.640 qpair failed and we were unable to recover it. 00:30:12.640 [2024-07-23 01:51:25.475201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.640 [2024-07-23 01:51:25.475356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.640 [2024-07-23 01:51:25.475382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.640 qpair failed and we were unable to recover it. 00:30:12.640 [2024-07-23 01:51:25.475530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.640 [2024-07-23 01:51:25.475750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.640 [2024-07-23 01:51:25.475775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.640 qpair failed and we were unable to recover it. 00:30:12.640 [2024-07-23 01:51:25.475915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.640 [2024-07-23 01:51:25.476075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.640 [2024-07-23 01:51:25.476100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.640 qpair failed and we were unable to recover it. 
00:30:12.640 [2024-07-23 01:51:25.476263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.640 [2024-07-23 01:51:25.476453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.640 [2024-07-23 01:51:25.476477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.640 qpair failed and we were unable to recover it. 00:30:12.640 [2024-07-23 01:51:25.476641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.640 [2024-07-23 01:51:25.476824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.640 [2024-07-23 01:51:25.476851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.640 qpair failed and we were unable to recover it. 00:30:12.640 [2024-07-23 01:51:25.477100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.640 [2024-07-23 01:51:25.477417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.640 [2024-07-23 01:51:25.477466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.640 qpair failed and we were unable to recover it. 00:30:12.640 [2024-07-23 01:51:25.477682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.640 [2024-07-23 01:51:25.477849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.640 [2024-07-23 01:51:25.477890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.640 qpair failed and we were unable to recover it. 
00:30:12.640 [2024-07-23 01:51:25.478091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.640 [2024-07-23 01:51:25.478313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.640 [2024-07-23 01:51:25.478369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.640 qpair failed and we were unable to recover it. 00:30:12.640 [2024-07-23 01:51:25.478553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.640 [2024-07-23 01:51:25.478725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.640 [2024-07-23 01:51:25.478750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.640 qpair failed and we were unable to recover it. 00:30:12.640 [2024-07-23 01:51:25.478888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.640 [2024-07-23 01:51:25.479061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.640 [2024-07-23 01:51:25.479088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.640 qpair failed and we were unable to recover it. 00:30:12.640 [2024-07-23 01:51:25.479263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.640 [2024-07-23 01:51:25.479393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.640 [2024-07-23 01:51:25.479419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.640 qpair failed and we were unable to recover it. 
00:30:12.640 [2024-07-23 01:51:25.479600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.640 [2024-07-23 01:51:25.479756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.640 [2024-07-23 01:51:25.479784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.640 qpair failed and we were unable to recover it. 00:30:12.640 [2024-07-23 01:51:25.479947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.640 [2024-07-23 01:51:25.480110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.640 [2024-07-23 01:51:25.480150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.640 qpair failed and we were unable to recover it. 00:30:12.640 [2024-07-23 01:51:25.480368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.640 [2024-07-23 01:51:25.480530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.640 [2024-07-23 01:51:25.480571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.640 qpair failed and we were unable to recover it. 00:30:12.640 [2024-07-23 01:51:25.480745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.640 [2024-07-23 01:51:25.481003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.640 [2024-07-23 01:51:25.481052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.640 qpair failed and we were unable to recover it. 
00:30:12.640 [2024-07-23 01:51:25.481235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.640 [2024-07-23 01:51:25.481389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.640 [2024-07-23 01:51:25.481450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.640 qpair failed and we were unable to recover it. 00:30:12.640 [2024-07-23 01:51:25.481634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.640 [2024-07-23 01:51:25.481773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.640 [2024-07-23 01:51:25.481797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.640 qpair failed and we were unable to recover it. 00:30:12.640 [2024-07-23 01:51:25.481985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.640 [2024-07-23 01:51:25.482200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.640 [2024-07-23 01:51:25.482225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.640 qpair failed and we were unable to recover it. 00:30:12.640 [2024-07-23 01:51:25.482418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.640 [2024-07-23 01:51:25.482553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.640 [2024-07-23 01:51:25.482577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.640 qpair failed and we were unable to recover it. 
00:30:12.640 [2024-07-23 01:51:25.482809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.640 [2024-07-23 01:51:25.482998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.640 [2024-07-23 01:51:25.483024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.640 qpair failed and we were unable to recover it. 00:30:12.640 [2024-07-23 01:51:25.483203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.640 [2024-07-23 01:51:25.483335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.640 [2024-07-23 01:51:25.483361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.641 qpair failed and we were unable to recover it. 00:30:12.641 [2024-07-23 01:51:25.483563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.641 [2024-07-23 01:51:25.483701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.641 [2024-07-23 01:51:25.483726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.641 qpair failed and we were unable to recover it. 00:30:12.641 [2024-07-23 01:51:25.483935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.641 [2024-07-23 01:51:25.484213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.641 [2024-07-23 01:51:25.484277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.641 qpair failed and we were unable to recover it. 
00:30:12.641 [2024-07-23 01:51:25.484442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.641 [2024-07-23 01:51:25.484600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.641 [2024-07-23 01:51:25.484658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.641 qpair failed and we were unable to recover it. 00:30:12.641 [2024-07-23 01:51:25.484823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.641 [2024-07-23 01:51:25.485045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.641 [2024-07-23 01:51:25.485093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.641 qpair failed and we were unable to recover it. 00:30:12.641 [2024-07-23 01:51:25.485272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.641 [2024-07-23 01:51:25.485436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.641 [2024-07-23 01:51:25.485460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.641 qpair failed and we were unable to recover it. 00:30:12.641 [2024-07-23 01:51:25.485655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.641 [2024-07-23 01:51:25.485798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.641 [2024-07-23 01:51:25.485822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.641 qpair failed and we were unable to recover it. 
00:30:12.644 [2024-07-23 01:51:25.519152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.644 [2024-07-23 01:51:25.519334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.644 [2024-07-23 01:51:25.519360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.644 qpair failed and we were unable to recover it. 00:30:12.644 [2024-07-23 01:51:25.519519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.644 [2024-07-23 01:51:25.519690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.644 [2024-07-23 01:51:25.519731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.644 qpair failed and we were unable to recover it. 00:30:12.644 [2024-07-23 01:51:25.519885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.644 [2024-07-23 01:51:25.520093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.644 [2024-07-23 01:51:25.520120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.644 qpair failed and we were unable to recover it. 00:30:12.644 [2024-07-23 01:51:25.520295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.644 [2024-07-23 01:51:25.520475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.644 [2024-07-23 01:51:25.520502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.644 qpair failed and we were unable to recover it. 
00:30:12.644 [2024-07-23 01:51:25.520677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.644 [2024-07-23 01:51:25.520855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.644 [2024-07-23 01:51:25.520883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.644 qpair failed and we were unable to recover it. 00:30:12.644 [2024-07-23 01:51:25.521063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.644 [2024-07-23 01:51:25.521194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.644 [2024-07-23 01:51:25.521233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.644 qpair failed and we were unable to recover it. 00:30:12.644 [2024-07-23 01:51:25.521444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.644 [2024-07-23 01:51:25.521635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.644 [2024-07-23 01:51:25.521663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.644 qpair failed and we were unable to recover it. 00:30:12.644 [2024-07-23 01:51:25.521824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.644 [2024-07-23 01:51:25.522038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.644 [2024-07-23 01:51:25.522062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.644 qpair failed and we were unable to recover it. 
00:30:12.644 [2024-07-23 01:51:25.522222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.644 [2024-07-23 01:51:25.522434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.644 [2024-07-23 01:51:25.522458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.644 qpair failed and we were unable to recover it. 00:30:12.644 [2024-07-23 01:51:25.522639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.644 [2024-07-23 01:51:25.522782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.644 [2024-07-23 01:51:25.522811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.644 qpair failed and we were unable to recover it. 00:30:12.644 [2024-07-23 01:51:25.522980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.644 [2024-07-23 01:51:25.523166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.644 [2024-07-23 01:51:25.523218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.644 qpair failed and we were unable to recover it. 00:30:12.644 [2024-07-23 01:51:25.523399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.644 [2024-07-23 01:51:25.523589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.644 [2024-07-23 01:51:25.523619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.644 qpair failed and we were unable to recover it. 
00:30:12.644 [2024-07-23 01:51:25.523827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.644 [2024-07-23 01:51:25.524010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.644 [2024-07-23 01:51:25.524034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.644 qpair failed and we were unable to recover it. 00:30:12.644 [2024-07-23 01:51:25.524225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.644 [2024-07-23 01:51:25.524412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.644 [2024-07-23 01:51:25.524439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.644 qpair failed and we were unable to recover it. 00:30:12.644 [2024-07-23 01:51:25.524625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.644 [2024-07-23 01:51:25.524770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.644 [2024-07-23 01:51:25.524797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.644 qpair failed and we were unable to recover it. 00:30:12.644 [2024-07-23 01:51:25.525011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.644 [2024-07-23 01:51:25.525273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.644 [2024-07-23 01:51:25.525323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.644 qpair failed and we were unable to recover it. 
00:30:12.644 [2024-07-23 01:51:25.525468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.644 [2024-07-23 01:51:25.525622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.645 [2024-07-23 01:51:25.525649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.645 qpair failed and we were unable to recover it. 00:30:12.645 [2024-07-23 01:51:25.525815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.645 [2024-07-23 01:51:25.525984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.645 [2024-07-23 01:51:25.526008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.645 qpair failed and we were unable to recover it. 00:30:12.645 [2024-07-23 01:51:25.526272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.645 [2024-07-23 01:51:25.526446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.645 [2024-07-23 01:51:25.526472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.645 qpair failed and we were unable to recover it. 00:30:12.645 [2024-07-23 01:51:25.526687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.645 [2024-07-23 01:51:25.526898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.645 [2024-07-23 01:51:25.526925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.645 qpair failed and we were unable to recover it. 
00:30:12.645 [2024-07-23 01:51:25.527110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.645 [2024-07-23 01:51:25.527331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.645 [2024-07-23 01:51:25.527379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.645 qpair failed and we were unable to recover it. 00:30:12.645 [2024-07-23 01:51:25.527563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.645 [2024-07-23 01:51:25.527729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.645 [2024-07-23 01:51:25.527770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.645 qpair failed and we were unable to recover it. 00:30:12.645 [2024-07-23 01:51:25.527960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.645 [2024-07-23 01:51:25.528099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.645 [2024-07-23 01:51:25.528124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.645 qpair failed and we were unable to recover it. 00:30:12.645 [2024-07-23 01:51:25.528314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.645 [2024-07-23 01:51:25.528491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.645 [2024-07-23 01:51:25.528518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.645 qpair failed and we were unable to recover it. 
00:30:12.645 [2024-07-23 01:51:25.528706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.645 [2024-07-23 01:51:25.528840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.645 [2024-07-23 01:51:25.528864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.645 qpair failed and we were unable to recover it. 00:30:12.645 [2024-07-23 01:51:25.529066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.645 [2024-07-23 01:51:25.529253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.645 [2024-07-23 01:51:25.529278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.645 qpair failed and we were unable to recover it. 00:30:12.645 [2024-07-23 01:51:25.529417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.645 [2024-07-23 01:51:25.529624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.645 [2024-07-23 01:51:25.529649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.645 qpair failed and we were unable to recover it. 00:30:12.645 [2024-07-23 01:51:25.529817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.645 [2024-07-23 01:51:25.530052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.645 [2024-07-23 01:51:25.530098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.645 qpair failed and we were unable to recover it. 
00:30:12.645 [2024-07-23 01:51:25.530287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.645 [2024-07-23 01:51:25.530476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.645 [2024-07-23 01:51:25.530500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.645 qpair failed and we were unable to recover it. 00:30:12.645 [2024-07-23 01:51:25.530667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.645 [2024-07-23 01:51:25.530838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.645 [2024-07-23 01:51:25.530865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.645 qpair failed and we were unable to recover it. 00:30:12.645 [2024-07-23 01:51:25.531112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.645 [2024-07-23 01:51:25.531364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.645 [2024-07-23 01:51:25.531414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.645 qpair failed and we were unable to recover it. 00:30:12.645 [2024-07-23 01:51:25.531601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.645 [2024-07-23 01:51:25.531747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.645 [2024-07-23 01:51:25.531773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.645 qpair failed and we were unable to recover it. 
00:30:12.645 [2024-07-23 01:51:25.531916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.645 [2024-07-23 01:51:25.532106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.645 [2024-07-23 01:51:25.532132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.645 qpair failed and we were unable to recover it. 00:30:12.645 [2024-07-23 01:51:25.532342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.645 [2024-07-23 01:51:25.532523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.645 [2024-07-23 01:51:25.532549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.645 qpair failed and we were unable to recover it. 00:30:12.645 [2024-07-23 01:51:25.532730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.645 [2024-07-23 01:51:25.532884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.645 [2024-07-23 01:51:25.532911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.645 qpair failed and we were unable to recover it. 00:30:12.645 [2024-07-23 01:51:25.533071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.645 [2024-07-23 01:51:25.533255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.645 [2024-07-23 01:51:25.533299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.645 qpair failed and we were unable to recover it. 
00:30:12.645 [2024-07-23 01:51:25.533507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.645 [2024-07-23 01:51:25.533688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.645 [2024-07-23 01:51:25.533716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.645 qpair failed and we were unable to recover it. 00:30:12.645 [2024-07-23 01:51:25.533870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.645 [2024-07-23 01:51:25.534047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.645 [2024-07-23 01:51:25.534071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.645 qpair failed and we were unable to recover it. 00:30:12.645 [2024-07-23 01:51:25.534243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.645 [2024-07-23 01:51:25.534398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.645 [2024-07-23 01:51:25.534426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.645 qpair failed and we were unable to recover it. 00:30:12.645 [2024-07-23 01:51:25.534588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.645 [2024-07-23 01:51:25.534803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.645 [2024-07-23 01:51:25.534831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.645 qpair failed and we were unable to recover it. 
00:30:12.645 [2024-07-23 01:51:25.535046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.645 [2024-07-23 01:51:25.535256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.645 [2024-07-23 01:51:25.535304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.645 qpair failed and we were unable to recover it. 00:30:12.645 [2024-07-23 01:51:25.535519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.645 [2024-07-23 01:51:25.535682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.645 [2024-07-23 01:51:25.535707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.645 qpair failed and we were unable to recover it. 00:30:12.645 [2024-07-23 01:51:25.535872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.645 [2024-07-23 01:51:25.536080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.645 [2024-07-23 01:51:25.536107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.645 qpair failed and we were unable to recover it. 00:30:12.645 [2024-07-23 01:51:25.536261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.645 [2024-07-23 01:51:25.536450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.645 [2024-07-23 01:51:25.536476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.645 qpair failed and we were unable to recover it. 
00:30:12.645 [2024-07-23 01:51:25.536686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.646 [2024-07-23 01:51:25.536861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.646 [2024-07-23 01:51:25.536888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.646 qpair failed and we were unable to recover it. 00:30:12.646 [2024-07-23 01:51:25.537076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.646 [2024-07-23 01:51:25.537214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.646 [2024-07-23 01:51:25.537238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.646 qpair failed and we were unable to recover it. 00:30:12.646 [2024-07-23 01:51:25.537403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.646 [2024-07-23 01:51:25.537561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.646 [2024-07-23 01:51:25.537588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.646 qpair failed and we were unable to recover it. 00:30:12.646 [2024-07-23 01:51:25.537781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.646 [2024-07-23 01:51:25.537969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.646 [2024-07-23 01:51:25.537993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.646 qpair failed and we were unable to recover it. 
00:30:12.646 [2024-07-23 01:51:25.538155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.646 [2024-07-23 01:51:25.538364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.646 [2024-07-23 01:51:25.538418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.646 qpair failed and we were unable to recover it. 00:30:12.646 [2024-07-23 01:51:25.538595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.646 [2024-07-23 01:51:25.538778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.646 [2024-07-23 01:51:25.538805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.646 qpair failed and we were unable to recover it. 00:30:12.646 [2024-07-23 01:51:25.539008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.646 [2024-07-23 01:51:25.539289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.646 [2024-07-23 01:51:25.539339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.646 qpair failed and we were unable to recover it. 00:30:12.646 [2024-07-23 01:51:25.539532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.646 [2024-07-23 01:51:25.539698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.646 [2024-07-23 01:51:25.539723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.646 qpair failed and we were unable to recover it. 
00:30:12.646 [2024-07-23 01:51:25.539924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.646 [2024-07-23 01:51:25.540093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.646 [2024-07-23 01:51:25.540118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.646 qpair failed and we were unable to recover it. 00:30:12.646 [2024-07-23 01:51:25.540279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.646 [2024-07-23 01:51:25.540411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.646 [2024-07-23 01:51:25.540434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.646 qpair failed and we were unable to recover it. 00:30:12.646 [2024-07-23 01:51:25.540625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.646 [2024-07-23 01:51:25.540795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.646 [2024-07-23 01:51:25.540821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.646 qpair failed and we were unable to recover it. 00:30:12.646 [2024-07-23 01:51:25.541031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.646 [2024-07-23 01:51:25.541225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.646 [2024-07-23 01:51:25.541274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.646 qpair failed and we were unable to recover it. 
00:30:12.646 [2024-07-23 01:51:25.541453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.646 [2024-07-23 01:51:25.541662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.646 [2024-07-23 01:51:25.541690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.646 qpair failed and we were unable to recover it. 00:30:12.646 [2024-07-23 01:51:25.541882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.646 [2024-07-23 01:51:25.542061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.646 [2024-07-23 01:51:25.542088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.646 qpair failed and we were unable to recover it. 00:30:12.646 [2024-07-23 01:51:25.542242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.646 [2024-07-23 01:51:25.542485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.646 [2024-07-23 01:51:25.542537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.646 qpair failed and we were unable to recover it. 00:30:12.646 [2024-07-23 01:51:25.542738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.646 [2024-07-23 01:51:25.542904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.646 [2024-07-23 01:51:25.542944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.646 qpair failed and we were unable to recover it. 
00:30:12.646 [2024-07-23 01:51:25.543160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.646 [2024-07-23 01:51:25.543320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.646 [2024-07-23 01:51:25.543350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.646 qpair failed and we were unable to recover it. 00:30:12.646 [2024-07-23 01:51:25.543538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.646 [2024-07-23 01:51:25.543752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.646 [2024-07-23 01:51:25.543780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.646 qpair failed and we were unable to recover it. 00:30:12.646 [2024-07-23 01:51:25.544072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.646 [2024-07-23 01:51:25.544339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.646 [2024-07-23 01:51:25.544363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.646 qpair failed and we were unable to recover it. 00:30:12.646 [2024-07-23 01:51:25.544513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.646 [2024-07-23 01:51:25.544735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.646 [2024-07-23 01:51:25.544763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.646 qpair failed and we were unable to recover it. 
00:30:12.646 [2024-07-23 01:51:25.544971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.646 [2024-07-23 01:51:25.545147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.646 [2024-07-23 01:51:25.545174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.646 qpair failed and we were unable to recover it. 00:30:12.646 [2024-07-23 01:51:25.545366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.646 [2024-07-23 01:51:25.545548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.646 [2024-07-23 01:51:25.545576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.646 qpair failed and we were unable to recover it. 00:30:12.646 [2024-07-23 01:51:25.545748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.646 [2024-07-23 01:51:25.545959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.646 [2024-07-23 01:51:25.545985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.646 qpair failed and we were unable to recover it. 00:30:12.646 [2024-07-23 01:51:25.546159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.646 [2024-07-23 01:51:25.546351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.646 [2024-07-23 01:51:25.546375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.646 qpair failed and we were unable to recover it. 
00:30:12.646 [2024-07-23 01:51:25.546582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.646 [2024-07-23 01:51:25.546769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.646 [2024-07-23 01:51:25.546793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.646 qpair failed and we were unable to recover it. 00:30:12.646 [2024-07-23 01:51:25.546989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.646 [2024-07-23 01:51:25.547126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.646 [2024-07-23 01:51:25.547150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.646 qpair failed and we were unable to recover it. 00:30:12.646 [2024-07-23 01:51:25.547288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.646 [2024-07-23 01:51:25.547503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.646 [2024-07-23 01:51:25.547530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.646 qpair failed and we were unable to recover it. 00:30:12.646 [2024-07-23 01:51:25.547689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.646 [2024-07-23 01:51:25.547898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.646 [2024-07-23 01:51:25.547925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.646 qpair failed and we were unable to recover it. 
00:30:12.647 [2024-07-23 01:51:25.548119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.647 [2024-07-23 01:51:25.548282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.647 [2024-07-23 01:51:25.548321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.647 qpair failed and we were unable to recover it. 00:30:12.647 [2024-07-23 01:51:25.548529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.647 [2024-07-23 01:51:25.548728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.647 [2024-07-23 01:51:25.548756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.647 qpair failed and we were unable to recover it. 00:30:12.647 [2024-07-23 01:51:25.549018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.647 [2024-07-23 01:51:25.549354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.647 [2024-07-23 01:51:25.549408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.647 qpair failed and we were unable to recover it. 00:30:12.647 [2024-07-23 01:51:25.549594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.647 [2024-07-23 01:51:25.549771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.647 [2024-07-23 01:51:25.549796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.647 qpair failed and we were unable to recover it. 
00:30:12.647 [2024-07-23 01:51:25.550000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.647 [2024-07-23 01:51:25.550203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.647 [2024-07-23 01:51:25.550265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.647 qpair failed and we were unable to recover it. 00:30:12.647 [2024-07-23 01:51:25.550429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.647 [2024-07-23 01:51:25.550594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.647 [2024-07-23 01:51:25.550629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.647 qpair failed and we were unable to recover it. 00:30:12.647 [2024-07-23 01:51:25.550862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.647 [2024-07-23 01:51:25.551101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.647 [2024-07-23 01:51:25.551152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.647 qpair failed and we were unable to recover it. 00:30:12.647 [2024-07-23 01:51:25.551364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.647 [2024-07-23 01:51:25.551540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.647 [2024-07-23 01:51:25.551567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.647 qpair failed and we were unable to recover it. 
00:30:12.647 [2024-07-23 01:51:25.551754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.647 [2024-07-23 01:51:25.551973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.647 [2024-07-23 01:51:25.552023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.647 qpair failed and we were unable to recover it. 00:30:12.647 [2024-07-23 01:51:25.552219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.647 [2024-07-23 01:51:25.552429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.647 [2024-07-23 01:51:25.552456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.647 qpair failed and we were unable to recover it. 00:30:12.647 [2024-07-23 01:51:25.552620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.647 [2024-07-23 01:51:25.552812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.647 [2024-07-23 01:51:25.552836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.647 qpair failed and we were unable to recover it. 00:30:12.647 [2024-07-23 01:51:25.552980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.647 [2024-07-23 01:51:25.553115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.647 [2024-07-23 01:51:25.553139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.647 qpair failed and we were unable to recover it. 
00:30:12.647 [2024-07-23 01:51:25.553366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.647 [2024-07-23 01:51:25.553552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.647 [2024-07-23 01:51:25.553577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.647 qpair failed and we were unable to recover it. 00:30:12.647 [2024-07-23 01:51:25.553750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.647 [2024-07-23 01:51:25.553916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.647 [2024-07-23 01:51:25.553940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.647 qpair failed and we were unable to recover it. 00:30:12.647 [2024-07-23 01:51:25.554095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.647 [2024-07-23 01:51:25.554282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.647 [2024-07-23 01:51:25.554309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.647 qpair failed and we were unable to recover it. 00:30:12.647 [2024-07-23 01:51:25.554526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.647 [2024-07-23 01:51:25.554687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.647 [2024-07-23 01:51:25.554729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.647 qpair failed and we were unable to recover it. 
00:30:12.647 [2024-07-23 01:51:25.554887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.647 [2024-07-23 01:51:25.555066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.647 [2024-07-23 01:51:25.555095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.647 qpair failed and we were unable to recover it. 00:30:12.647 [2024-07-23 01:51:25.555310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.647 [2024-07-23 01:51:25.555470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.647 [2024-07-23 01:51:25.555497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.647 qpair failed and we were unable to recover it. 00:30:12.647 [2024-07-23 01:51:25.555699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.647 [2024-07-23 01:51:25.555907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.647 [2024-07-23 01:51:25.555934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.647 qpair failed and we were unable to recover it. 00:30:12.647 [2024-07-23 01:51:25.556092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.647 [2024-07-23 01:51:25.556273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.647 [2024-07-23 01:51:25.556300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.647 qpair failed and we were unable to recover it. 
00:30:12.647 [2024-07-23 01:51:25.556481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.647 [2024-07-23 01:51:25.556683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.647 [2024-07-23 01:51:25.556736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.647 qpair failed and we were unable to recover it. 00:30:12.647 [2024-07-23 01:51:25.556948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.647 [2024-07-23 01:51:25.557134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.647 [2024-07-23 01:51:25.557161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.647 qpair failed and we were unable to recover it. 00:30:12.647 [2024-07-23 01:51:25.557341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.647 [2024-07-23 01:51:25.557499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.647 [2024-07-23 01:51:25.557526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.647 qpair failed and we were unable to recover it. 00:30:12.647 [2024-07-23 01:51:25.557733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.647 [2024-07-23 01:51:25.557870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.647 [2024-07-23 01:51:25.557894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.647 qpair failed and we were unable to recover it. 
00:30:12.647 [2024-07-23 01:51:25.558081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.647 [2024-07-23 01:51:25.558295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.647 [2024-07-23 01:51:25.558344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.647 qpair failed and we were unable to recover it. 00:30:12.647 [2024-07-23 01:51:25.558504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.647 [2024-07-23 01:51:25.558639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.647 [2024-07-23 01:51:25.558665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.647 qpair failed and we were unable to recover it. 00:30:12.647 [2024-07-23 01:51:25.558831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.647 [2024-07-23 01:51:25.559111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.647 [2024-07-23 01:51:25.559160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.647 qpair failed and we were unable to recover it. 00:30:12.647 [2024-07-23 01:51:25.559339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.647 [2024-07-23 01:51:25.559535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.647 [2024-07-23 01:51:25.559560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.647 qpair failed and we were unable to recover it. 
00:30:12.648 [2024-07-23 01:51:25.559704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.648 [2024-07-23 01:51:25.559871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.648 [2024-07-23 01:51:25.559897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.648 qpair failed and we were unable to recover it. 00:30:12.648 [2024-07-23 01:51:25.560077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.648 [2024-07-23 01:51:25.560226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.648 [2024-07-23 01:51:25.560250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.648 qpair failed and we were unable to recover it. 00:30:12.648 [2024-07-23 01:51:25.560438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.648 [2024-07-23 01:51:25.560659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.648 [2024-07-23 01:51:25.560684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.648 qpair failed and we were unable to recover it. 00:30:12.648 [2024-07-23 01:51:25.560849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.648 [2024-07-23 01:51:25.561013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.648 [2024-07-23 01:51:25.561040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.648 qpair failed and we were unable to recover it. 
00:30:12.648 [2024-07-23 01:51:25.561217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.648 [2024-07-23 01:51:25.561426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.648 [2024-07-23 01:51:25.561475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.648 qpair failed and we were unable to recover it. 00:30:12.648 [2024-07-23 01:51:25.561664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.648 [2024-07-23 01:51:25.561854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.648 [2024-07-23 01:51:25.561878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.648 qpair failed and we were unable to recover it. 00:30:12.648 [2024-07-23 01:51:25.562048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.648 [2024-07-23 01:51:25.562209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.648 [2024-07-23 01:51:25.562233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.648 qpair failed and we were unable to recover it. 00:30:12.648 [2024-07-23 01:51:25.562453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.648 [2024-07-23 01:51:25.562641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.648 [2024-07-23 01:51:25.562666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.648 qpair failed and we were unable to recover it. 
00:30:12.648 [2024-07-23 01:51:25.562849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.648 [2024-07-23 01:51:25.562999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.648 [2024-07-23 01:51:25.563027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.648 qpair failed and we were unable to recover it. 00:30:12.648 [2024-07-23 01:51:25.563241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.648 [2024-07-23 01:51:25.563500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.648 [2024-07-23 01:51:25.563527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.648 qpair failed and we were unable to recover it. 00:30:12.648 [2024-07-23 01:51:25.563716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.648 [2024-07-23 01:51:25.563910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.648 [2024-07-23 01:51:25.563969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.648 qpair failed and we were unable to recover it. 00:30:12.648 [2024-07-23 01:51:25.564149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.648 [2024-07-23 01:51:25.564331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.648 [2024-07-23 01:51:25.564362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.648 qpair failed and we were unable to recover it. 
00:30:12.648 [2024-07-23 01:51:25.564570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.648 [2024-07-23 01:51:25.564758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.648 [2024-07-23 01:51:25.564786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.648 qpair failed and we were unable to recover it. 00:30:12.648 [2024-07-23 01:51:25.564948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.648 [2024-07-23 01:51:25.565083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.648 [2024-07-23 01:51:25.565106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.648 qpair failed and we were unable to recover it. 00:30:12.648 [2024-07-23 01:51:25.565279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.648 [2024-07-23 01:51:25.565476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.648 [2024-07-23 01:51:25.565502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.648 qpair failed and we were unable to recover it. 00:30:12.648 [2024-07-23 01:51:25.565682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.648 [2024-07-23 01:51:25.565908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.648 [2024-07-23 01:51:25.565970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.648 qpair failed and we were unable to recover it. 
00:30:12.648 [2024-07-23 01:51:25.566142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.648 [2024-07-23 01:51:25.566357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.648 [2024-07-23 01:51:25.566381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.648 qpair failed and we were unable to recover it. 00:30:12.648 [2024-07-23 01:51:25.566522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.648 [2024-07-23 01:51:25.566698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.648 [2024-07-23 01:51:25.566726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.648 qpair failed and we were unable to recover it. 00:30:12.648 [2024-07-23 01:51:25.566886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.648 [2024-07-23 01:51:25.567078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.648 [2024-07-23 01:51:25.567119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.648 qpair failed and we were unable to recover it. 00:30:12.648 [2024-07-23 01:51:25.567311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.648 [2024-07-23 01:51:25.567503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.648 [2024-07-23 01:51:25.567544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.648 qpair failed and we were unable to recover it. 
00:30:12.648 [2024-07-23 01:51:25.567752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.648 [2024-07-23 01:51:25.567956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.648 [2024-07-23 01:51:25.567983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.648 qpair failed and we were unable to recover it. 00:30:12.648 [2024-07-23 01:51:25.568188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.648 [2024-07-23 01:51:25.568357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.648 [2024-07-23 01:51:25.568381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.648 qpair failed and we were unable to recover it. 00:30:12.648 [2024-07-23 01:51:25.568529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.648 [2024-07-23 01:51:25.568717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.648 [2024-07-23 01:51:25.568745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.648 qpair failed and we were unable to recover it. 00:30:12.648 [2024-07-23 01:51:25.568922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.648 [2024-07-23 01:51:25.569128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.648 [2024-07-23 01:51:25.569156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.648 qpair failed and we were unable to recover it. 
00:30:12.648 [2024-07-23 01:51:25.569311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.648 [2024-07-23 01:51:25.569493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.648 [2024-07-23 01:51:25.569522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.648 qpair failed and we were unable to recover it. 00:30:12.648 [2024-07-23 01:51:25.569703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.648 [2024-07-23 01:51:25.569872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.648 [2024-07-23 01:51:25.569896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.648 qpair failed and we were unable to recover it. 00:30:12.648 [2024-07-23 01:51:25.570083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.648 [2024-07-23 01:51:25.570251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.648 [2024-07-23 01:51:25.570278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.648 qpair failed and we were unable to recover it. 00:30:12.648 [2024-07-23 01:51:25.570465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.648 [2024-07-23 01:51:25.570646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.648 [2024-07-23 01:51:25.570674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.649 qpair failed and we were unable to recover it. 
00:30:12.649 [2024-07-23 01:51:25.570824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.649 [2024-07-23 01:51:25.571081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.649 [2024-07-23 01:51:25.571131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.649 qpair failed and we were unable to recover it. 00:30:12.649 [2024-07-23 01:51:25.571305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.649 [2024-07-23 01:51:25.571445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.649 [2024-07-23 01:51:25.571488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.649 qpair failed and we were unable to recover it. 00:30:12.649 [2024-07-23 01:51:25.571711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.649 [2024-07-23 01:51:25.571901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.649 [2024-07-23 01:51:25.571926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.649 qpair failed and we were unable to recover it. 00:30:12.649 [2024-07-23 01:51:25.572083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.649 [2024-07-23 01:51:25.572263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.649 [2024-07-23 01:51:25.572287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.649 qpair failed and we were unable to recover it. 
[... ~84 further repetitions of the same three-record error group elided, spanning 2024-07-23 01:51:25.572459 through 01:51:25.606816: posix_sock_create connect() failed with errno = 111 (ECONNREFUSED), nvme_tcp_qpair_connect_sock reported a sock connection error for tqpair=0xf83610 with addr=10.0.0.2, port=4420, and each attempt ended with "qpair failed and we were unable to recover it." ...]
00:30:12.652 [2024-07-23 01:51:25.606979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.652 [2024-07-23 01:51:25.607155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.652 [2024-07-23 01:51:25.607182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.652 qpair failed and we were unable to recover it. 00:30:12.652 [2024-07-23 01:51:25.607373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.652 [2024-07-23 01:51:25.607566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.652 [2024-07-23 01:51:25.607594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.652 qpair failed and we were unable to recover it. 00:30:12.652 [2024-07-23 01:51:25.607784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.652 [2024-07-23 01:51:25.608008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.652 [2024-07-23 01:51:25.608059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.652 qpair failed and we were unable to recover it. 00:30:12.652 [2024-07-23 01:51:25.608258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.652 [2024-07-23 01:51:25.608407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.652 [2024-07-23 01:51:25.608435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.652 qpair failed and we were unable to recover it. 
00:30:12.652 [2024-07-23 01:51:25.608641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.652 [2024-07-23 01:51:25.608801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.652 [2024-07-23 01:51:25.608825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.652 qpair failed and we were unable to recover it. 00:30:12.652 [2024-07-23 01:51:25.608968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.652 [2024-07-23 01:51:25.609133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.652 [2024-07-23 01:51:25.609157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.652 qpair failed and we were unable to recover it. 00:30:12.652 [2024-07-23 01:51:25.609299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.652 [2024-07-23 01:51:25.609428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.652 [2024-07-23 01:51:25.609452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.652 qpair failed and we were unable to recover it. 00:30:12.652 [2024-07-23 01:51:25.609648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.652 [2024-07-23 01:51:25.609829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.652 [2024-07-23 01:51:25.609856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.652 qpair failed and we were unable to recover it. 
00:30:12.652 [2024-07-23 01:51:25.610040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.652 [2024-07-23 01:51:25.610274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.652 [2024-07-23 01:51:25.610324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.652 qpair failed and we were unable to recover it. 00:30:12.652 [2024-07-23 01:51:25.610543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.652 [2024-07-23 01:51:25.610738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.652 [2024-07-23 01:51:25.610763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.652 qpair failed and we were unable to recover it. 00:30:12.652 [2024-07-23 01:51:25.610918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.652 [2024-07-23 01:51:25.611125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.652 [2024-07-23 01:51:25.611152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.652 qpair failed and we were unable to recover it. 00:30:12.652 [2024-07-23 01:51:25.611360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.652 [2024-07-23 01:51:25.611510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.652 [2024-07-23 01:51:25.611536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.652 qpair failed and we were unable to recover it. 
00:30:12.652 [2024-07-23 01:51:25.611729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.652 [2024-07-23 01:51:25.611858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.652 [2024-07-23 01:51:25.611883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.652 qpair failed and we were unable to recover it. 00:30:12.652 [2024-07-23 01:51:25.612074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.652 [2024-07-23 01:51:25.612320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.652 [2024-07-23 01:51:25.612347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.652 qpair failed and we were unable to recover it. 00:30:12.652 [2024-07-23 01:51:25.612535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.652 [2024-07-23 01:51:25.612723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.652 [2024-07-23 01:51:25.612748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.652 qpair failed and we were unable to recover it. 00:30:12.652 [2024-07-23 01:51:25.612916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.652 [2024-07-23 01:51:25.613185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.652 [2024-07-23 01:51:25.613235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.652 qpair failed and we were unable to recover it. 
00:30:12.652 [2024-07-23 01:51:25.613481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.652 [2024-07-23 01:51:25.613720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.652 [2024-07-23 01:51:25.613745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.652 qpair failed and we were unable to recover it. 00:30:12.652 [2024-07-23 01:51:25.613906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.652 [2024-07-23 01:51:25.614061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.652 [2024-07-23 01:51:25.614128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.652 qpair failed and we were unable to recover it. 00:30:12.652 [2024-07-23 01:51:25.614455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.652 [2024-07-23 01:51:25.614705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.652 [2024-07-23 01:51:25.614730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.652 qpair failed and we were unable to recover it. 00:30:12.652 [2024-07-23 01:51:25.614875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.653 [2024-07-23 01:51:25.615009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.653 [2024-07-23 01:51:25.615033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.653 qpair failed and we were unable to recover it. 
00:30:12.653 [2024-07-23 01:51:25.615198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.653 [2024-07-23 01:51:25.615371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.653 [2024-07-23 01:51:25.615399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.653 qpair failed and we were unable to recover it. 00:30:12.653 [2024-07-23 01:51:25.615611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.653 [2024-07-23 01:51:25.615799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.653 [2024-07-23 01:51:25.615824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.653 qpair failed and we were unable to recover it. 00:30:12.653 [2024-07-23 01:51:25.616103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.653 [2024-07-23 01:51:25.616379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.653 [2024-07-23 01:51:25.616428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.653 qpair failed and we were unable to recover it. 00:30:12.653 [2024-07-23 01:51:25.616584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.653 [2024-07-23 01:51:25.616779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.653 [2024-07-23 01:51:25.616803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.653 qpair failed and we were unable to recover it. 
00:30:12.653 [2024-07-23 01:51:25.616968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.653 [2024-07-23 01:51:25.617114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.653 [2024-07-23 01:51:25.617154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.653 qpair failed and we were unable to recover it. 00:30:12.653 [2024-07-23 01:51:25.617341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.653 [2024-07-23 01:51:25.617549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.653 [2024-07-23 01:51:25.617575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.653 qpair failed and we were unable to recover it. 00:30:12.653 [2024-07-23 01:51:25.617777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.653 [2024-07-23 01:51:25.618016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.653 [2024-07-23 01:51:25.618072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.653 qpair failed and we were unable to recover it. 00:30:12.653 [2024-07-23 01:51:25.618231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.653 [2024-07-23 01:51:25.618456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.653 [2024-07-23 01:51:25.618517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.653 qpair failed and we were unable to recover it. 
00:30:12.653 [2024-07-23 01:51:25.618720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.653 [2024-07-23 01:51:25.618898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.653 [2024-07-23 01:51:25.618926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.653 qpair failed and we were unable to recover it. 00:30:12.653 [2024-07-23 01:51:25.619115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.653 [2024-07-23 01:51:25.619395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.653 [2024-07-23 01:51:25.619447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.653 qpair failed and we were unable to recover it. 00:30:12.653 [2024-07-23 01:51:25.619629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.653 [2024-07-23 01:51:25.619785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.653 [2024-07-23 01:51:25.619809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.653 qpair failed and we were unable to recover it. 00:30:12.653 [2024-07-23 01:51:25.619967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.653 [2024-07-23 01:51:25.620135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.653 [2024-07-23 01:51:25.620162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.653 qpair failed and we were unable to recover it. 
00:30:12.653 [2024-07-23 01:51:25.620367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.653 [2024-07-23 01:51:25.620571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.653 [2024-07-23 01:51:25.620598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.653 qpair failed and we were unable to recover it. 00:30:12.653 [2024-07-23 01:51:25.620814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.653 [2024-07-23 01:51:25.621119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.653 [2024-07-23 01:51:25.621176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.653 qpair failed and we were unable to recover it. 00:30:12.653 [2024-07-23 01:51:25.621490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.653 [2024-07-23 01:51:25.621718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.653 [2024-07-23 01:51:25.621746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.653 qpair failed and we were unable to recover it. 00:30:12.653 [2024-07-23 01:51:25.621935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.653 [2024-07-23 01:51:25.622091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.653 [2024-07-23 01:51:25.622115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.653 qpair failed and we were unable to recover it. 
00:30:12.653 [2024-07-23 01:51:25.622310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.653 [2024-07-23 01:51:25.622519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.653 [2024-07-23 01:51:25.622544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.653 qpair failed and we were unable to recover it. 00:30:12.653 [2024-07-23 01:51:25.622736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.653 [2024-07-23 01:51:25.622873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.653 [2024-07-23 01:51:25.622919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.653 qpair failed and we were unable to recover it. 00:30:12.653 [2024-07-23 01:51:25.623192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.653 [2024-07-23 01:51:25.623523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.653 [2024-07-23 01:51:25.623578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.653 qpair failed and we were unable to recover it. 00:30:12.653 [2024-07-23 01:51:25.623760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.653 [2024-07-23 01:51:25.623899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.653 [2024-07-23 01:51:25.623938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.653 qpair failed and we were unable to recover it. 
00:30:12.653 [2024-07-23 01:51:25.624093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.653 [2024-07-23 01:51:25.624268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.653 [2024-07-23 01:51:25.624294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.653 qpair failed and we were unable to recover it. 00:30:12.653 [2024-07-23 01:51:25.624468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.653 [2024-07-23 01:51:25.624684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.653 [2024-07-23 01:51:25.624709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.653 qpair failed and we were unable to recover it. 00:30:12.653 [2024-07-23 01:51:25.624870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.653 [2024-07-23 01:51:25.625084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.653 [2024-07-23 01:51:25.625111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.653 qpair failed and we were unable to recover it. 00:30:12.653 [2024-07-23 01:51:25.625327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.654 [2024-07-23 01:51:25.625533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.654 [2024-07-23 01:51:25.625560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.654 qpair failed and we were unable to recover it. 
00:30:12.654 [2024-07-23 01:51:25.625766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.654 [2024-07-23 01:51:25.625931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.654 [2024-07-23 01:51:25.625955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.654 qpair failed and we were unable to recover it. 00:30:12.654 [2024-07-23 01:51:25.626098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.654 [2024-07-23 01:51:25.626302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.654 [2024-07-23 01:51:25.626328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.654 qpair failed and we were unable to recover it. 00:30:12.654 [2024-07-23 01:51:25.626508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.654 [2024-07-23 01:51:25.626713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.654 [2024-07-23 01:51:25.626737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.654 qpair failed and we were unable to recover it. 00:30:12.654 [2024-07-23 01:51:25.626874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.654 [2024-07-23 01:51:25.627065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.654 [2024-07-23 01:51:25.627106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.654 qpair failed and we were unable to recover it. 
00:30:12.654 [2024-07-23 01:51:25.627291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.654 [2024-07-23 01:51:25.627495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.654 [2024-07-23 01:51:25.627521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.654 qpair failed and we were unable to recover it. 00:30:12.654 [2024-07-23 01:51:25.627710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.654 [2024-07-23 01:51:25.627877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.654 [2024-07-23 01:51:25.627919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.654 qpair failed and we were unable to recover it. 00:30:12.654 [2024-07-23 01:51:25.628132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.654 [2024-07-23 01:51:25.628307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.654 [2024-07-23 01:51:25.628333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.654 qpair failed and we were unable to recover it. 00:30:12.654 [2024-07-23 01:51:25.628507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.654 [2024-07-23 01:51:25.628719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.654 [2024-07-23 01:51:25.628743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.654 qpair failed and we were unable to recover it. 
00:30:12.654 [2024-07-23 01:51:25.628882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.654 [2024-07-23 01:51:25.629051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.654 [2024-07-23 01:51:25.629078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.654 qpair failed and we were unable to recover it. 00:30:12.654 [2024-07-23 01:51:25.629266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.654 [2024-07-23 01:51:25.629410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.654 [2024-07-23 01:51:25.629434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.654 qpair failed and we were unable to recover it. 00:30:12.654 [2024-07-23 01:51:25.629609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.654 [2024-07-23 01:51:25.629850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.654 [2024-07-23 01:51:25.629875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.654 qpair failed and we were unable to recover it. 00:30:12.654 [2024-07-23 01:51:25.630034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.654 [2024-07-23 01:51:25.630244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.654 [2024-07-23 01:51:25.630298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.654 qpair failed and we were unable to recover it. 
00:30:12.654 [2024-07-23 01:51:25.630514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.654 [2024-07-23 01:51:25.630668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.654 [2024-07-23 01:51:25.630696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.654 qpair failed and we were unable to recover it. 00:30:12.654 [2024-07-23 01:51:25.630854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.654 [2024-07-23 01:51:25.630981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.654 [2024-07-23 01:51:25.631005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.654 qpair failed and we were unable to recover it. 00:30:12.654 [2024-07-23 01:51:25.631203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.654 [2024-07-23 01:51:25.631379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.654 [2024-07-23 01:51:25.631406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.654 qpair failed and we were unable to recover it. 00:30:12.654 [2024-07-23 01:51:25.631593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.654 [2024-07-23 01:51:25.631782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.654 [2024-07-23 01:51:25.631806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.654 qpair failed and we were unable to recover it. 
00:30:12.657 [2024-07-23 01:51:25.665680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.657 [2024-07-23 01:51:25.665838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.657 [2024-07-23 01:51:25.665862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.657 qpair failed and we were unable to recover it. 00:30:12.657 [2024-07-23 01:51:25.666029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.657 [2024-07-23 01:51:25.666200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.657 [2024-07-23 01:51:25.666224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.657 qpair failed and we were unable to recover it. 00:30:12.657 [2024-07-23 01:51:25.666411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.657 [2024-07-23 01:51:25.666566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.657 [2024-07-23 01:51:25.666593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.657 qpair failed and we were unable to recover it. 00:30:12.657 [2024-07-23 01:51:25.666762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.657 [2024-07-23 01:51:25.666918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.657 [2024-07-23 01:51:25.666945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.657 qpair failed and we were unable to recover it. 
00:30:12.657 [2024-07-23 01:51:25.667122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.657 [2024-07-23 01:51:25.667401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.657 [2024-07-23 01:51:25.667453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.657 qpair failed and we were unable to recover it. 00:30:12.657 [2024-07-23 01:51:25.667658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.657 [2024-07-23 01:51:25.667811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.657 [2024-07-23 01:51:25.667835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.657 qpair failed and we were unable to recover it. 00:30:12.657 [2024-07-23 01:51:25.668039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.657 [2024-07-23 01:51:25.668244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.657 [2024-07-23 01:51:25.668270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.657 qpair failed and we were unable to recover it. 00:30:12.657 [2024-07-23 01:51:25.668454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.657 [2024-07-23 01:51:25.668631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.657 [2024-07-23 01:51:25.668671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.657 qpair failed and we were unable to recover it. 
00:30:12.657 [2024-07-23 01:51:25.668815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.657 [2024-07-23 01:51:25.668959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.657 [2024-07-23 01:51:25.668984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.657 qpair failed and we were unable to recover it. 00:30:12.657 [2024-07-23 01:51:25.669175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.657 [2024-07-23 01:51:25.669444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.657 [2024-07-23 01:51:25.669493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.657 qpair failed and we were unable to recover it. 00:30:12.657 [2024-07-23 01:51:25.669708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.658 [2024-07-23 01:51:25.669926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.658 [2024-07-23 01:51:25.669979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.658 qpair failed and we were unable to recover it. 00:30:12.658 [2024-07-23 01:51:25.670195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.658 [2024-07-23 01:51:25.670373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.658 [2024-07-23 01:51:25.670399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.658 qpair failed and we were unable to recover it. 
00:30:12.658 [2024-07-23 01:51:25.670609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.658 [2024-07-23 01:51:25.670805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.658 [2024-07-23 01:51:25.670829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.658 qpair failed and we were unable to recover it. 00:30:12.658 [2024-07-23 01:51:25.671024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.658 [2024-07-23 01:51:25.671211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.658 [2024-07-23 01:51:25.671238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.658 qpair failed and we were unable to recover it. 00:30:12.658 [2024-07-23 01:51:25.671424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.658 [2024-07-23 01:51:25.671555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.658 [2024-07-23 01:51:25.671580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.658 qpair failed and we were unable to recover it. 00:30:12.658 [2024-07-23 01:51:25.671783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.658 [2024-07-23 01:51:25.671976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.658 [2024-07-23 01:51:25.672000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.658 qpair failed and we were unable to recover it. 
00:30:12.658 [2024-07-23 01:51:25.672142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.658 [2024-07-23 01:51:25.672335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.658 [2024-07-23 01:51:25.672362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.658 qpair failed and we were unable to recover it. 00:30:12.658 [2024-07-23 01:51:25.672537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.658 [2024-07-23 01:51:25.672758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.658 [2024-07-23 01:51:25.672783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.658 qpair failed and we were unable to recover it. 00:30:12.658 [2024-07-23 01:51:25.672967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.658 [2024-07-23 01:51:25.673276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.658 [2024-07-23 01:51:25.673333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.658 qpair failed and we were unable to recover it. 00:30:12.658 [2024-07-23 01:51:25.673524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.658 [2024-07-23 01:51:25.673717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.658 [2024-07-23 01:51:25.673742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.658 qpair failed and we were unable to recover it. 
00:30:12.658 [2024-07-23 01:51:25.673902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.658 [2024-07-23 01:51:25.674089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.658 [2024-07-23 01:51:25.674116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.658 qpair failed and we were unable to recover it. 00:30:12.658 [2024-07-23 01:51:25.674513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.658 [2024-07-23 01:51:25.674760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.658 [2024-07-23 01:51:25.674785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.658 qpair failed and we were unable to recover it. 00:30:12.658 [2024-07-23 01:51:25.674938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.658 [2024-07-23 01:51:25.675168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.658 [2024-07-23 01:51:25.675195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.658 qpair failed and we were unable to recover it. 00:30:12.658 [2024-07-23 01:51:25.675371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.658 [2024-07-23 01:51:25.675574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.658 [2024-07-23 01:51:25.675601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.658 qpair failed and we were unable to recover it. 
00:30:12.658 [2024-07-23 01:51:25.675777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.658 [2024-07-23 01:51:25.676024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.658 [2024-07-23 01:51:25.676074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.658 qpair failed and we were unable to recover it. 00:30:12.658 [2024-07-23 01:51:25.676272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.658 [2024-07-23 01:51:25.676452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.658 [2024-07-23 01:51:25.676481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.658 qpair failed and we were unable to recover it. 00:30:12.658 [2024-07-23 01:51:25.676702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.658 [2024-07-23 01:51:25.676998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.658 [2024-07-23 01:51:25.677047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.658 qpair failed and we were unable to recover it. 00:30:12.658 [2024-07-23 01:51:25.677226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.658 [2024-07-23 01:51:25.677409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.658 [2024-07-23 01:51:25.677436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.658 qpair failed and we were unable to recover it. 
00:30:12.658 [2024-07-23 01:51:25.677652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.658 [2024-07-23 01:51:25.677815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.658 [2024-07-23 01:51:25.677838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.658 qpair failed and we were unable to recover it. 00:30:12.658 [2024-07-23 01:51:25.677976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.658 [2024-07-23 01:51:25.678186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.658 [2024-07-23 01:51:25.678213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.658 qpair failed and we were unable to recover it. 00:30:12.658 [2024-07-23 01:51:25.678400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.658 [2024-07-23 01:51:25.678605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.658 [2024-07-23 01:51:25.678638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.658 qpair failed and we were unable to recover it. 00:30:12.658 [2024-07-23 01:51:25.678854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.658 [2024-07-23 01:51:25.679071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.658 [2024-07-23 01:51:25.679095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.658 qpair failed and we were unable to recover it. 
00:30:12.658 [2024-07-23 01:51:25.679293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.658 [2024-07-23 01:51:25.679438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.658 [2024-07-23 01:51:25.679466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.658 qpair failed and we were unable to recover it. 00:30:12.658 [2024-07-23 01:51:25.679648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.658 [2024-07-23 01:51:25.679840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.658 [2024-07-23 01:51:25.679865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.658 qpair failed and we were unable to recover it. 00:30:12.659 [2024-07-23 01:51:25.680129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.659 [2024-07-23 01:51:25.680425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.659 [2024-07-23 01:51:25.680474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.659 qpair failed and we were unable to recover it. 00:30:12.659 [2024-07-23 01:51:25.680634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.659 [2024-07-23 01:51:25.680814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.659 [2024-07-23 01:51:25.680840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.659 qpair failed and we were unable to recover it. 
00:30:12.659 [2024-07-23 01:51:25.680985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.659 [2024-07-23 01:51:25.681189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.659 [2024-07-23 01:51:25.681215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.659 qpair failed and we were unable to recover it. 00:30:12.659 [2024-07-23 01:51:25.681396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.659 [2024-07-23 01:51:25.681530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.659 [2024-07-23 01:51:25.681576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.659 qpair failed and we were unable to recover it. 00:30:12.659 [2024-07-23 01:51:25.681765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.659 [2024-07-23 01:51:25.681979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.659 [2024-07-23 01:51:25.682035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.659 qpair failed and we were unable to recover it. 00:30:12.659 [2024-07-23 01:51:25.682253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.659 [2024-07-23 01:51:25.682420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.659 [2024-07-23 01:51:25.682461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.659 qpair failed and we were unable to recover it. 
00:30:12.659 [2024-07-23 01:51:25.682639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.659 [2024-07-23 01:51:25.682795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.659 [2024-07-23 01:51:25.682821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.659 qpair failed and we were unable to recover it. 00:30:12.659 [2024-07-23 01:51:25.682962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.659 [2024-07-23 01:51:25.683172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.659 [2024-07-23 01:51:25.683199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.659 qpair failed and we were unable to recover it. 00:30:12.659 [2024-07-23 01:51:25.683405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.659 [2024-07-23 01:51:25.683574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.659 [2024-07-23 01:51:25.683602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.659 qpair failed and we were unable to recover it. 00:30:12.659 [2024-07-23 01:51:25.683759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.659 [2024-07-23 01:51:25.683914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.659 [2024-07-23 01:51:25.683941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.659 qpair failed and we were unable to recover it. 
00:30:12.659 [2024-07-23 01:51:25.684146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.659 [2024-07-23 01:51:25.684380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.659 [2024-07-23 01:51:25.684430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.659 qpair failed and we were unable to recover it. 00:30:12.659 [2024-07-23 01:51:25.684609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.659 [2024-07-23 01:51:25.684769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.659 [2024-07-23 01:51:25.684813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.659 qpair failed and we were unable to recover it. 00:30:12.659 [2024-07-23 01:51:25.684996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.659 [2024-07-23 01:51:25.685309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.659 [2024-07-23 01:51:25.685364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.659 qpair failed and we were unable to recover it. 00:30:12.659 [2024-07-23 01:51:25.685553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.659 [2024-07-23 01:51:25.685739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.659 [2024-07-23 01:51:25.685765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.659 qpair failed and we were unable to recover it. 
00:30:12.659 [2024-07-23 01:51:25.685936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.659 [2024-07-23 01:51:25.686073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.659 [2024-07-23 01:51:25.686096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.659 qpair failed and we were unable to recover it. 00:30:12.659 [2024-07-23 01:51:25.686263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.659 [2024-07-23 01:51:25.686447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.659 [2024-07-23 01:51:25.686474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.659 qpair failed and we were unable to recover it. 00:30:12.659 [2024-07-23 01:51:25.686653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.659 [2024-07-23 01:51:25.686818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.659 [2024-07-23 01:51:25.686842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.659 qpair failed and we were unable to recover it. 00:30:12.659 [2024-07-23 01:51:25.687011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.659 [2024-07-23 01:51:25.687189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.659 [2024-07-23 01:51:25.687215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.659 qpair failed and we were unable to recover it. 
00:30:12.659 [2024-07-23 01:51:25.687356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.659 [2024-07-23 01:51:25.687544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.659 [2024-07-23 01:51:25.687569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.659 qpair failed and we were unable to recover it. 00:30:12.659 [2024-07-23 01:51:25.687704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.659 [2024-07-23 01:51:25.687842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.659 [2024-07-23 01:51:25.687867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.659 qpair failed and we were unable to recover it. 00:30:12.659 [2024-07-23 01:51:25.688167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.659 [2024-07-23 01:51:25.688484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.659 [2024-07-23 01:51:25.688531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.659 qpair failed and we were unable to recover it. 00:30:12.659 [2024-07-23 01:51:25.688743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.659 [2024-07-23 01:51:25.688927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.659 [2024-07-23 01:51:25.688954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.659 qpair failed and we were unable to recover it. 
00:30:12.659 [2024-07-23 01:51:25.689112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.659 [2024-07-23 01:51:25.689269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.659 [2024-07-23 01:51:25.689296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.659 qpair failed and we were unable to recover it. 00:30:12.659 [2024-07-23 01:51:25.689501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.659 [2024-07-23 01:51:25.689646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.659 [2024-07-23 01:51:25.689686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.659 qpair failed and we were unable to recover it. 00:30:12.659 [2024-07-23 01:51:25.689882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.659 [2024-07-23 01:51:25.690018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.659 [2024-07-23 01:51:25.690043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.659 qpair failed and we were unable to recover it. 00:30:12.659 [2024-07-23 01:51:25.690237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.659 [2024-07-23 01:51:25.690393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.659 [2024-07-23 01:51:25.690422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.659 qpair failed and we were unable to recover it. 
00:30:12.659 [2024-07-23 01:51:25.690608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.659 [2024-07-23 01:51:25.690779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.659 [2024-07-23 01:51:25.690803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.659 qpair failed and we were unable to recover it. 00:30:12.659 [2024-07-23 01:51:25.690967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.659 [2024-07-23 01:51:25.691190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.659 [2024-07-23 01:51:25.691238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.660 qpair failed and we were unable to recover it. 00:30:12.660 [2024-07-23 01:51:25.691484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.660 [2024-07-23 01:51:25.691699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.660 [2024-07-23 01:51:25.691724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.660 qpair failed and we were unable to recover it. 00:30:12.660 [2024-07-23 01:51:25.691879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.660 [2024-07-23 01:51:25.692044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.660 [2024-07-23 01:51:25.692070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.660 qpair failed and we were unable to recover it. 
00:30:12.660 [2024-07-23 01:51:25.692255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.660 [2024-07-23 01:51:25.692461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.660 [2024-07-23 01:51:25.692488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.660 qpair failed and we were unable to recover it. 00:30:12.660 [2024-07-23 01:51:25.692644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.660 [2024-07-23 01:51:25.692786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.660 [2024-07-23 01:51:25.692810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.660 qpair failed and we were unable to recover it. 00:30:12.660 [2024-07-23 01:51:25.692978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.660 [2024-07-23 01:51:25.693165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.660 [2024-07-23 01:51:25.693192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.660 qpair failed and we were unable to recover it. 00:30:12.660 [2024-07-23 01:51:25.693346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.660 [2024-07-23 01:51:25.693494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.660 [2024-07-23 01:51:25.693521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.660 qpair failed and we were unable to recover it. 
00:30:12.660 [2024-07-23 01:51:25.693740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.660 [2024-07-23 01:51:25.693924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.660 [2024-07-23 01:51:25.693951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.660 qpair failed and we were unable to recover it. 00:30:12.660 [2024-07-23 01:51:25.694102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.660 [2024-07-23 01:51:25.694231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.660 [2024-07-23 01:51:25.694255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.660 qpair failed and we were unable to recover it. 00:30:12.660 [2024-07-23 01:51:25.694430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.660 [2024-07-23 01:51:25.694619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.660 [2024-07-23 01:51:25.694647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.660 qpair failed and we were unable to recover it. 00:30:12.660 [2024-07-23 01:51:25.694829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.660 [2024-07-23 01:51:25.695040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.660 [2024-07-23 01:51:25.695064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.660 qpair failed and we were unable to recover it. 
00:30:12.660 [2024-07-23 01:51:25.695270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.660 [2024-07-23 01:51:25.695419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.660 [2024-07-23 01:51:25.695448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.660 qpair failed and we were unable to recover it. 00:30:12.660 [2024-07-23 01:51:25.695623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.660 [2024-07-23 01:51:25.695802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.660 [2024-07-23 01:51:25.695826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.660 qpair failed and we were unable to recover it. 00:30:12.660 [2024-07-23 01:51:25.696013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.660 [2024-07-23 01:51:25.696246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.660 [2024-07-23 01:51:25.696297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.660 qpair failed and we were unable to recover it. 00:30:12.660 [2024-07-23 01:51:25.696468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.660 [2024-07-23 01:51:25.696679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.660 [2024-07-23 01:51:25.696704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.660 qpair failed and we were unable to recover it. 
00:30:12.660 [2024-07-23 01:51:25.696867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.660 [2024-07-23 01:51:25.697071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.660 [2024-07-23 01:51:25.697094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.660 qpair failed and we were unable to recover it. 00:30:12.660 [2024-07-23 01:51:25.697250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.660 [2024-07-23 01:51:25.697394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.660 [2024-07-23 01:51:25.697420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.660 qpair failed and we were unable to recover it. 00:30:12.660 [2024-07-23 01:51:25.697635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.660 [2024-07-23 01:51:25.697784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.660 [2024-07-23 01:51:25.697811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.660 qpair failed and we were unable to recover it. 00:30:12.660 [2024-07-23 01:51:25.698000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.660 [2024-07-23 01:51:25.698214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.660 [2024-07-23 01:51:25.698238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.660 qpair failed and we were unable to recover it. 
00:30:12.660 [2024-07-23 01:51:25.698412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.660 [2024-07-23 01:51:25.698549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.660 [2024-07-23 01:51:25.698590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.660 qpair failed and we were unable to recover it. 00:30:12.660 [2024-07-23 01:51:25.698780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.660 [2024-07-23 01:51:25.698938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.660 [2024-07-23 01:51:25.699000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.660 qpair failed and we were unable to recover it. 00:30:12.660 [2024-07-23 01:51:25.699300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.660 [2024-07-23 01:51:25.699483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.660 [2024-07-23 01:51:25.699510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.660 qpair failed and we were unable to recover it. 00:30:12.660 [2024-07-23 01:51:25.699699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.660 [2024-07-23 01:51:25.699866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.660 [2024-07-23 01:51:25.699891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.660 qpair failed and we were unable to recover it. 
00:30:12.660 [2024-07-23 01:51:25.700061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.660 [2024-07-23 01:51:25.700228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.660 [2024-07-23 01:51:25.700253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.660 qpair failed and we were unable to recover it. 00:30:12.660 [2024-07-23 01:51:25.700452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.660 [2024-07-23 01:51:25.700639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.660 [2024-07-23 01:51:25.700681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.660 qpair failed and we were unable to recover it. 00:30:12.660 [2024-07-23 01:51:25.700843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.660 [2024-07-23 01:51:25.701031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.660 [2024-07-23 01:51:25.701058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.660 qpair failed and we were unable to recover it. 00:30:12.660 [2024-07-23 01:51:25.701240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.660 [2024-07-23 01:51:25.701402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.660 [2024-07-23 01:51:25.701429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.660 qpair failed and we were unable to recover it. 
00:30:12.660 [2024-07-23 01:51:25.701623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.660 [2024-07-23 01:51:25.701782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.660 [2024-07-23 01:51:25.701806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.660 qpair failed and we were unable to recover it. 00:30:12.660 [2024-07-23 01:51:25.701950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.660 [2024-07-23 01:51:25.702137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.661 [2024-07-23 01:51:25.702199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.661 qpair failed and we were unable to recover it. 00:30:12.661 [2024-07-23 01:51:25.702444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.661 [2024-07-23 01:51:25.702712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.661 [2024-07-23 01:51:25.702741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.661 qpair failed and we were unable to recover it. 00:30:12.661 [2024-07-23 01:51:25.702890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.661 [2024-07-23 01:51:25.703066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.661 [2024-07-23 01:51:25.703093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.661 qpair failed and we were unable to recover it. 
00:30:12.661 [2024-07-23 01:51:25.703272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.661 [2024-07-23 01:51:25.703472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.661 [2024-07-23 01:51:25.703497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.661 qpair failed and we were unable to recover it. 00:30:12.661 [2024-07-23 01:51:25.703686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.661 [2024-07-23 01:51:25.703849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.661 [2024-07-23 01:51:25.703874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.661 qpair failed and we were unable to recover it. 00:30:12.661 [2024-07-23 01:51:25.704136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.661 [2024-07-23 01:51:25.704300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.661 [2024-07-23 01:51:25.704324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.661 qpair failed and we were unable to recover it. 00:30:12.661 [2024-07-23 01:51:25.704545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.661 [2024-07-23 01:51:25.704736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.661 [2024-07-23 01:51:25.704761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.661 qpair failed and we were unable to recover it. 
00:30:12.661 [2024-07-23 01:51:25.704928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.661 [2024-07-23 01:51:25.705142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.661 [2024-07-23 01:51:25.705169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.661 qpair failed and we were unable to recover it. 00:30:12.661 [2024-07-23 01:51:25.705348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.661 [2024-07-23 01:51:25.705529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.661 [2024-07-23 01:51:25.705556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.661 qpair failed and we were unable to recover it. 00:30:12.661 [2024-07-23 01:51:25.705748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.661 [2024-07-23 01:51:25.705913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.661 [2024-07-23 01:51:25.705938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.661 qpair failed and we were unable to recover it. 00:30:12.661 [2024-07-23 01:51:25.706108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.661 [2024-07-23 01:51:25.706272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.661 [2024-07-23 01:51:25.706296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.661 qpair failed and we were unable to recover it. 
00:30:12.661 [2024-07-23 01:51:25.706460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.661 [2024-07-23 01:51:25.706637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.661 [2024-07-23 01:51:25.706664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.661 qpair failed and we were unable to recover it. 00:30:12.661 [2024-07-23 01:51:25.706890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.661 [2024-07-23 01:51:25.707182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.661 [2024-07-23 01:51:25.707240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.661 qpair failed and we were unable to recover it. 00:30:12.661 [2024-07-23 01:51:25.707551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.661 [2024-07-23 01:51:25.707775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.661 [2024-07-23 01:51:25.707799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.661 qpair failed and we were unable to recover it. 00:30:12.661 [2024-07-23 01:51:25.707965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.661 [2024-07-23 01:51:25.708124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.661 [2024-07-23 01:51:25.708148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.661 qpair failed and we were unable to recover it. 
00:30:12.661 [2024-07-23 01:51:25.708287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.661 [2024-07-23 01:51:25.708425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.661 [2024-07-23 01:51:25.708451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.661 qpair failed and we were unable to recover it. 00:30:12.661 [2024-07-23 01:51:25.708644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.661 [2024-07-23 01:51:25.708830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.661 [2024-07-23 01:51:25.708854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.661 qpair failed and we were unable to recover it. 00:30:12.661 [2024-07-23 01:51:25.709081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.661 [2024-07-23 01:51:25.709289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.661 [2024-07-23 01:51:25.709345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.661 qpair failed and we were unable to recover it. 00:30:12.661 [2024-07-23 01:51:25.709554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.661 [2024-07-23 01:51:25.709724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.661 [2024-07-23 01:51:25.709749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.661 qpair failed and we were unable to recover it. 
00:30:12.661 [2024-07-23 01:51:25.709925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.661 [2024-07-23 01:51:25.710148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.661 [2024-07-23 01:51:25.710203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.661 qpair failed and we were unable to recover it. 00:30:12.661 [2024-07-23 01:51:25.710397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.661 [2024-07-23 01:51:25.710583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.661 [2024-07-23 01:51:25.710610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.661 qpair failed and we were unable to recover it. 00:30:12.661 [2024-07-23 01:51:25.710835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.661 [2024-07-23 01:51:25.711006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.661 [2024-07-23 01:51:25.711030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.661 qpair failed and we were unable to recover it. 00:30:12.661 [2024-07-23 01:51:25.711238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.661 [2024-07-23 01:51:25.711454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.661 [2024-07-23 01:51:25.711504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.661 qpair failed and we were unable to recover it. 
00:30:12.661 [2024-07-23 01:51:25.711722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.661 [2024-07-23 01:51:25.711859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.661 [2024-07-23 01:51:25.711883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.661 qpair failed and we were unable to recover it. 00:30:12.661 [2024-07-23 01:51:25.712070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.661 [2024-07-23 01:51:25.712279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.661 [2024-07-23 01:51:25.712344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.661 qpair failed and we were unable to recover it. 00:30:12.661 [2024-07-23 01:51:25.712524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.661 [2024-07-23 01:51:25.712719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.661 [2024-07-23 01:51:25.712744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.661 qpair failed and we were unable to recover it. 00:30:12.661 [2024-07-23 01:51:25.712910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.661 [2024-07-23 01:51:25.713106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.661 [2024-07-23 01:51:25.713132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.661 qpair failed and we were unable to recover it. 
00:30:12.661 [2024-07-23 01:51:25.713351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.661 [2024-07-23 01:51:25.713543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.661 [2024-07-23 01:51:25.713570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.661 qpair failed and we were unable to recover it. 00:30:12.661 [2024-07-23 01:51:25.713745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.661 [2024-07-23 01:51:25.713934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.662 [2024-07-23 01:51:25.713975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.662 qpair failed and we were unable to recover it. 00:30:12.662 [2024-07-23 01:51:25.714229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.662 [2024-07-23 01:51:25.714401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.662 [2024-07-23 01:51:25.714463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.662 qpair failed and we were unable to recover it. 00:30:12.662 [2024-07-23 01:51:25.714682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.662 [2024-07-23 01:51:25.714880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.662 [2024-07-23 01:51:25.714904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.662 qpair failed and we were unable to recover it. 
00:30:12.662 [2024-07-23 01:51:25.715059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.662 [2024-07-23 01:51:25.715215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.662 [2024-07-23 01:51:25.715241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.662 qpair failed and we were unable to recover it. 00:30:12.662 [2024-07-23 01:51:25.715400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.662 [2024-07-23 01:51:25.715559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.662 [2024-07-23 01:51:25.715600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.662 qpair failed and we were unable to recover it. 00:30:12.662 [2024-07-23 01:51:25.715798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.662 [2024-07-23 01:51:25.715958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.662 [2024-07-23 01:51:25.715985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.662 qpair failed and we were unable to recover it. 00:30:12.662 [2024-07-23 01:51:25.716171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.662 [2024-07-23 01:51:25.716478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.662 [2024-07-23 01:51:25.716532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.662 qpair failed and we were unable to recover it. 
00:30:12.662 [2024-07-23 01:51:25.716724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.662 [2024-07-23 01:51:25.716871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.662 [2024-07-23 01:51:25.716910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.662 qpair failed and we were unable to recover it. 00:30:12.662 [2024-07-23 01:51:25.717101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.662 [2024-07-23 01:51:25.717323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.662 [2024-07-23 01:51:25.717382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.662 qpair failed and we were unable to recover it. 00:30:12.662 [2024-07-23 01:51:25.717571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.662 [2024-07-23 01:51:25.717746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.662 [2024-07-23 01:51:25.717771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.662 qpair failed and we were unable to recover it. 00:30:12.662 [2024-07-23 01:51:25.717952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.662 [2024-07-23 01:51:25.718129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.662 [2024-07-23 01:51:25.718157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.662 qpair failed and we were unable to recover it. 
00:30:12.662 [2024-07-23 01:51:25.718371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.662 [2024-07-23 01:51:25.718553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.662 [2024-07-23 01:51:25.718579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.662 qpair failed and we were unable to recover it. 00:30:12.662 [2024-07-23 01:51:25.718771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.662 [2024-07-23 01:51:25.718934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.662 [2024-07-23 01:51:25.718963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.662 qpair failed and we were unable to recover it. 00:30:12.662 [2024-07-23 01:51:25.719299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.938 [2024-07-23 01:51:25.719507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.938 [2024-07-23 01:51:25.719533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.938 qpair failed and we were unable to recover it. 00:30:12.938 [2024-07-23 01:51:25.719709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.938 [2024-07-23 01:51:25.719883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.938 [2024-07-23 01:51:25.719910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.938 qpair failed and we were unable to recover it. 
00:30:12.938 [2024-07-23 01:51:25.720098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.938 [2024-07-23 01:51:25.720250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.938 [2024-07-23 01:51:25.720277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.938 qpair failed and we were unable to recover it. 00:30:12.938 [2024-07-23 01:51:25.720440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.938 [2024-07-23 01:51:25.720578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.938 [2024-07-23 01:51:25.720602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.938 qpair failed and we were unable to recover it. 00:30:12.938 [2024-07-23 01:51:25.720772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.938 [2024-07-23 01:51:25.720955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.938 [2024-07-23 01:51:25.720983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.938 qpair failed and we were unable to recover it. 00:30:12.938 [2024-07-23 01:51:25.721131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.938 [2024-07-23 01:51:25.721306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.938 [2024-07-23 01:51:25.721334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.938 qpair failed and we were unable to recover it. 
00:30:12.938 [2024-07-23 01:51:25.721484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.938 [2024-07-23 01:51:25.721695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.938 [2024-07-23 01:51:25.721721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.938 qpair failed and we were unable to recover it.
00:30:12.938 [2024-07-23 01:51:25.721888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.938 [2024-07-23 01:51:25.722071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.938 [2024-07-23 01:51:25.722098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.938 qpair failed and we were unable to recover it.
00:30:12.938 [2024-07-23 01:51:25.722244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.938 [2024-07-23 01:51:25.722426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.938 [2024-07-23 01:51:25.722453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.938 qpair failed and we were unable to recover it.
00:30:12.938 [2024-07-23 01:51:25.722657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.938 [2024-07-23 01:51:25.722838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.938 [2024-07-23 01:51:25.722865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.938 qpair failed and we were unable to recover it.
00:30:12.938 [2024-07-23 01:51:25.723050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.938 [2024-07-23 01:51:25.723266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.938 [2024-07-23 01:51:25.723293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.938 qpair failed and we were unable to recover it.
00:30:12.938 [2024-07-23 01:51:25.723507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.938 [2024-07-23 01:51:25.723665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.938 [2024-07-23 01:51:25.723693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.938 qpair failed and we were unable to recover it.
00:30:12.938 [2024-07-23 01:51:25.723902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.938 [2024-07-23 01:51:25.724066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.938 [2024-07-23 01:51:25.724090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.938 qpair failed and we were unable to recover it.
00:30:12.938 [2024-07-23 01:51:25.724256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.938 [2024-07-23 01:51:25.724436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.938 [2024-07-23 01:51:25.724463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.938 qpair failed and we were unable to recover it.
00:30:12.938 [2024-07-23 01:51:25.724677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.938 [2024-07-23 01:51:25.724841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.938 [2024-07-23 01:51:25.724883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.938 qpair failed and we were unable to recover it.
00:30:12.938 [2024-07-23 01:51:25.725068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.938 [2024-07-23 01:51:25.725293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.938 [2024-07-23 01:51:25.725340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.938 qpair failed and we were unable to recover it.
00:30:12.938 [2024-07-23 01:51:25.725553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.938 [2024-07-23 01:51:25.725721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.939 [2024-07-23 01:51:25.725762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.939 qpair failed and we were unable to recover it.
00:30:12.939 [2024-07-23 01:51:25.725921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.939 [2024-07-23 01:51:25.726077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.939 [2024-07-23 01:51:25.726104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.939 qpair failed and we were unable to recover it.
00:30:12.939 [2024-07-23 01:51:25.726281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.939 [2024-07-23 01:51:25.726438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.939 [2024-07-23 01:51:25.726467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.939 qpair failed and we were unable to recover it.
00:30:12.939 [2024-07-23 01:51:25.726745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.939 [2024-07-23 01:51:25.726914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.939 [2024-07-23 01:51:25.726979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.939 qpair failed and we were unable to recover it.
00:30:12.939 [2024-07-23 01:51:25.727280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.939 [2024-07-23 01:51:25.727488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.939 [2024-07-23 01:51:25.727514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.939 qpair failed and we were unable to recover it.
00:30:12.939 [2024-07-23 01:51:25.727696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.939 [2024-07-23 01:51:25.727905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.939 [2024-07-23 01:51:25.727955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.939 qpair failed and we were unable to recover it.
00:30:12.939 [2024-07-23 01:51:25.728173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.939 [2024-07-23 01:51:25.728353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.939 [2024-07-23 01:51:25.728380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.939 qpair failed and we were unable to recover it.
00:30:12.939 [2024-07-23 01:51:25.728567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.939 [2024-07-23 01:51:25.728740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.939 [2024-07-23 01:51:25.728781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.939 qpair failed and we were unable to recover it.
00:30:12.939 [2024-07-23 01:51:25.728964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.939 [2024-07-23 01:51:25.729241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.939 [2024-07-23 01:51:25.729292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.939 qpair failed and we were unable to recover it.
00:30:12.939 [2024-07-23 01:51:25.729482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.939 [2024-07-23 01:51:25.729644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.939 [2024-07-23 01:51:25.729685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.939 qpair failed and we were unable to recover it.
00:30:12.939 [2024-07-23 01:51:25.729839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.939 [2024-07-23 01:51:25.730056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.939 [2024-07-23 01:51:25.730105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.939 qpair failed and we were unable to recover it.
00:30:12.939 [2024-07-23 01:51:25.730272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.939 [2024-07-23 01:51:25.730441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.939 [2024-07-23 01:51:25.730466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.939 qpair failed and we were unable to recover it.
00:30:12.939 [2024-07-23 01:51:25.730656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.939 [2024-07-23 01:51:25.730873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.939 [2024-07-23 01:51:25.730898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.939 qpair failed and we were unable to recover it.
00:30:12.939 [2024-07-23 01:51:25.731055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.939 [2024-07-23 01:51:25.731217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.939 [2024-07-23 01:51:25.731241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.939 qpair failed and we were unable to recover it.
00:30:12.939 [2024-07-23 01:51:25.731461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.939 [2024-07-23 01:51:25.731645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.939 [2024-07-23 01:51:25.731673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.939 qpair failed and we were unable to recover it.
00:30:12.939 [2024-07-23 01:51:25.731885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.939 [2024-07-23 01:51:25.732121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.939 [2024-07-23 01:51:25.732172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.939 qpair failed and we were unable to recover it.
00:30:12.939 [2024-07-23 01:51:25.732327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.939 [2024-07-23 01:51:25.732510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.939 [2024-07-23 01:51:25.732535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.939 qpair failed and we were unable to recover it.
00:30:12.939 [2024-07-23 01:51:25.732741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.939 [2024-07-23 01:51:25.732919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.939 [2024-07-23 01:51:25.732946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.939 qpair failed and we were unable to recover it.
00:30:12.939 [2024-07-23 01:51:25.733117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.939 [2024-07-23 01:51:25.733308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.939 [2024-07-23 01:51:25.733332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.939 qpair failed and we were unable to recover it.
00:30:12.939 [2024-07-23 01:51:25.733499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.939 [2024-07-23 01:51:25.733715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.939 [2024-07-23 01:51:25.733743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.939 qpair failed and we were unable to recover it.
00:30:12.939 [2024-07-23 01:51:25.733916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.939 [2024-07-23 01:51:25.734094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.939 [2024-07-23 01:51:25.734121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.939 qpair failed and we were unable to recover it.
00:30:12.939 [2024-07-23 01:51:25.734301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.939 [2024-07-23 01:51:25.734452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.939 [2024-07-23 01:51:25.734481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.939 qpair failed and we were unable to recover it.
00:30:12.939 [2024-07-23 01:51:25.734669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.939 [2024-07-23 01:51:25.734839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.939 [2024-07-23 01:51:25.734879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.939 qpair failed and we were unable to recover it.
00:30:12.939 [2024-07-23 01:51:25.735036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.939 [2024-07-23 01:51:25.735270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.939 [2024-07-23 01:51:25.735320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.939 qpair failed and we were unable to recover it.
00:30:12.939 [2024-07-23 01:51:25.735500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.939 [2024-07-23 01:51:25.735709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.939 [2024-07-23 01:51:25.735738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.939 qpair failed and we were unable to recover it.
00:30:12.939 [2024-07-23 01:51:25.735895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.939 [2024-07-23 01:51:25.736202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.939 [2024-07-23 01:51:25.736255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.939 qpair failed and we were unable to recover it.
00:30:12.939 [2024-07-23 01:51:25.736470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.939 [2024-07-23 01:51:25.736611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.939 [2024-07-23 01:51:25.736642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.939 qpair failed and we were unable to recover it.
00:30:12.939 [2024-07-23 01:51:25.736834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.939 [2024-07-23 01:51:25.736986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.939 [2024-07-23 01:51:25.737013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.940 qpair failed and we were unable to recover it.
00:30:12.940 [2024-07-23 01:51:25.737246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.940 [2024-07-23 01:51:25.737457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.940 [2024-07-23 01:51:25.737482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.940 qpair failed and we were unable to recover it.
00:30:12.940 [2024-07-23 01:51:25.737642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.940 [2024-07-23 01:51:25.737805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.940 [2024-07-23 01:51:25.737845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.940 qpair failed and we were unable to recover it.
00:30:12.940 [2024-07-23 01:51:25.738027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.940 [2024-07-23 01:51:25.738314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.940 [2024-07-23 01:51:25.738364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.940 qpair failed and we were unable to recover it.
00:30:12.940 [2024-07-23 01:51:25.738557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.940 [2024-07-23 01:51:25.738720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.940 [2024-07-23 01:51:25.738744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.940 qpair failed and we were unable to recover it.
00:30:12.940 [2024-07-23 01:51:25.738951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.940 [2024-07-23 01:51:25.739163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.940 [2024-07-23 01:51:25.739189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.940 qpair failed and we were unable to recover it.
00:30:12.940 [2024-07-23 01:51:25.739369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.940 [2024-07-23 01:51:25.739547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.940 [2024-07-23 01:51:25.739574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.940 qpair failed and we were unable to recover it.
00:30:12.940 [2024-07-23 01:51:25.739772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.940 [2024-07-23 01:51:25.739957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.940 [2024-07-23 01:51:25.740026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.940 qpair failed and we were unable to recover it.
00:30:12.940 [2024-07-23 01:51:25.740220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.940 [2024-07-23 01:51:25.740424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.940 [2024-07-23 01:51:25.740474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.940 qpair failed and we were unable to recover it.
00:30:12.940 [2024-07-23 01:51:25.740636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.940 [2024-07-23 01:51:25.740844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.940 [2024-07-23 01:51:25.740869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.940 qpair failed and we were unable to recover it.
00:30:12.940 [2024-07-23 01:51:25.741035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.940 [2024-07-23 01:51:25.741237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.940 [2024-07-23 01:51:25.741298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.940 qpair failed and we were unable to recover it.
00:30:12.940 [2024-07-23 01:51:25.741479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.940 [2024-07-23 01:51:25.741684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.940 [2024-07-23 01:51:25.741712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.940 qpair failed and we were unable to recover it.
00:30:12.940 [2024-07-23 01:51:25.741895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.940 [2024-07-23 01:51:25.742029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.940 [2024-07-23 01:51:25.742053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.940 qpair failed and we were unable to recover it.
00:30:12.940 [2024-07-23 01:51:25.742243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.940 [2024-07-23 01:51:25.742408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.940 [2024-07-23 01:51:25.742450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.940 qpair failed and we were unable to recover it.
00:30:12.940 [2024-07-23 01:51:25.742610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.940 [2024-07-23 01:51:25.742775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.940 [2024-07-23 01:51:25.742802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.940 qpair failed and we were unable to recover it.
00:30:12.940 [2024-07-23 01:51:25.742981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.940 [2024-07-23 01:51:25.743244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.940 [2024-07-23 01:51:25.743298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.940 qpair failed and we were unable to recover it.
00:30:12.940 [2024-07-23 01:51:25.743488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.940 [2024-07-23 01:51:25.743637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.940 [2024-07-23 01:51:25.743664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.940 qpair failed and we were unable to recover it.
00:30:12.940 [2024-07-23 01:51:25.743858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.940 [2024-07-23 01:51:25.744166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.940 [2024-07-23 01:51:25.744218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.940 qpair failed and we were unable to recover it.
00:30:12.940 [2024-07-23 01:51:25.744431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.940 [2024-07-23 01:51:25.744636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.940 [2024-07-23 01:51:25.744664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.940 qpair failed and we were unable to recover it.
00:30:12.940 [2024-07-23 01:51:25.744846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.940 [2024-07-23 01:51:25.745011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.940 [2024-07-23 01:51:25.745035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.940 qpair failed and we were unable to recover it.
00:30:12.940 [2024-07-23 01:51:25.745169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.940 [2024-07-23 01:51:25.745365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.940 [2024-07-23 01:51:25.745428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.940 qpair failed and we were unable to recover it.
00:30:12.940 [2024-07-23 01:51:25.745628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.940 [2024-07-23 01:51:25.745774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.940 [2024-07-23 01:51:25.745798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.940 qpair failed and we were unable to recover it.
00:30:12.940 [2024-07-23 01:51:25.746005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.940 [2024-07-23 01:51:25.746177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.940 [2024-07-23 01:51:25.746202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.940 qpair failed and we were unable to recover it.
00:30:12.940 [2024-07-23 01:51:25.746397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.940 [2024-07-23 01:51:25.746580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.940 [2024-07-23 01:51:25.746608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.940 qpair failed and we were unable to recover it.
00:30:12.940 [2024-07-23 01:51:25.746807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.940 [2024-07-23 01:51:25.746994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.940 [2024-07-23 01:51:25.747018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.940 qpair failed and we were unable to recover it.
00:30:12.940 [2024-07-23 01:51:25.747200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.940 [2024-07-23 01:51:25.747408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.940 [2024-07-23 01:51:25.747435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.940 qpair failed and we were unable to recover it.
00:30:12.940 [2024-07-23 01:51:25.747592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.940 [2024-07-23 01:51:25.747791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.940 [2024-07-23 01:51:25.747819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.940 qpair failed and we were unable to recover it.
00:30:12.940 [2024-07-23 01:51:25.748030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.940 [2024-07-23 01:51:25.748249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.940 [2024-07-23 01:51:25.748298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.940 qpair failed and we were unable to recover it.
00:30:12.940 [2024-07-23 01:51:25.748519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.940 [2024-07-23 01:51:25.748708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.941 [2024-07-23 01:51:25.748736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.941 qpair failed and we were unable to recover it.
00:30:12.941 [2024-07-23 01:51:25.748931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.941 [2024-07-23 01:51:25.749095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.941 [2024-07-23 01:51:25.749119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.941 qpair failed and we were unable to recover it.
00:30:12.941 [2024-07-23 01:51:25.749358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.941 [2024-07-23 01:51:25.749503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.941 [2024-07-23 01:51:25.749528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.941 qpair failed and we were unable to recover it.
00:30:12.941 [2024-07-23 01:51:25.749683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.941 [2024-07-23 01:51:25.749899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.941 [2024-07-23 01:51:25.749924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.941 qpair failed and we were unable to recover it.
00:30:12.941 [2024-07-23 01:51:25.750088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.941 [2024-07-23 01:51:25.750249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.941 [2024-07-23 01:51:25.750287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.941 qpair failed and we were unable to recover it.
00:30:12.941 [2024-07-23 01:51:25.750463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.941 [2024-07-23 01:51:25.750644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.941 [2024-07-23 01:51:25.750685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.941 qpair failed and we were unable to recover it.
00:30:12.941 [2024-07-23 01:51:25.750828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.941 [2024-07-23 01:51:25.750999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.941 [2024-07-23 01:51:25.751039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.941 qpair failed and we were unable to recover it.
00:30:12.941 [2024-07-23 01:51:25.751215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.941 [2024-07-23 01:51:25.751418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.941 [2024-07-23 01:51:25.751444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.941 qpair failed and we were unable to recover it. 00:30:12.941 [2024-07-23 01:51:25.751667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.941 [2024-07-23 01:51:25.751834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.941 [2024-07-23 01:51:25.751861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.941 qpair failed and we were unable to recover it. 00:30:12.941 [2024-07-23 01:51:25.752125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.941 [2024-07-23 01:51:25.752294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.941 [2024-07-23 01:51:25.752320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.941 qpair failed and we were unable to recover it. 00:30:12.941 [2024-07-23 01:51:25.752522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.941 [2024-07-23 01:51:25.752738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.941 [2024-07-23 01:51:25.752762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.941 qpair failed and we were unable to recover it. 
00:30:12.941 [2024-07-23 01:51:25.752895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.941 [2024-07-23 01:51:25.753058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.941 [2024-07-23 01:51:25.753082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.941 qpair failed and we were unable to recover it. 00:30:12.941 [2024-07-23 01:51:25.753259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.941 [2024-07-23 01:51:25.753413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.941 [2024-07-23 01:51:25.753438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.941 qpair failed and we were unable to recover it. 00:30:12.941 [2024-07-23 01:51:25.753569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.941 [2024-07-23 01:51:25.753754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.941 [2024-07-23 01:51:25.753779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.941 qpair failed and we were unable to recover it. 00:30:12.941 [2024-07-23 01:51:25.753910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.941 [2024-07-23 01:51:25.754105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.941 [2024-07-23 01:51:25.754132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.941 qpair failed and we were unable to recover it. 
00:30:12.941 [2024-07-23 01:51:25.754306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.941 [2024-07-23 01:51:25.754452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.941 [2024-07-23 01:51:25.754479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.941 qpair failed and we were unable to recover it. 00:30:12.941 [2024-07-23 01:51:25.754692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.941 [2024-07-23 01:51:25.754859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.941 [2024-07-23 01:51:25.754944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.941 qpair failed and we were unable to recover it. 00:30:12.941 [2024-07-23 01:51:25.755135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.941 [2024-07-23 01:51:25.755273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.941 [2024-07-23 01:51:25.755297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.941 qpair failed and we were unable to recover it. 00:30:12.941 [2024-07-23 01:51:25.755455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.941 [2024-07-23 01:51:25.755637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.941 [2024-07-23 01:51:25.755665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.941 qpair failed and we were unable to recover it. 
00:30:12.941 [2024-07-23 01:51:25.755838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.941 [2024-07-23 01:51:25.756001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.941 [2024-07-23 01:51:25.756027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.941 qpair failed and we were unable to recover it. 00:30:12.941 [2024-07-23 01:51:25.756232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.941 [2024-07-23 01:51:25.756441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.941 [2024-07-23 01:51:25.756530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.941 qpair failed and we were unable to recover it. 00:30:12.941 [2024-07-23 01:51:25.756790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.941 [2024-07-23 01:51:25.757102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.941 [2024-07-23 01:51:25.757154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.941 qpair failed and we were unable to recover it. 00:30:12.941 [2024-07-23 01:51:25.757312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.941 [2024-07-23 01:51:25.757487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.941 [2024-07-23 01:51:25.757514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.941 qpair failed and we were unable to recover it. 
00:30:12.941 [2024-07-23 01:51:25.757705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.941 [2024-07-23 01:51:25.757938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.941 [2024-07-23 01:51:25.758000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.941 qpair failed and we were unable to recover it. 00:30:12.941 [2024-07-23 01:51:25.758214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.941 [2024-07-23 01:51:25.758449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.941 [2024-07-23 01:51:25.758509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.941 qpair failed and we were unable to recover it. 00:30:12.941 [2024-07-23 01:51:25.758668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.941 [2024-07-23 01:51:25.758943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.941 [2024-07-23 01:51:25.758995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.941 qpair failed and we were unable to recover it. 00:30:12.941 [2024-07-23 01:51:25.759175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.941 [2024-07-23 01:51:25.759317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.941 [2024-07-23 01:51:25.759345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.941 qpair failed and we were unable to recover it. 
00:30:12.941 [2024-07-23 01:51:25.759525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.941 [2024-07-23 01:51:25.759809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.941 [2024-07-23 01:51:25.759866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.941 qpair failed and we were unable to recover it. 00:30:12.942 [2024-07-23 01:51:25.760054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.942 [2024-07-23 01:51:25.760193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.942 [2024-07-23 01:51:25.760234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.942 qpair failed and we were unable to recover it. 00:30:12.942 [2024-07-23 01:51:25.760516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.942 [2024-07-23 01:51:25.760759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.942 [2024-07-23 01:51:25.760809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.942 qpair failed and we were unable to recover it. 00:30:12.942 [2024-07-23 01:51:25.760993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.942 [2024-07-23 01:51:25.761182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.942 [2024-07-23 01:51:25.761223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.942 qpair failed and we were unable to recover it. 
00:30:12.942 [2024-07-23 01:51:25.761407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.942 [2024-07-23 01:51:25.761592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.942 [2024-07-23 01:51:25.761626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.942 qpair failed and we were unable to recover it. 00:30:12.942 [2024-07-23 01:51:25.761819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.942 [2024-07-23 01:51:25.761999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.942 [2024-07-23 01:51:25.762026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.942 qpair failed and we were unable to recover it. 00:30:12.942 [2024-07-23 01:51:25.762188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.942 [2024-07-23 01:51:25.762369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.942 [2024-07-23 01:51:25.762396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.942 qpair failed and we were unable to recover it. 00:30:12.942 [2024-07-23 01:51:25.762553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.942 [2024-07-23 01:51:25.762734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.942 [2024-07-23 01:51:25.762761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.942 qpair failed and we were unable to recover it. 
00:30:12.942 [2024-07-23 01:51:25.762945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.942 [2024-07-23 01:51:25.763118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.942 [2024-07-23 01:51:25.763145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.942 qpair failed and we were unable to recover it. 00:30:12.942 [2024-07-23 01:51:25.763308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.942 [2024-07-23 01:51:25.763496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.942 [2024-07-23 01:51:25.763523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.942 qpair failed and we were unable to recover it. 00:30:12.942 [2024-07-23 01:51:25.763705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.942 [2024-07-23 01:51:25.763860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.942 [2024-07-23 01:51:25.763886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.942 qpair failed and we were unable to recover it. 00:30:12.942 [2024-07-23 01:51:25.764067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.942 [2024-07-23 01:51:25.764253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.942 [2024-07-23 01:51:25.764277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.942 qpair failed and we were unable to recover it. 
00:30:12.942 [2024-07-23 01:51:25.764413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.942 [2024-07-23 01:51:25.764578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.942 [2024-07-23 01:51:25.764602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.942 qpair failed and we were unable to recover it. 00:30:12.942 [2024-07-23 01:51:25.764813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.942 [2024-07-23 01:51:25.764978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.942 [2024-07-23 01:51:25.765002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.942 qpair failed and we were unable to recover it. 00:30:12.942 [2024-07-23 01:51:25.765212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.942 [2024-07-23 01:51:25.765369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.942 [2024-07-23 01:51:25.765408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.942 qpair failed and we were unable to recover it. 00:30:12.942 [2024-07-23 01:51:25.765623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.942 [2024-07-23 01:51:25.765785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.942 [2024-07-23 01:51:25.765826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.942 qpair failed and we were unable to recover it. 
00:30:12.942 [2024-07-23 01:51:25.766005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.942 [2024-07-23 01:51:25.766307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.942 [2024-07-23 01:51:25.766357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.942 qpair failed and we were unable to recover it. 00:30:12.942 [2024-07-23 01:51:25.766543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.942 [2024-07-23 01:51:25.766755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.942 [2024-07-23 01:51:25.766781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.942 qpair failed and we were unable to recover it. 00:30:12.942 [2024-07-23 01:51:25.766976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.942 [2024-07-23 01:51:25.767157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.942 [2024-07-23 01:51:25.767184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.942 qpair failed and we were unable to recover it. 00:30:12.942 [2024-07-23 01:51:25.767374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.942 [2024-07-23 01:51:25.767557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.942 [2024-07-23 01:51:25.767584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.942 qpair failed and we were unable to recover it. 
00:30:12.942 [2024-07-23 01:51:25.767787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.942 [2024-07-23 01:51:25.767962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.942 [2024-07-23 01:51:25.767986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.942 qpair failed and we were unable to recover it. 00:30:12.942 [2024-07-23 01:51:25.768153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.942 [2024-07-23 01:51:25.768431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.942 [2024-07-23 01:51:25.768491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.942 qpair failed and we were unable to recover it. 00:30:12.942 [2024-07-23 01:51:25.768681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.942 [2024-07-23 01:51:25.768884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.942 [2024-07-23 01:51:25.768922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.942 qpair failed and we were unable to recover it. 00:30:12.942 [2024-07-23 01:51:25.769076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.942 [2024-07-23 01:51:25.769296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.942 [2024-07-23 01:51:25.769320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.942 qpair failed and we were unable to recover it. 
00:30:12.942 [2024-07-23 01:51:25.769461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.942 [2024-07-23 01:51:25.769659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.942 [2024-07-23 01:51:25.769703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.942 qpair failed and we were unable to recover it. 00:30:12.942 [2024-07-23 01:51:25.769897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.942 [2024-07-23 01:51:25.770142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.942 [2024-07-23 01:51:25.770165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.942 qpair failed and we were unable to recover it. 00:30:12.942 [2024-07-23 01:51:25.770343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.942 [2024-07-23 01:51:25.770510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.942 [2024-07-23 01:51:25.770552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.942 qpair failed and we were unable to recover it. 00:30:12.942 [2024-07-23 01:51:25.770764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.942 [2024-07-23 01:51:25.770902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.942 [2024-07-23 01:51:25.770927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.942 qpair failed and we were unable to recover it. 
00:30:12.942 [2024-07-23 01:51:25.771095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.943 [2024-07-23 01:51:25.771287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.943 [2024-07-23 01:51:25.771311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.943 qpair failed and we were unable to recover it. 00:30:12.943 [2024-07-23 01:51:25.771509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.943 [2024-07-23 01:51:25.771714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.943 [2024-07-23 01:51:25.771741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.943 qpair failed and we were unable to recover it. 00:30:12.943 [2024-07-23 01:51:25.771889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.943 [2024-07-23 01:51:25.772036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.943 [2024-07-23 01:51:25.772063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.943 qpair failed and we were unable to recover it. 00:30:12.943 [2024-07-23 01:51:25.772259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.943 [2024-07-23 01:51:25.772420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.943 [2024-07-23 01:51:25.772444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.943 qpair failed and we were unable to recover it. 
00:30:12.943 [2024-07-23 01:51:25.772588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.943 [2024-07-23 01:51:25.772784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.943 [2024-07-23 01:51:25.772813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.943 qpair failed and we were unable to recover it. 00:30:12.943 [2024-07-23 01:51:25.772999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.943 [2024-07-23 01:51:25.773141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.943 [2024-07-23 01:51:25.773165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.943 qpair failed and we were unable to recover it. 00:30:12.943 [2024-07-23 01:51:25.773333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.943 [2024-07-23 01:51:25.773549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.943 [2024-07-23 01:51:25.773577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.943 qpair failed and we were unable to recover it. 00:30:12.943 [2024-07-23 01:51:25.773772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.943 [2024-07-23 01:51:25.773924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.943 [2024-07-23 01:51:25.773951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.943 qpair failed and we were unable to recover it. 
00:30:12.943 [2024-07-23 01:51:25.774131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.943 [2024-07-23 01:51:25.774314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.943 [2024-07-23 01:51:25.774342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.943 qpair failed and we were unable to recover it. 00:30:12.943 [2024-07-23 01:51:25.774537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.943 [2024-07-23 01:51:25.774692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.943 [2024-07-23 01:51:25.774733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.943 qpair failed and we were unable to recover it. 00:30:12.943 [2024-07-23 01:51:25.774919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.943 [2024-07-23 01:51:25.775126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.943 [2024-07-23 01:51:25.775153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.943 qpair failed and we were unable to recover it. 00:30:12.943 [2024-07-23 01:51:25.775344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.943 [2024-07-23 01:51:25.775502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.943 [2024-07-23 01:51:25.775544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.943 qpair failed and we were unable to recover it. 
00:30:12.943 [2024-07-23 01:51:25.775737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.943 [2024-07-23 01:51:25.775956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.943 [2024-07-23 01:51:25.776007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.943 qpair failed and we were unable to recover it. 00:30:12.943 [2024-07-23 01:51:25.776235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.943 [2024-07-23 01:51:25.776419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.943 [2024-07-23 01:51:25.776476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.943 qpair failed and we were unable to recover it. 00:30:12.943 [2024-07-23 01:51:25.776670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.943 [2024-07-23 01:51:25.776832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.943 [2024-07-23 01:51:25.776857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.943 qpair failed and we were unable to recover it. 00:30:12.943 [2024-07-23 01:51:25.777036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.943 [2024-07-23 01:51:25.777302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.943 [2024-07-23 01:51:25.777350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.943 qpair failed and we were unable to recover it. 
00:30:12.943 [2024-07-23 01:51:25.777556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.943 [2024-07-23 01:51:25.777761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.943 [2024-07-23 01:51:25.777793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.943 qpair failed and we were unable to recover it. 00:30:12.943 [2024-07-23 01:51:25.777954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.943 [2024-07-23 01:51:25.778110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.943 [2024-07-23 01:51:25.778135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.943 qpair failed and we were unable to recover it. 00:30:12.943 [2024-07-23 01:51:25.778386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.943 [2024-07-23 01:51:25.778556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.943 [2024-07-23 01:51:25.778583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.943 qpair failed and we were unable to recover it. 00:30:12.943 [2024-07-23 01:51:25.778747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.943 [2024-07-23 01:51:25.778932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.943 [2024-07-23 01:51:25.778958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.943 qpair failed and we were unable to recover it. 
00:30:12.943 [2024-07-23 01:51:25.779104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.943 [2024-07-23 01:51:25.779285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.943 [2024-07-23 01:51:25.779312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.943 qpair failed and we were unable to recover it. 00:30:12.943 [2024-07-23 01:51:25.779503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.943 [2024-07-23 01:51:25.779667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.943 [2024-07-23 01:51:25.779693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.943 qpair failed and we were unable to recover it. 00:30:12.943 [2024-07-23 01:51:25.779855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.943 [2024-07-23 01:51:25.780064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.944 [2024-07-23 01:51:25.780088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.944 qpair failed and we were unable to recover it. 00:30:12.944 [2024-07-23 01:51:25.780270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.944 [2024-07-23 01:51:25.780467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.944 [2024-07-23 01:51:25.780492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.944 qpair failed and we were unable to recover it. 
00:30:12.944 [2024-07-23 01:51:25.780661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.944 [2024-07-23 01:51:25.780856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.944 [2024-07-23 01:51:25.780880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.944 qpair failed and we were unable to recover it.
00:30:12.944 [2024-07-23 01:51:25.781047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.944 [2024-07-23 01:51:25.781262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.944 [2024-07-23 01:51:25.781289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.944 qpair failed and we were unable to recover it.
00:30:12.944 [2024-07-23 01:51:25.781469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.944 [2024-07-23 01:51:25.781742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.944 [2024-07-23 01:51:25.781767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.944 qpair failed and we were unable to recover it.
00:30:12.944 [2024-07-23 01:51:25.781938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.944 [2024-07-23 01:51:25.782072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.944 [2024-07-23 01:51:25.782096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.944 qpair failed and we were unable to recover it.
00:30:12.944 [2024-07-23 01:51:25.782284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.944 [2024-07-23 01:51:25.782463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.944 [2024-07-23 01:51:25.782490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.944 qpair failed and we were unable to recover it.
00:30:12.944 [2024-07-23 01:51:25.782681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.944 [2024-07-23 01:51:25.782882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.944 [2024-07-23 01:51:25.782909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.944 qpair failed and we were unable to recover it.
00:30:12.944 [2024-07-23 01:51:25.783066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.944 [2024-07-23 01:51:25.783269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.944 [2024-07-23 01:51:25.783296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.944 qpair failed and we were unable to recover it.
00:30:12.944 [2024-07-23 01:51:25.783508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.944 [2024-07-23 01:51:25.783698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.944 [2024-07-23 01:51:25.783725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.944 qpair failed and we were unable to recover it.
00:30:12.944 [2024-07-23 01:51:25.783890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.944 [2024-07-23 01:51:25.784053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.944 [2024-07-23 01:51:25.784077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.944 qpair failed and we were unable to recover it.
00:30:12.944 [2024-07-23 01:51:25.784306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.944 [2024-07-23 01:51:25.784469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.944 [2024-07-23 01:51:25.784512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.944 qpair failed and we were unable to recover it.
00:30:12.944 [2024-07-23 01:51:25.784734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.944 [2024-07-23 01:51:25.784915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.944 [2024-07-23 01:51:25.784942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.944 qpair failed and we were unable to recover it.
00:30:12.944 [2024-07-23 01:51:25.785100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.944 [2024-07-23 01:51:25.785287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.944 [2024-07-23 01:51:25.785313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.944 qpair failed and we were unable to recover it.
00:30:12.944 [2024-07-23 01:51:25.785481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.944 [2024-07-23 01:51:25.785704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.944 [2024-07-23 01:51:25.785732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.944 qpair failed and we were unable to recover it.
00:30:12.944 [2024-07-23 01:51:25.785922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.944 [2024-07-23 01:51:25.786060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.944 [2024-07-23 01:51:25.786085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.944 qpair failed and we were unable to recover it.
00:30:12.944 [2024-07-23 01:51:25.786244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.944 [2024-07-23 01:51:25.786438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.944 [2024-07-23 01:51:25.786467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.944 qpair failed and we were unable to recover it.
00:30:12.944 [2024-07-23 01:51:25.786688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.944 [2024-07-23 01:51:25.786823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.944 [2024-07-23 01:51:25.786847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.944 qpair failed and we were unable to recover it.
00:30:12.944 [2024-07-23 01:51:25.787039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.944 [2024-07-23 01:51:25.787305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.944 [2024-07-23 01:51:25.787333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.944 qpair failed and we were unable to recover it.
00:30:12.944 [2024-07-23 01:51:25.787517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.944 [2024-07-23 01:51:25.787688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.944 [2024-07-23 01:51:25.787715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.944 qpair failed and we were unable to recover it.
00:30:12.944 [2024-07-23 01:51:25.787872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.944 [2024-07-23 01:51:25.788033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.944 [2024-07-23 01:51:25.788060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.944 qpair failed and we were unable to recover it.
00:30:12.944 [2024-07-23 01:51:25.788207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.944 [2024-07-23 01:51:25.788384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.944 [2024-07-23 01:51:25.788410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.944 qpair failed and we were unable to recover it.
00:30:12.944 [2024-07-23 01:51:25.788598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.944 [2024-07-23 01:51:25.788770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.944 [2024-07-23 01:51:25.788795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.944 qpair failed and we were unable to recover it.
00:30:12.944 [2024-07-23 01:51:25.788962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.944 [2024-07-23 01:51:25.789122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.944 [2024-07-23 01:51:25.789163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.944 qpair failed and we were unable to recover it.
00:30:12.944 [2024-07-23 01:51:25.789429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.944 [2024-07-23 01:51:25.789618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.944 [2024-07-23 01:51:25.789646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.944 qpair failed and we were unable to recover it.
00:30:12.944 [2024-07-23 01:51:25.789820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.944 [2024-07-23 01:51:25.790015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.944 [2024-07-23 01:51:25.790040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.944 qpair failed and we were unable to recover it.
00:30:12.944 [2024-07-23 01:51:25.790218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.944 [2024-07-23 01:51:25.790507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.944 [2024-07-23 01:51:25.790563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.944 qpair failed and we were unable to recover it.
00:30:12.944 [2024-07-23 01:51:25.790786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.944 [2024-07-23 01:51:25.790974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.944 [2024-07-23 01:51:25.791001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.944 qpair failed and we were unable to recover it.
00:30:12.945 [2024-07-23 01:51:25.791214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.945 [2024-07-23 01:51:25.791426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.945 [2024-07-23 01:51:25.791453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.945 qpair failed and we were unable to recover it.
00:30:12.945 [2024-07-23 01:51:25.791649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.945 [2024-07-23 01:51:25.791824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.945 [2024-07-23 01:51:25.791849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.945 qpair failed and we were unable to recover it.
00:30:12.945 [2024-07-23 01:51:25.792046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.945 [2024-07-23 01:51:25.792231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.945 [2024-07-23 01:51:25.792258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.945 qpair failed and we were unable to recover it.
00:30:12.945 [2024-07-23 01:51:25.792413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.945 [2024-07-23 01:51:25.792641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.945 [2024-07-23 01:51:25.792669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.945 qpair failed and we were unable to recover it.
00:30:12.945 [2024-07-23 01:51:25.792855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.945 [2024-07-23 01:51:25.793010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.945 [2024-07-23 01:51:25.793037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.945 qpair failed and we were unable to recover it.
00:30:12.945 [2024-07-23 01:51:25.793248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.945 [2024-07-23 01:51:25.793448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.945 [2024-07-23 01:51:25.793498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.945 qpair failed and we were unable to recover it.
00:30:12.945 [2024-07-23 01:51:25.793687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.945 [2024-07-23 01:51:25.793863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.945 [2024-07-23 01:51:25.793890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.945 qpair failed and we were unable to recover it.
00:30:12.945 [2024-07-23 01:51:25.794074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.945 [2024-07-23 01:51:25.794268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.945 [2024-07-23 01:51:25.794311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.945 qpair failed and we were unable to recover it.
00:30:12.945 [2024-07-23 01:51:25.794490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.945 [2024-07-23 01:51:25.794651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.945 [2024-07-23 01:51:25.794679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.945 qpair failed and we were unable to recover it.
00:30:12.945 [2024-07-23 01:51:25.794862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.945 [2024-07-23 01:51:25.795029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.945 [2024-07-23 01:51:25.795055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.945 qpair failed and we were unable to recover it.
00:30:12.945 [2024-07-23 01:51:25.795206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.945 [2024-07-23 01:51:25.795395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.945 [2024-07-23 01:51:25.795422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.945 qpair failed and we were unable to recover it.
00:30:12.945 [2024-07-23 01:51:25.795581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.945 [2024-07-23 01:51:25.795773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.945 [2024-07-23 01:51:25.795800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.945 qpair failed and we were unable to recover it.
00:30:12.945 [2024-07-23 01:51:25.795963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.945 [2024-07-23 01:51:25.796098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.945 [2024-07-23 01:51:25.796122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.945 qpair failed and we were unable to recover it.
00:30:12.945 [2024-07-23 01:51:25.796332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.945 [2024-07-23 01:51:25.796506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.945 [2024-07-23 01:51:25.796533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.945 qpair failed and we were unable to recover it.
00:30:12.945 [2024-07-23 01:51:25.796685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.945 [2024-07-23 01:51:25.796876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.945 [2024-07-23 01:51:25.796900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.945 qpair failed and we were unable to recover it.
00:30:12.945 [2024-07-23 01:51:25.797063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.945 [2024-07-23 01:51:25.797273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.945 [2024-07-23 01:51:25.797323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.945 qpair failed and we were unable to recover it.
00:30:12.945 [2024-07-23 01:51:25.797545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.945 [2024-07-23 01:51:25.797757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.945 [2024-07-23 01:51:25.797782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.945 qpair failed and we were unable to recover it.
00:30:12.945 [2024-07-23 01:51:25.797975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.945 [2024-07-23 01:51:25.798123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.945 [2024-07-23 01:51:25.798154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.945 qpair failed and we were unable to recover it.
00:30:12.945 [2024-07-23 01:51:25.798352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.945 [2024-07-23 01:51:25.798554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.945 [2024-07-23 01:51:25.798581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.945 qpair failed and we were unable to recover it.
00:30:12.945 [2024-07-23 01:51:25.798775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.945 [2024-07-23 01:51:25.798962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.945 [2024-07-23 01:51:25.798989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.945 qpair failed and we were unable to recover it.
00:30:12.945 [2024-07-23 01:51:25.799196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.945 [2024-07-23 01:51:25.799399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.945 [2024-07-23 01:51:25.799426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.945 qpair failed and we were unable to recover it.
00:30:12.945 [2024-07-23 01:51:25.799601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.945 [2024-07-23 01:51:25.799765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.945 [2024-07-23 01:51:25.799792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.945 qpair failed and we were unable to recover it.
00:30:12.945 [2024-07-23 01:51:25.799961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.945 [2024-07-23 01:51:25.800126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.945 [2024-07-23 01:51:25.800151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.945 qpair failed and we were unable to recover it.
00:30:12.945 [2024-07-23 01:51:25.800316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.945 [2024-07-23 01:51:25.800498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.945 [2024-07-23 01:51:25.800526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.945 qpair failed and we were unable to recover it.
00:30:12.945 [2024-07-23 01:51:25.800698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.945 [2024-07-23 01:51:25.800863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.945 [2024-07-23 01:51:25.800906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.945 qpair failed and we were unable to recover it.
00:30:12.945 [2024-07-23 01:51:25.801113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.945 [2024-07-23 01:51:25.801269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.945 [2024-07-23 01:51:25.801298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.945 qpair failed and we were unable to recover it.
00:30:12.945 [2024-07-23 01:51:25.801501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.945 [2024-07-23 01:51:25.801673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.945 [2024-07-23 01:51:25.801701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.945 qpair failed and we were unable to recover it.
00:30:12.945 [2024-07-23 01:51:25.801865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.945 [2024-07-23 01:51:25.802005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.946 [2024-07-23 01:51:25.802029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.946 qpair failed and we were unable to recover it.
00:30:12.946 [2024-07-23 01:51:25.802229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.946 [2024-07-23 01:51:25.802462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.946 [2024-07-23 01:51:25.802488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.946 qpair failed and we were unable to recover it.
00:30:12.946 [2024-07-23 01:51:25.802704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.946 [2024-07-23 01:51:25.802858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.946 [2024-07-23 01:51:25.802885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.946 qpair failed and we were unable to recover it.
00:30:12.946 [2024-07-23 01:51:25.803064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.946 [2024-07-23 01:51:25.803226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.946 [2024-07-23 01:51:25.803250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.946 qpair failed and we were unable to recover it.
00:30:12.946 [2024-07-23 01:51:25.803418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.946 [2024-07-23 01:51:25.803572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.946 [2024-07-23 01:51:25.803598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.946 qpair failed and we were unable to recover it.
00:30:12.946 [2024-07-23 01:51:25.803762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.946 [2024-07-23 01:51:25.803948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.946 [2024-07-23 01:51:25.803973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.946 qpair failed and we were unable to recover it.
00:30:12.946 [2024-07-23 01:51:25.804144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.946 [2024-07-23 01:51:25.804426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.946 [2024-07-23 01:51:25.804488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.946 qpair failed and we were unable to recover it.
00:30:12.946 [2024-07-23 01:51:25.804673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.946 [2024-07-23 01:51:25.804820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.946 [2024-07-23 01:51:25.804859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.946 qpair failed and we were unable to recover it.
00:30:12.946 [2024-07-23 01:51:25.805027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.946 [2024-07-23 01:51:25.805167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.946 [2024-07-23 01:51:25.805191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.946 qpair failed and we were unable to recover it.
00:30:12.946 [2024-07-23 01:51:25.805376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.946 [2024-07-23 01:51:25.805551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.946 [2024-07-23 01:51:25.805578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.946 qpair failed and we were unable to recover it.
00:30:12.946 [2024-07-23 01:51:25.805738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.946 [2024-07-23 01:51:25.805931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.946 [2024-07-23 01:51:25.805956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.946 qpair failed and we were unable to recover it.
00:30:12.946 [2024-07-23 01:51:25.806123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.946 [2024-07-23 01:51:25.806306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.946 [2024-07-23 01:51:25.806333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.946 qpair failed and we were unable to recover it.
00:30:12.946 [2024-07-23 01:51:25.806541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.946 [2024-07-23 01:51:25.806723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.946 [2024-07-23 01:51:25.806753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.946 qpair failed and we were unable to recover it.
00:30:12.946 [2024-07-23 01:51:25.806937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.946 [2024-07-23 01:51:25.807082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.946 [2024-07-23 01:51:25.807109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.946 qpair failed and we were unable to recover it.
00:30:12.946 [2024-07-23 01:51:25.807259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.946 [2024-07-23 01:51:25.807442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.946 [2024-07-23 01:51:25.807469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.946 qpair failed and we were unable to recover it.
00:30:12.946 [2024-07-23 01:51:25.807647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.946 [2024-07-23 01:51:25.807852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.946 [2024-07-23 01:51:25.807879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.946 qpair failed and we were unable to recover it.
00:30:12.946 [2024-07-23 01:51:25.808080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.946 [2024-07-23 01:51:25.808239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.946 [2024-07-23 01:51:25.808278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.946 qpair failed and we were unable to recover it.
00:30:12.946 [2024-07-23 01:51:25.808485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.946 [2024-07-23 01:51:25.808635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.946 [2024-07-23 01:51:25.808663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.946 qpair failed and we were unable to recover it.
00:30:12.946 [2024-07-23 01:51:25.808847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.946 [2024-07-23 01:51:25.809058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:12.946 [2024-07-23 01:51:25.809112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:12.946 qpair failed and we were unable to recover it.
00:30:12.946 [2024-07-23 01:51:25.809291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.946 [2024-07-23 01:51:25.809442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.946 [2024-07-23 01:51:25.809469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.946 qpair failed and we were unable to recover it. 00:30:12.946 [2024-07-23 01:51:25.809656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.946 [2024-07-23 01:51:25.809794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.946 [2024-07-23 01:51:25.809841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.946 qpair failed and we were unable to recover it. 00:30:12.946 [2024-07-23 01:51:25.810017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.946 [2024-07-23 01:51:25.810230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.946 [2024-07-23 01:51:25.810254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.946 qpair failed and we were unable to recover it. 00:30:12.946 [2024-07-23 01:51:25.810388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.946 [2024-07-23 01:51:25.810520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.946 [2024-07-23 01:51:25.810546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.946 qpair failed and we were unable to recover it. 
00:30:12.946 [2024-07-23 01:51:25.810736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.946 [2024-07-23 01:51:25.810899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.946 [2024-07-23 01:51:25.810923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.946 qpair failed and we were unable to recover it. 00:30:12.946 [2024-07-23 01:51:25.811078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.946 [2024-07-23 01:51:25.811258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.946 [2024-07-23 01:51:25.811285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.946 qpair failed and we were unable to recover it. 00:30:12.946 [2024-07-23 01:51:25.811470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.946 [2024-07-23 01:51:25.811712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.946 [2024-07-23 01:51:25.811737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.946 qpair failed and we were unable to recover it. 00:30:12.946 [2024-07-23 01:51:25.811943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.946 [2024-07-23 01:51:25.812168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.946 [2024-07-23 01:51:25.812218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.946 qpair failed and we were unable to recover it. 
00:30:12.946 [2024-07-23 01:51:25.812408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.946 [2024-07-23 01:51:25.812597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.946 [2024-07-23 01:51:25.812627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.946 qpair failed and we were unable to recover it. 00:30:12.946 [2024-07-23 01:51:25.812772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.947 [2024-07-23 01:51:25.812948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.947 [2024-07-23 01:51:25.812975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.947 qpair failed and we were unable to recover it. 00:30:12.947 [2024-07-23 01:51:25.813157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.947 [2024-07-23 01:51:25.813446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.947 [2024-07-23 01:51:25.813503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.947 qpair failed and we were unable to recover it. 00:30:12.947 [2024-07-23 01:51:25.813667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.947 [2024-07-23 01:51:25.813876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.947 [2024-07-23 01:51:25.813903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.947 qpair failed and we were unable to recover it. 
00:30:12.947 [2024-07-23 01:51:25.814087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.947 [2024-07-23 01:51:25.814282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.947 [2024-07-23 01:51:25.814338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.947 qpair failed and we were unable to recover it. 00:30:12.947 [2024-07-23 01:51:25.814530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.947 [2024-07-23 01:51:25.814716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.947 [2024-07-23 01:51:25.814741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.947 qpair failed and we were unable to recover it. 00:30:12.947 [2024-07-23 01:51:25.815062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.947 [2024-07-23 01:51:25.815372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.947 [2024-07-23 01:51:25.815423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.947 qpair failed and we were unable to recover it. 00:30:12.947 [2024-07-23 01:51:25.815605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.947 [2024-07-23 01:51:25.815833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.947 [2024-07-23 01:51:25.815857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.947 qpair failed and we were unable to recover it. 
00:30:12.947 [2024-07-23 01:51:25.815997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.947 [2024-07-23 01:51:25.816201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.947 [2024-07-23 01:51:25.816228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.947 qpair failed and we were unable to recover it. 00:30:12.947 [2024-07-23 01:51:25.816448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.947 [2024-07-23 01:51:25.816635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.947 [2024-07-23 01:51:25.816666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.947 qpair failed and we were unable to recover it. 00:30:12.947 [2024-07-23 01:51:25.816858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.947 [2024-07-23 01:51:25.817006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.947 [2024-07-23 01:51:25.817030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.947 qpair failed and we were unable to recover it. 00:30:12.947 [2024-07-23 01:51:25.817172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.947 [2024-07-23 01:51:25.817481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.947 [2024-07-23 01:51:25.817532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.947 qpair failed and we were unable to recover it. 
00:30:12.947 [2024-07-23 01:51:25.817727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.947 [2024-07-23 01:51:25.817895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.947 [2024-07-23 01:51:25.817919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.947 qpair failed and we were unable to recover it. 00:30:12.947 [2024-07-23 01:51:25.818083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.947 [2024-07-23 01:51:25.818257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.947 [2024-07-23 01:51:25.818284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.947 qpair failed and we were unable to recover it. 00:30:12.947 [2024-07-23 01:51:25.818488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.947 [2024-07-23 01:51:25.818668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.947 [2024-07-23 01:51:25.818696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.947 qpair failed and we were unable to recover it. 00:30:12.947 [2024-07-23 01:51:25.818878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.947 [2024-07-23 01:51:25.819061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.947 [2024-07-23 01:51:25.819088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.947 qpair failed and we were unable to recover it. 
00:30:12.947 [2024-07-23 01:51:25.819237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.947 [2024-07-23 01:51:25.819378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.947 [2024-07-23 01:51:25.819405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.947 qpair failed and we were unable to recover it. 00:30:12.947 [2024-07-23 01:51:25.819592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.947 [2024-07-23 01:51:25.819782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.947 [2024-07-23 01:51:25.819810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.947 qpair failed and we were unable to recover it. 00:30:12.947 [2024-07-23 01:51:25.820039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.947 [2024-07-23 01:51:25.820332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.947 [2024-07-23 01:51:25.820380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.947 qpair failed and we were unable to recover it. 00:30:12.947 [2024-07-23 01:51:25.820571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.947 [2024-07-23 01:51:25.820710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.947 [2024-07-23 01:51:25.820735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.947 qpair failed and we were unable to recover it. 
00:30:12.947 [2024-07-23 01:51:25.820917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.947 [2024-07-23 01:51:25.821071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.947 [2024-07-23 01:51:25.821095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.947 qpair failed and we were unable to recover it. 00:30:12.947 [2024-07-23 01:51:25.821282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.947 [2024-07-23 01:51:25.821463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.947 [2024-07-23 01:51:25.821490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.947 qpair failed and we were unable to recover it. 00:30:12.947 [2024-07-23 01:51:25.821663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.947 [2024-07-23 01:51:25.821844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.947 [2024-07-23 01:51:25.821871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.947 qpair failed and we were unable to recover it. 00:30:12.947 [2024-07-23 01:51:25.822016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.947 [2024-07-23 01:51:25.822201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.947 [2024-07-23 01:51:25.822226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.947 qpair failed and we were unable to recover it. 
00:30:12.947 [2024-07-23 01:51:25.822362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.947 [2024-07-23 01:51:25.822535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.947 [2024-07-23 01:51:25.822559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.947 qpair failed and we were unable to recover it. 00:30:12.947 [2024-07-23 01:51:25.822730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.947 [2024-07-23 01:51:25.822863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.947 [2024-07-23 01:51:25.822902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.947 qpair failed and we were unable to recover it. 00:30:12.947 [2024-07-23 01:51:25.823080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.947 [2024-07-23 01:51:25.823236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.947 [2024-07-23 01:51:25.823265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.947 qpair failed and we were unable to recover it. 00:30:12.947 [2024-07-23 01:51:25.823423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.947 [2024-07-23 01:51:25.823582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.947 [2024-07-23 01:51:25.823610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.948 qpair failed and we were unable to recover it. 
00:30:12.948 [2024-07-23 01:51:25.823835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.948 [2024-07-23 01:51:25.823999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.948 [2024-07-23 01:51:25.824023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.948 qpair failed and we were unable to recover it. 00:30:12.948 [2024-07-23 01:51:25.824190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.948 [2024-07-23 01:51:25.824357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.948 [2024-07-23 01:51:25.824420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.948 qpair failed and we were unable to recover it. 00:30:12.948 [2024-07-23 01:51:25.824640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.948 [2024-07-23 01:51:25.824821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.948 [2024-07-23 01:51:25.824848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.948 qpair failed and we were unable to recover it. 00:30:12.948 [2024-07-23 01:51:25.825040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.948 [2024-07-23 01:51:25.825204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.948 [2024-07-23 01:51:25.825228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.948 qpair failed and we were unable to recover it. 
00:30:12.948 [2024-07-23 01:51:25.825437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.948 [2024-07-23 01:51:25.825581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.948 [2024-07-23 01:51:25.825608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.948 qpair failed and we were unable to recover it. 00:30:12.948 [2024-07-23 01:51:25.825805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.948 [2024-07-23 01:51:25.825948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.948 [2024-07-23 01:51:25.825987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.948 qpair failed and we were unable to recover it. 00:30:12.948 [2024-07-23 01:51:25.826177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.948 [2024-07-23 01:51:25.826343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.948 [2024-07-23 01:51:25.826367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.948 qpair failed and we were unable to recover it. 00:30:12.948 [2024-07-23 01:51:25.826565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.948 [2024-07-23 01:51:25.826764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.948 [2024-07-23 01:51:25.826792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.948 qpair failed and we were unable to recover it. 
00:30:12.948 [2024-07-23 01:51:25.826982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.948 [2024-07-23 01:51:25.827121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.948 [2024-07-23 01:51:25.827145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.948 qpair failed and we were unable to recover it. 00:30:12.948 [2024-07-23 01:51:25.827336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.948 [2024-07-23 01:51:25.827514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.948 [2024-07-23 01:51:25.827541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.948 qpair failed and we were unable to recover it. 00:30:12.948 [2024-07-23 01:51:25.827739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.948 [2024-07-23 01:51:25.827940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.948 [2024-07-23 01:51:25.827964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.948 qpair failed and we were unable to recover it. 00:30:12.948 [2024-07-23 01:51:25.828107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.948 [2024-07-23 01:51:25.828271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.948 [2024-07-23 01:51:25.828298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.948 qpair failed and we were unable to recover it. 
00:30:12.948 [2024-07-23 01:51:25.828479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.948 [2024-07-23 01:51:25.828662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.948 [2024-07-23 01:51:25.828689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.948 qpair failed and we were unable to recover it. 00:30:12.948 [2024-07-23 01:51:25.828839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.948 [2024-07-23 01:51:25.829032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.948 [2024-07-23 01:51:25.829056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.948 qpair failed and we were unable to recover it. 00:30:12.948 [2024-07-23 01:51:25.829190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.948 [2024-07-23 01:51:25.829355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.948 [2024-07-23 01:51:25.829379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.948 qpair failed and we were unable to recover it. 00:30:12.948 [2024-07-23 01:51:25.829575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.948 [2024-07-23 01:51:25.829778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.948 [2024-07-23 01:51:25.829803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.948 qpair failed and we were unable to recover it. 
00:30:12.948 [2024-07-23 01:51:25.829971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.948 [2024-07-23 01:51:25.830101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.948 [2024-07-23 01:51:25.830124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.948 qpair failed and we were unable to recover it. 00:30:12.948 [2024-07-23 01:51:25.830286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.948 [2024-07-23 01:51:25.830451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.948 [2024-07-23 01:51:25.830475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.948 qpair failed and we were unable to recover it. 00:30:12.948 [2024-07-23 01:51:25.830655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.948 [2024-07-23 01:51:25.830814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.948 [2024-07-23 01:51:25.830838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.948 qpair failed and we were unable to recover it. 00:30:12.948 [2024-07-23 01:51:25.831045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.948 [2024-07-23 01:51:25.831315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.948 [2024-07-23 01:51:25.831364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.948 qpair failed and we were unable to recover it. 
00:30:12.948 [2024-07-23 01:51:25.831580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.948 [2024-07-23 01:51:25.831768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.948 [2024-07-23 01:51:25.831798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.948 qpair failed and we were unable to recover it. 00:30:12.948 [2024-07-23 01:51:25.831981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.948 [2024-07-23 01:51:25.832157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.948 [2024-07-23 01:51:25.832184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.948 qpair failed and we were unable to recover it. 00:30:12.948 [2024-07-23 01:51:25.832354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.948 [2024-07-23 01:51:25.832528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.948 [2024-07-23 01:51:25.832554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.948 qpair failed and we were unable to recover it. 00:30:12.948 [2024-07-23 01:51:25.832736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.948 [2024-07-23 01:51:25.832946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.948 [2024-07-23 01:51:25.832973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.948 qpair failed and we were unable to recover it. 
00:30:12.948 [2024-07-23 01:51:25.833133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.949 [2024-07-23 01:51:25.833300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.949 [2024-07-23 01:51:25.833324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.949 qpair failed and we were unable to recover it. 00:30:12.949 [2024-07-23 01:51:25.833493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.949 [2024-07-23 01:51:25.833659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.949 [2024-07-23 01:51:25.833703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.949 qpair failed and we were unable to recover it. 00:30:12.949 [2024-07-23 01:51:25.833864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.949 [2024-07-23 01:51:25.834044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.949 [2024-07-23 01:51:25.834070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.949 qpair failed and we were unable to recover it. 00:30:12.949 [2024-07-23 01:51:25.834245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.949 [2024-07-23 01:51:25.834427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.949 [2024-07-23 01:51:25.834458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.949 qpair failed and we were unable to recover it. 
00:30:12.949 [2024-07-23 01:51:25.834668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.949 [2024-07-23 01:51:25.834831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.949 [2024-07-23 01:51:25.834874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.949 qpair failed and we were unable to recover it. 00:30:12.949 [2024-07-23 01:51:25.835056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.949 [2024-07-23 01:51:25.835200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.949 [2024-07-23 01:51:25.835242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.949 qpair failed and we were unable to recover it. 00:30:12.949 [2024-07-23 01:51:25.835423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.949 [2024-07-23 01:51:25.835641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.949 [2024-07-23 01:51:25.835669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.949 qpair failed and we were unable to recover it. 00:30:12.949 [2024-07-23 01:51:25.835841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.949 [2024-07-23 01:51:25.836009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.949 [2024-07-23 01:51:25.836033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.949 qpair failed and we were unable to recover it. 
00:30:12.952 [2024-07-23 01:51:25.869605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.952 [2024-07-23 01:51:25.869813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.952 [2024-07-23 01:51:25.869839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.952 qpair failed and we were unable to recover it. 00:30:12.952 [2024-07-23 01:51:25.870007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.952 [2024-07-23 01:51:25.870138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.952 [2024-07-23 01:51:25.870164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.952 qpair failed and we were unable to recover it. 00:30:12.952 [2024-07-23 01:51:25.870325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.952 [2024-07-23 01:51:25.870492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.952 [2024-07-23 01:51:25.870518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.952 qpair failed and we were unable to recover it. 00:30:12.952 [2024-07-23 01:51:25.870671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.952 [2024-07-23 01:51:25.870865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.952 [2024-07-23 01:51:25.870899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.952 qpair failed and we were unable to recover it. 
00:30:12.952 [2024-07-23 01:51:25.871091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.952 [2024-07-23 01:51:25.871242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.952 [2024-07-23 01:51:25.871269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.952 qpair failed and we were unable to recover it. 00:30:12.952 [2024-07-23 01:51:25.871440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.952 [2024-07-23 01:51:25.871629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.952 [2024-07-23 01:51:25.871667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.952 qpair failed and we were unable to recover it. 00:30:12.952 [2024-07-23 01:51:25.871837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.952 [2024-07-23 01:51:25.872013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.952 [2024-07-23 01:51:25.872040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.952 qpair failed and we were unable to recover it. 00:30:12.952 [2024-07-23 01:51:25.872182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.952 [2024-07-23 01:51:25.872371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.952 [2024-07-23 01:51:25.872397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.952 qpair failed and we were unable to recover it. 
00:30:12.952 [2024-07-23 01:51:25.872588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.952 [2024-07-23 01:51:25.872759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.952 [2024-07-23 01:51:25.872784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.952 qpair failed and we were unable to recover it. 00:30:12.952 [2024-07-23 01:51:25.872956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.952 [2024-07-23 01:51:25.873121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.952 [2024-07-23 01:51:25.873147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.952 qpair failed and we were unable to recover it. 00:30:12.952 [2024-07-23 01:51:25.873315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.952 [2024-07-23 01:51:25.873474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.952 [2024-07-23 01:51:25.873500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.952 qpair failed and we were unable to recover it. 00:30:12.952 [2024-07-23 01:51:25.873672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.952 [2024-07-23 01:51:25.873847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.952 [2024-07-23 01:51:25.873883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.952 qpair failed and we were unable to recover it. 
00:30:12.952 [2024-07-23 01:51:25.874042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.952 [2024-07-23 01:51:25.874234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.952 [2024-07-23 01:51:25.874260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.952 qpair failed and we were unable to recover it. 00:30:12.952 [2024-07-23 01:51:25.874396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.952 [2024-07-23 01:51:25.874558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.952 [2024-07-23 01:51:25.874585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.952 qpair failed and we were unable to recover it. 00:30:12.952 [2024-07-23 01:51:25.874783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.952 [2024-07-23 01:51:25.874959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.952 [2024-07-23 01:51:25.874985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.952 qpair failed and we were unable to recover it. 00:30:12.953 [2024-07-23 01:51:25.875149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.953 [2024-07-23 01:51:25.875314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.953 [2024-07-23 01:51:25.875340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.953 qpair failed and we were unable to recover it. 
00:30:12.953 [2024-07-23 01:51:25.875531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.953 [2024-07-23 01:51:25.875696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.953 [2024-07-23 01:51:25.875722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.953 qpair failed and we were unable to recover it. 00:30:12.953 [2024-07-23 01:51:25.875867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.953 [2024-07-23 01:51:25.875997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.953 [2024-07-23 01:51:25.876024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.953 qpair failed and we were unable to recover it. 00:30:12.953 [2024-07-23 01:51:25.876211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.953 [2024-07-23 01:51:25.876372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.953 [2024-07-23 01:51:25.876397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.953 qpair failed and we were unable to recover it. 00:30:12.953 [2024-07-23 01:51:25.876588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.953 [2024-07-23 01:51:25.876749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.953 [2024-07-23 01:51:25.876776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.953 qpair failed and we were unable to recover it. 
00:30:12.953 [2024-07-23 01:51:25.876919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.953 [2024-07-23 01:51:25.877090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.953 [2024-07-23 01:51:25.877117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.953 qpair failed and we were unable to recover it. 00:30:12.953 [2024-07-23 01:51:25.877279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.953 [2024-07-23 01:51:25.877434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.953 [2024-07-23 01:51:25.877460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.953 qpair failed and we were unable to recover it. 00:30:12.953 [2024-07-23 01:51:25.877595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.953 [2024-07-23 01:51:25.877746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.953 [2024-07-23 01:51:25.877771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.953 qpair failed and we were unable to recover it. 00:30:12.953 [2024-07-23 01:51:25.877913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.953 [2024-07-23 01:51:25.878074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.953 [2024-07-23 01:51:25.878100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.953 qpair failed and we were unable to recover it. 
00:30:12.953 [2024-07-23 01:51:25.878272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.953 [2024-07-23 01:51:25.878409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.953 [2024-07-23 01:51:25.878435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.953 qpair failed and we were unable to recover it. 00:30:12.953 [2024-07-23 01:51:25.878572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.953 [2024-07-23 01:51:25.878730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.953 [2024-07-23 01:51:25.878756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.953 qpair failed and we were unable to recover it. 00:30:12.953 [2024-07-23 01:51:25.878901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.953 [2024-07-23 01:51:25.879084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.953 [2024-07-23 01:51:25.879110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.953 qpair failed and we were unable to recover it. 00:30:12.953 [2024-07-23 01:51:25.879278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.953 [2024-07-23 01:51:25.879442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.953 [2024-07-23 01:51:25.879468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.953 qpair failed and we were unable to recover it. 
00:30:12.953 [2024-07-23 01:51:25.879611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.953 [2024-07-23 01:51:25.879757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.953 [2024-07-23 01:51:25.879783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.953 qpair failed and we were unable to recover it. 00:30:12.953 [2024-07-23 01:51:25.879948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.953 [2024-07-23 01:51:25.880140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.953 [2024-07-23 01:51:25.880166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.953 qpair failed and we were unable to recover it. 00:30:12.953 [2024-07-23 01:51:25.880310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.953 [2024-07-23 01:51:25.880453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.953 [2024-07-23 01:51:25.880479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.953 qpair failed and we were unable to recover it. 00:30:12.953 [2024-07-23 01:51:25.880646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.953 [2024-07-23 01:51:25.880819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.953 [2024-07-23 01:51:25.880845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.953 qpair failed and we were unable to recover it. 
00:30:12.953 [2024-07-23 01:51:25.881036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.953 [2024-07-23 01:51:25.881230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.953 [2024-07-23 01:51:25.881257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.953 qpair failed and we were unable to recover it. 00:30:12.953 [2024-07-23 01:51:25.881423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.953 [2024-07-23 01:51:25.881576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.953 [2024-07-23 01:51:25.881602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.953 qpair failed and we were unable to recover it. 00:30:12.953 [2024-07-23 01:51:25.881783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.953 [2024-07-23 01:51:25.881972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.953 [2024-07-23 01:51:25.881998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.953 qpair failed and we were unable to recover it. 00:30:12.953 [2024-07-23 01:51:25.882156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.953 [2024-07-23 01:51:25.882295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.953 [2024-07-23 01:51:25.882321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.953 qpair failed and we were unable to recover it. 
00:30:12.953 [2024-07-23 01:51:25.882460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.953 [2024-07-23 01:51:25.882649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.953 [2024-07-23 01:51:25.882676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.953 qpair failed and we were unable to recover it. 00:30:12.953 [2024-07-23 01:51:25.882868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.953 [2024-07-23 01:51:25.883056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.953 [2024-07-23 01:51:25.883082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.953 qpair failed and we were unable to recover it. 00:30:12.953 [2024-07-23 01:51:25.883251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.953 [2024-07-23 01:51:25.883437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.953 [2024-07-23 01:51:25.883463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.953 qpair failed and we were unable to recover it. 00:30:12.953 [2024-07-23 01:51:25.883640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.953 [2024-07-23 01:51:25.883777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.953 [2024-07-23 01:51:25.883802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.953 qpair failed and we were unable to recover it. 
00:30:12.953 [2024-07-23 01:51:25.883961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.953 [2024-07-23 01:51:25.884151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.953 [2024-07-23 01:51:25.884177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.953 qpair failed and we were unable to recover it. 00:30:12.953 [2024-07-23 01:51:25.884321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.953 [2024-07-23 01:51:25.884512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.953 [2024-07-23 01:51:25.884538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.953 qpair failed and we were unable to recover it. 00:30:12.953 [2024-07-23 01:51:25.884692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.953 [2024-07-23 01:51:25.884855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.954 [2024-07-23 01:51:25.884881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.954 qpair failed and we were unable to recover it. 00:30:12.954 [2024-07-23 01:51:25.885067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.954 [2024-07-23 01:51:25.885206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.954 [2024-07-23 01:51:25.885233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.954 qpair failed and we were unable to recover it. 
00:30:12.954 [2024-07-23 01:51:25.885425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.954 [2024-07-23 01:51:25.885578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.954 [2024-07-23 01:51:25.885603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.954 qpair failed and we were unable to recover it. 00:30:12.954 [2024-07-23 01:51:25.885802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.954 [2024-07-23 01:51:25.885990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.954 [2024-07-23 01:51:25.886016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.954 qpair failed and we were unable to recover it. 00:30:12.954 [2024-07-23 01:51:25.886182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.954 [2024-07-23 01:51:25.886346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.954 [2024-07-23 01:51:25.886372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.954 qpair failed and we were unable to recover it. 00:30:12.954 [2024-07-23 01:51:25.886538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.954 [2024-07-23 01:51:25.886708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.954 [2024-07-23 01:51:25.886734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.954 qpair failed and we were unable to recover it. 
00:30:12.954 [2024-07-23 01:51:25.886896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.954 [2024-07-23 01:51:25.887096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.954 [2024-07-23 01:51:25.887121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.954 qpair failed and we were unable to recover it. 00:30:12.954 [2024-07-23 01:51:25.887284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.954 [2024-07-23 01:51:25.887427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.954 [2024-07-23 01:51:25.887453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.954 qpair failed and we were unable to recover it. 00:30:12.954 [2024-07-23 01:51:25.887598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.954 [2024-07-23 01:51:25.887750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.954 [2024-07-23 01:51:25.887777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.954 qpair failed and we were unable to recover it. 00:30:12.954 [2024-07-23 01:51:25.887925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.954 [2024-07-23 01:51:25.888114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.954 [2024-07-23 01:51:25.888141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.954 qpair failed and we were unable to recover it. 
00:30:12.954 [2024-07-23 01:51:25.888311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.954 [2024-07-23 01:51:25.888472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.954 [2024-07-23 01:51:25.888505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.954 qpair failed and we were unable to recover it. 00:30:12.954 [2024-07-23 01:51:25.888677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.954 [2024-07-23 01:51:25.888846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.954 [2024-07-23 01:51:25.888872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.954 qpair failed and we were unable to recover it. 00:30:12.954 [2024-07-23 01:51:25.889009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.954 [2024-07-23 01:51:25.889174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.954 [2024-07-23 01:51:25.889204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.954 qpair failed and we were unable to recover it. 00:30:12.954 [2024-07-23 01:51:25.889334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.954 [2024-07-23 01:51:25.889491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.954 [2024-07-23 01:51:25.889517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.954 qpair failed and we were unable to recover it. 
00:30:12.954 [2024-07-23 01:51:25.889710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.954 [2024-07-23 01:51:25.889881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.954 [2024-07-23 01:51:25.889907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.954 qpair failed and we were unable to recover it. 00:30:12.954 [2024-07-23 01:51:25.890075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.954 [2024-07-23 01:51:25.890263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.954 [2024-07-23 01:51:25.890289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.954 qpair failed and we were unable to recover it. 00:30:12.954 [2024-07-23 01:51:25.890458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.954 [2024-07-23 01:51:25.890628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.954 [2024-07-23 01:51:25.890655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.954 qpair failed and we were unable to recover it. 00:30:12.954 [2024-07-23 01:51:25.890824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.954 [2024-07-23 01:51:25.890967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.954 [2024-07-23 01:51:25.890993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.954 qpair failed and we were unable to recover it. 
00:30:12.954 [... the same failure pattern repeats from 2024-07-23 01:51:25.891153 through 01:51:25.923143: posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111, followed by nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it." ...]
00:30:12.957 [2024-07-23 01:51:25.923316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.957 [2024-07-23 01:51:25.923506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.957 [2024-07-23 01:51:25.923531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.957 qpair failed and we were unable to recover it. 00:30:12.957 [2024-07-23 01:51:25.923698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.957 [2024-07-23 01:51:25.923882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.957 [2024-07-23 01:51:25.923910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.957 qpair failed and we were unable to recover it. 00:30:12.957 [2024-07-23 01:51:25.924073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.957 [2024-07-23 01:51:25.924263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.957 [2024-07-23 01:51:25.924293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.957 qpair failed and we were unable to recover it. 00:30:12.957 [2024-07-23 01:51:25.924506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.957 [2024-07-23 01:51:25.924684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.957 [2024-07-23 01:51:25.924713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.957 qpair failed and we were unable to recover it. 
00:30:12.957 [2024-07-23 01:51:25.924886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.957 [2024-07-23 01:51:25.925064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.957 [2024-07-23 01:51:25.925092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.957 qpair failed and we were unable to recover it. 00:30:12.957 [2024-07-23 01:51:25.925306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.957 [2024-07-23 01:51:25.925516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.958 [2024-07-23 01:51:25.925543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.958 qpair failed and we were unable to recover it. 00:30:12.958 [2024-07-23 01:51:25.925741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.958 [2024-07-23 01:51:25.925917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.958 [2024-07-23 01:51:25.925945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.958 qpair failed and we were unable to recover it. 00:30:12.958 [2024-07-23 01:51:25.926117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.958 [2024-07-23 01:51:25.926292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.958 [2024-07-23 01:51:25.926319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.958 qpair failed and we were unable to recover it. 
00:30:12.958 [2024-07-23 01:51:25.926476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.958 [2024-07-23 01:51:25.926663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.958 [2024-07-23 01:51:25.926689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.958 qpair failed and we were unable to recover it. 00:30:12.958 [2024-07-23 01:51:25.926857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.958 [2024-07-23 01:51:25.927037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.958 [2024-07-23 01:51:25.927065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.958 qpair failed and we were unable to recover it. 00:30:12.958 [2024-07-23 01:51:25.927209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.958 [2024-07-23 01:51:25.927347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.958 [2024-07-23 01:51:25.927375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.958 qpair failed and we were unable to recover it. 00:30:12.958 [2024-07-23 01:51:25.927515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.958 [2024-07-23 01:51:25.927698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.958 [2024-07-23 01:51:25.927725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.958 qpair failed and we were unable to recover it. 
00:30:12.958 [2024-07-23 01:51:25.927851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.958 [2024-07-23 01:51:25.928073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.958 [2024-07-23 01:51:25.928099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.958 qpair failed and we were unable to recover it. 00:30:12.958 [2024-07-23 01:51:25.928245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.958 [2024-07-23 01:51:25.928386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.958 [2024-07-23 01:51:25.928441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.958 qpair failed and we were unable to recover it. 00:30:12.958 [2024-07-23 01:51:25.928638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.958 [2024-07-23 01:51:25.928854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.958 [2024-07-23 01:51:25.928881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.958 qpair failed and we were unable to recover it. 00:30:12.958 [2024-07-23 01:51:25.929040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.958 [2024-07-23 01:51:25.929230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.958 [2024-07-23 01:51:25.929259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.958 qpair failed and we were unable to recover it. 
00:30:12.958 [2024-07-23 01:51:25.929464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.958 [2024-07-23 01:51:25.929686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.958 [2024-07-23 01:51:25.929712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.958 qpair failed and we were unable to recover it. 00:30:12.958 [2024-07-23 01:51:25.929859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.958 [2024-07-23 01:51:25.930053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.958 [2024-07-23 01:51:25.930081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.958 qpair failed and we were unable to recover it. 00:30:12.958 [2024-07-23 01:51:25.930427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.958 [2024-07-23 01:51:25.930633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.958 [2024-07-23 01:51:25.930680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.958 qpair failed and we were unable to recover it. 00:30:12.958 [2024-07-23 01:51:25.930828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.958 [2024-07-23 01:51:25.931008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.958 [2024-07-23 01:51:25.931037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.958 qpair failed and we were unable to recover it. 
00:30:12.958 [2024-07-23 01:51:25.931214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.958 [2024-07-23 01:51:25.931474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.958 [2024-07-23 01:51:25.931527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.958 qpair failed and we were unable to recover it. 00:30:12.958 [2024-07-23 01:51:25.931714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.958 [2024-07-23 01:51:25.931883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.958 [2024-07-23 01:51:25.931909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.958 qpair failed and we were unable to recover it. 00:30:12.958 [2024-07-23 01:51:25.932077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.958 [2024-07-23 01:51:25.932276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.958 [2024-07-23 01:51:25.932320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.958 qpair failed and we were unable to recover it. 00:30:12.958 [2024-07-23 01:51:25.932519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.958 [2024-07-23 01:51:25.932678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.958 [2024-07-23 01:51:25.932705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.958 qpair failed and we were unable to recover it. 
00:30:12.958 [2024-07-23 01:51:25.932891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.958 [2024-07-23 01:51:25.933058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.958 [2024-07-23 01:51:25.933085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.958 qpair failed and we were unable to recover it. 00:30:12.958 [2024-07-23 01:51:25.933248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.958 [2024-07-23 01:51:25.933383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.958 [2024-07-23 01:51:25.933409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.958 qpair failed and we were unable to recover it. 00:30:12.958 [2024-07-23 01:51:25.933627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.958 [2024-07-23 01:51:25.933780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.958 [2024-07-23 01:51:25.933806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.958 qpair failed and we were unable to recover it. 00:30:12.958 [2024-07-23 01:51:25.933981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.958 [2024-07-23 01:51:25.934229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.958 [2024-07-23 01:51:25.934289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.958 qpair failed and we were unable to recover it. 
00:30:12.958 [2024-07-23 01:51:25.934509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.958 [2024-07-23 01:51:25.934678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.958 [2024-07-23 01:51:25.934704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.958 qpair failed and we were unable to recover it. 00:30:12.958 [2024-07-23 01:51:25.934852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.958 [2024-07-23 01:51:25.935020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.958 [2024-07-23 01:51:25.935061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.958 qpair failed and we were unable to recover it. 00:30:12.958 [2024-07-23 01:51:25.935246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.958 [2024-07-23 01:51:25.935452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.958 [2024-07-23 01:51:25.935480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.958 qpair failed and we were unable to recover it. 00:30:12.958 [2024-07-23 01:51:25.935670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.958 [2024-07-23 01:51:25.935852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.958 [2024-07-23 01:51:25.935883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.958 qpair failed and we were unable to recover it. 
00:30:12.958 [2024-07-23 01:51:25.936083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.958 [2024-07-23 01:51:25.936270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.958 [2024-07-23 01:51:25.936296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.958 qpair failed and we were unable to recover it. 00:30:12.958 [2024-07-23 01:51:25.936483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.959 [2024-07-23 01:51:25.936699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.959 [2024-07-23 01:51:25.936726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.959 qpair failed and we were unable to recover it. 00:30:12.959 [2024-07-23 01:51:25.936881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.959 [2024-07-23 01:51:25.937051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.959 [2024-07-23 01:51:25.937077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.959 qpair failed and we were unable to recover it. 00:30:12.959 [2024-07-23 01:51:25.937256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.959 [2024-07-23 01:51:25.937459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.959 [2024-07-23 01:51:25.937487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.959 qpair failed and we were unable to recover it. 
00:30:12.959 [2024-07-23 01:51:25.937657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.959 [2024-07-23 01:51:25.937847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.959 [2024-07-23 01:51:25.937873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.959 qpair failed and we were unable to recover it. 00:30:12.959 [2024-07-23 01:51:25.938115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.959 [2024-07-23 01:51:25.938272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.959 [2024-07-23 01:51:25.938317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.959 qpair failed and we were unable to recover it. 00:30:12.959 [2024-07-23 01:51:25.938527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.959 [2024-07-23 01:51:25.938745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.959 [2024-07-23 01:51:25.938772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.959 qpair failed and we were unable to recover it. 00:30:12.959 [2024-07-23 01:51:25.938923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.959 [2024-07-23 01:51:25.939126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.959 [2024-07-23 01:51:25.939157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.959 qpair failed and we were unable to recover it. 
00:30:12.959 [2024-07-23 01:51:25.939361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.959 [2024-07-23 01:51:25.939552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.959 [2024-07-23 01:51:25.939578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.959 qpair failed and we were unable to recover it. 00:30:12.959 [2024-07-23 01:51:25.939754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.959 [2024-07-23 01:51:25.939940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.959 [2024-07-23 01:51:25.939969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.959 qpair failed and we were unable to recover it. 00:30:12.959 [2024-07-23 01:51:25.940327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.959 [2024-07-23 01:51:25.940573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.959 [2024-07-23 01:51:25.940602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.959 qpair failed and we were unable to recover it. 00:30:12.959 [2024-07-23 01:51:25.940820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.959 [2024-07-23 01:51:25.941034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.959 [2024-07-23 01:51:25.941085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.959 qpair failed and we were unable to recover it. 
00:30:12.959 [2024-07-23 01:51:25.941294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.959 [2024-07-23 01:51:25.941506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.959 [2024-07-23 01:51:25.941535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.959 qpair failed and we were unable to recover it. 00:30:12.959 [2024-07-23 01:51:25.941761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.959 [2024-07-23 01:51:25.941959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.959 [2024-07-23 01:51:25.941988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.959 qpair failed and we were unable to recover it. 00:30:12.959 [2024-07-23 01:51:25.942171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.959 [2024-07-23 01:51:25.942395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.959 [2024-07-23 01:51:25.942440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.959 qpair failed and we were unable to recover it. 00:30:12.959 [2024-07-23 01:51:25.942594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.959 [2024-07-23 01:51:25.942792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.959 [2024-07-23 01:51:25.942818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.959 qpair failed and we were unable to recover it. 
00:30:12.959 [2024-07-23 01:51:25.943011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.959 [2024-07-23 01:51:25.943202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.959 [2024-07-23 01:51:25.943246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.959 qpair failed and we were unable to recover it. 00:30:12.959 [2024-07-23 01:51:25.943599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.959 [2024-07-23 01:51:25.943811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.959 [2024-07-23 01:51:25.943838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.959 qpair failed and we were unable to recover it. 00:30:12.959 [2024-07-23 01:51:25.944023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.959 [2024-07-23 01:51:25.944293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.959 [2024-07-23 01:51:25.944347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.959 qpair failed and we were unable to recover it. 00:30:12.959 [2024-07-23 01:51:25.944537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.959 [2024-07-23 01:51:25.944727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.959 [2024-07-23 01:51:25.944753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.959 qpair failed and we were unable to recover it. 
00:30:12.959 [2024-07-23 01:51:25.944898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.959 [2024-07-23 01:51:25.945073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.959 [2024-07-23 01:51:25.945100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.959 qpair failed and we were unable to recover it. 00:30:12.959 [2024-07-23 01:51:25.945302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.959 [2024-07-23 01:51:25.945458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.959 [2024-07-23 01:51:25.945491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.959 qpair failed and we were unable to recover it. 00:30:12.959 [2024-07-23 01:51:25.945694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.959 [2024-07-23 01:51:25.945845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.959 [2024-07-23 01:51:25.945872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.959 qpair failed and we were unable to recover it. 00:30:12.959 [2024-07-23 01:51:25.946060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.959 [2024-07-23 01:51:25.946275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.959 [2024-07-23 01:51:25.946302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.959 qpair failed and we were unable to recover it. 
00:30:12.959 [2024-07-23 01:51:25.946464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.959 [2024-07-23 01:51:25.946629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.959 [2024-07-23 01:51:25.946673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.959 qpair failed and we were unable to recover it. 00:30:12.959 [2024-07-23 01:51:25.946839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.959 [2024-07-23 01:51:25.947020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.959 [2024-07-23 01:51:25.947046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.959 qpair failed and we were unable to recover it. 00:30:12.959 [2024-07-23 01:51:25.947200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.959 [2024-07-23 01:51:25.947350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.959 [2024-07-23 01:51:25.947379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.959 qpair failed and we were unable to recover it. 00:30:12.959 [2024-07-23 01:51:25.947534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.959 [2024-07-23 01:51:25.947727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.959 [2024-07-23 01:51:25.947754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.959 qpair failed and we were unable to recover it. 
00:30:12.963 [2024-07-23 01:51:25.983477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.963 [2024-07-23 01:51:25.983728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.963 [2024-07-23 01:51:25.983755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.963 qpair failed and we were unable to recover it. 00:30:12.963 [2024-07-23 01:51:25.983901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.963 [2024-07-23 01:51:25.984106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.963 [2024-07-23 01:51:25.984141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.963 qpair failed and we were unable to recover it. 00:30:12.963 [2024-07-23 01:51:25.984326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.963 [2024-07-23 01:51:25.984534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.963 [2024-07-23 01:51:25.984563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.963 qpair failed and we were unable to recover it. 00:30:12.963 [2024-07-23 01:51:25.984729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.963 [2024-07-23 01:51:25.984899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.963 [2024-07-23 01:51:25.984926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.963 qpair failed and we were unable to recover it. 
00:30:12.963 [2024-07-23 01:51:25.985131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.963 [2024-07-23 01:51:25.985270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.963 [2024-07-23 01:51:25.985297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.963 qpair failed and we were unable to recover it. 00:30:12.963 [2024-07-23 01:51:25.985523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.963 [2024-07-23 01:51:25.985715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.963 [2024-07-23 01:51:25.985743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.963 qpair failed and we were unable to recover it. 00:30:12.963 [2024-07-23 01:51:25.985930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.963 [2024-07-23 01:51:25.986190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.963 [2024-07-23 01:51:25.986219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.963 qpair failed and we were unable to recover it. 00:30:12.963 [2024-07-23 01:51:25.986369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.963 [2024-07-23 01:51:25.986525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.963 [2024-07-23 01:51:25.986556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.963 qpair failed and we were unable to recover it. 
00:30:12.963 [2024-07-23 01:51:25.986727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.963 [2024-07-23 01:51:25.986881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.963 [2024-07-23 01:51:25.986910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.963 qpair failed and we were unable to recover it. 00:30:12.963 [2024-07-23 01:51:25.987097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.963 [2024-07-23 01:51:25.987308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.963 [2024-07-23 01:51:25.987338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.963 qpair failed and we were unable to recover it. 00:30:12.963 [2024-07-23 01:51:25.987505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.963 [2024-07-23 01:51:25.987708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.963 [2024-07-23 01:51:25.987735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.963 qpair failed and we were unable to recover it. 00:30:12.963 [2024-07-23 01:51:25.987901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.963 [2024-07-23 01:51:25.988053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.963 [2024-07-23 01:51:25.988079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.963 qpair failed and we were unable to recover it. 
00:30:12.963 [2024-07-23 01:51:25.988288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.963 [2024-07-23 01:51:25.988498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.963 [2024-07-23 01:51:25.988527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.963 qpair failed and we were unable to recover it. 00:30:12.963 [2024-07-23 01:51:25.988678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.963 [2024-07-23 01:51:25.988845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.963 [2024-07-23 01:51:25.988872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.963 qpair failed and we were unable to recover it. 00:30:12.963 [2024-07-23 01:51:25.989041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.963 [2024-07-23 01:51:25.989404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.963 [2024-07-23 01:51:25.989463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.963 qpair failed and we were unable to recover it. 00:30:12.963 [2024-07-23 01:51:25.989675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.963 [2024-07-23 01:51:25.989858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.963 [2024-07-23 01:51:25.989888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.963 qpair failed and we were unable to recover it. 
00:30:12.963 [2024-07-23 01:51:25.990068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.963 [2024-07-23 01:51:25.990235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.963 [2024-07-23 01:51:25.990279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.963 qpair failed and we were unable to recover it. 00:30:12.963 [2024-07-23 01:51:25.990465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.963 [2024-07-23 01:51:25.990648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.963 [2024-07-23 01:51:25.990692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.963 qpair failed and we were unable to recover it. 00:30:12.963 [2024-07-23 01:51:25.990857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.963 [2024-07-23 01:51:25.991172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.963 [2024-07-23 01:51:25.991199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.963 qpair failed and we were unable to recover it. 00:30:12.963 [2024-07-23 01:51:25.991353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.963 [2024-07-23 01:51:25.991514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.963 [2024-07-23 01:51:25.991557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.963 qpair failed and we were unable to recover it. 
00:30:12.963 [2024-07-23 01:51:25.991772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.963 [2024-07-23 01:51:25.991988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.963 [2024-07-23 01:51:25.992051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.963 qpair failed and we were unable to recover it. 00:30:12.963 [2024-07-23 01:51:25.992359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.963 [2024-07-23 01:51:25.992566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.963 [2024-07-23 01:51:25.992595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.963 qpair failed and we were unable to recover it. 00:30:12.963 [2024-07-23 01:51:25.992792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.963 [2024-07-23 01:51:25.992983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.963 [2024-07-23 01:51:25.993043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.963 qpair failed and we were unable to recover it. 00:30:12.963 [2024-07-23 01:51:25.993274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.964 [2024-07-23 01:51:25.993421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.964 [2024-07-23 01:51:25.993450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.964 qpair failed and we were unable to recover it. 
00:30:12.964 [2024-07-23 01:51:25.993659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.964 [2024-07-23 01:51:25.993870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.964 [2024-07-23 01:51:25.993900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.964 qpair failed and we were unable to recover it. 00:30:12.964 [2024-07-23 01:51:25.994083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.964 [2024-07-23 01:51:25.994263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.964 [2024-07-23 01:51:25.994292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.964 qpair failed and we were unable to recover it. 00:30:12.964 [2024-07-23 01:51:25.994475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.964 [2024-07-23 01:51:25.994660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.964 [2024-07-23 01:51:25.994687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.964 qpair failed and we were unable to recover it. 00:30:12.964 [2024-07-23 01:51:25.994877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.964 [2024-07-23 01:51:25.995181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.964 [2024-07-23 01:51:25.995235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.964 qpair failed and we were unable to recover it. 
00:30:12.964 [2024-07-23 01:51:25.995445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.964 [2024-07-23 01:51:25.995622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.964 [2024-07-23 01:51:25.995651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.964 qpair failed and we were unable to recover it. 00:30:12.964 [2024-07-23 01:51:25.995836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.964 [2024-07-23 01:51:25.995993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.964 [2024-07-23 01:51:25.996022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.964 qpair failed and we were unable to recover it. 00:30:12.964 [2024-07-23 01:51:25.996204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.964 [2024-07-23 01:51:25.996412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.964 [2024-07-23 01:51:25.996441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.964 qpair failed and we were unable to recover it. 00:30:12.964 [2024-07-23 01:51:25.996649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.964 [2024-07-23 01:51:25.996832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.964 [2024-07-23 01:51:25.996858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.964 qpair failed and we were unable to recover it. 
00:30:12.964 [2024-07-23 01:51:25.997034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.964 [2024-07-23 01:51:25.997195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.964 [2024-07-23 01:51:25.997222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.964 qpair failed and we were unable to recover it. 00:30:12.964 [2024-07-23 01:51:25.997402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.964 [2024-07-23 01:51:25.997594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.964 [2024-07-23 01:51:25.997649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.964 qpair failed and we were unable to recover it. 00:30:12.964 [2024-07-23 01:51:25.997855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.964 [2024-07-23 01:51:25.998073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.964 [2024-07-23 01:51:25.998099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.964 qpair failed and we were unable to recover it. 00:30:12.964 [2024-07-23 01:51:25.998308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.964 [2024-07-23 01:51:25.998495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.964 [2024-07-23 01:51:25.998524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.964 qpair failed and we were unable to recover it. 
00:30:12.964 [2024-07-23 01:51:25.998714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.964 [2024-07-23 01:51:25.998886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.964 [2024-07-23 01:51:25.998913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.964 qpair failed and we were unable to recover it. 00:30:12.964 [2024-07-23 01:51:25.999049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.964 [2024-07-23 01:51:25.999230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.964 [2024-07-23 01:51:25.999258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.964 qpair failed and we were unable to recover it. 00:30:12.964 [2024-07-23 01:51:25.999465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.964 [2024-07-23 01:51:25.999669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.964 [2024-07-23 01:51:25.999696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.964 qpair failed and we were unable to recover it. 00:30:12.964 [2024-07-23 01:51:25.999889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.964 [2024-07-23 01:51:26.000202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.964 [2024-07-23 01:51:26.000260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.964 qpair failed and we were unable to recover it. 
00:30:12.964 [2024-07-23 01:51:26.000450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.964 [2024-07-23 01:51:26.000677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.964 [2024-07-23 01:51:26.000704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.964 qpair failed and we were unable to recover it. 00:30:12.964 [2024-07-23 01:51:26.000900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.964 [2024-07-23 01:51:26.001182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.964 [2024-07-23 01:51:26.001236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.964 qpair failed and we were unable to recover it. 00:30:12.964 [2024-07-23 01:51:26.001486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.964 [2024-07-23 01:51:26.001676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.964 [2024-07-23 01:51:26.001707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.964 qpair failed and we were unable to recover it. 00:30:12.964 [2024-07-23 01:51:26.001893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.964 [2024-07-23 01:51:26.002077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.964 [2024-07-23 01:51:26.002106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.964 qpair failed and we were unable to recover it. 
00:30:12.964 [2024-07-23 01:51:26.002304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.964 [2024-07-23 01:51:26.002510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.964 [2024-07-23 01:51:26.002539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.964 qpair failed and we were unable to recover it. 00:30:12.964 [2024-07-23 01:51:26.002740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.964 [2024-07-23 01:51:26.002954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.964 [2024-07-23 01:51:26.002982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.964 qpair failed and we were unable to recover it. 00:30:12.964 [2024-07-23 01:51:26.003175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.964 [2024-07-23 01:51:26.003351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.964 [2024-07-23 01:51:26.003418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.964 qpair failed and we were unable to recover it. 00:30:12.964 [2024-07-23 01:51:26.003629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.964 [2024-07-23 01:51:26.003815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.964 [2024-07-23 01:51:26.003841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.964 qpair failed and we were unable to recover it. 
00:30:12.964 [2024-07-23 01:51:26.004066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.964 [2024-07-23 01:51:26.004231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.964 [2024-07-23 01:51:26.004256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.964 qpair failed and we were unable to recover it. 00:30:12.964 [2024-07-23 01:51:26.004537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.964 [2024-07-23 01:51:26.004721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.964 [2024-07-23 01:51:26.004747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.964 qpair failed and we were unable to recover it. 00:30:12.964 [2024-07-23 01:51:26.004891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.964 [2024-07-23 01:51:26.005173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.964 [2024-07-23 01:51:26.005226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.964 qpair failed and we were unable to recover it. 00:30:12.965 [2024-07-23 01:51:26.005424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.965 [2024-07-23 01:51:26.005638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.965 [2024-07-23 01:51:26.005665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.965 qpair failed and we were unable to recover it. 
00:30:12.965 [2024-07-23 01:51:26.005870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.965 [2024-07-23 01:51:26.006129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.965 [2024-07-23 01:51:26.006182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.965 qpair failed and we were unable to recover it. 00:30:12.965 [2024-07-23 01:51:26.006399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.965 [2024-07-23 01:51:26.006533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.965 [2024-07-23 01:51:26.006576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.965 qpair failed and we were unable to recover it. 00:30:12.965 [2024-07-23 01:51:26.006779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.965 [2024-07-23 01:51:26.006944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.965 [2024-07-23 01:51:26.006971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.965 qpair failed and we were unable to recover it. 00:30:12.965 [2024-07-23 01:51:26.007175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.965 [2024-07-23 01:51:26.007496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.965 [2024-07-23 01:51:26.007559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.965 qpair failed and we were unable to recover it. 
00:30:12.965 [2024-07-23 01:51:26.007775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.965 [2024-07-23 01:51:26.007946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.965 [2024-07-23 01:51:26.007972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.965 qpair failed and we were unable to recover it. 00:30:12.965 [2024-07-23 01:51:26.008281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.965 [2024-07-23 01:51:26.008479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.965 [2024-07-23 01:51:26.008508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.965 qpair failed and we were unable to recover it. 00:30:12.965 [2024-07-23 01:51:26.008696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.965 [2024-07-23 01:51:26.008885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.965 [2024-07-23 01:51:26.008928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.965 qpair failed and we were unable to recover it. 00:30:12.965 [2024-07-23 01:51:26.009112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.965 [2024-07-23 01:51:26.009306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.965 [2024-07-23 01:51:26.009349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.965 qpair failed and we were unable to recover it. 
00:30:12.965 [2024-07-23 01:51:26.009506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.965 [2024-07-23 01:51:26.009691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.965 [2024-07-23 01:51:26.009718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.965 qpair failed and we were unable to recover it. 00:30:12.965 [2024-07-23 01:51:26.009885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.965 [2024-07-23 01:51:26.010084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.965 [2024-07-23 01:51:26.010114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.965 qpair failed and we were unable to recover it. 00:30:12.965 [2024-07-23 01:51:26.010330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.965 [2024-07-23 01:51:26.010499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.965 [2024-07-23 01:51:26.010525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.965 qpair failed and we were unable to recover it. 00:30:12.965 [2024-07-23 01:51:26.010721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.965 [2024-07-23 01:51:26.010931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.965 [2024-07-23 01:51:26.010960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.965 qpair failed and we were unable to recover it. 
00:30:12.965 [2024-07-23 01:51:26.011133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.965 [2024-07-23 01:51:26.011350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.965 [2024-07-23 01:51:26.011400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.965 qpair failed and we were unable to recover it. 00:30:12.965 [2024-07-23 01:51:26.011660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.965 [2024-07-23 01:51:26.011851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.965 [2024-07-23 01:51:26.011877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.965 qpair failed and we were unable to recover it. 00:30:12.965 [2024-07-23 01:51:26.012168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.965 [2024-07-23 01:51:26.012352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.965 [2024-07-23 01:51:26.012380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.965 qpair failed and we were unable to recover it. 00:30:12.965 [2024-07-23 01:51:26.012585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.965 [2024-07-23 01:51:26.012763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.965 [2024-07-23 01:51:26.012790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.965 qpair failed and we were unable to recover it. 
00:30:12.965 [2024-07-23 01:51:26.012971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.965 [2024-07-23 01:51:26.013173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.965 [2024-07-23 01:51:26.013234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.965 qpair failed and we were unable to recover it. 00:30:12.965 [2024-07-23 01:51:26.013595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.965 [2024-07-23 01:51:26.013802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.965 [2024-07-23 01:51:26.013828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.965 qpair failed and we were unable to recover it. 00:30:12.965 [2024-07-23 01:51:26.013968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.965 [2024-07-23 01:51:26.014291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.965 [2024-07-23 01:51:26.014346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.965 qpair failed and we were unable to recover it. 00:30:12.965 [2024-07-23 01:51:26.014530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.965 [2024-07-23 01:51:26.014715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.965 [2024-07-23 01:51:26.014745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.965 qpair failed and we were unable to recover it. 
00:30:12.965 [2024-07-23 01:51:26.014927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.965 [2024-07-23 01:51:26.015111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.965 [2024-07-23 01:51:26.015137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.965 qpair failed and we were unable to recover it. 00:30:12.965 [2024-07-23 01:51:26.015333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.965 [2024-07-23 01:51:26.015520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.965 [2024-07-23 01:51:26.015549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.965 qpair failed and we were unable to recover it. 00:30:12.965 [2024-07-23 01:51:26.015744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.965 [2024-07-23 01:51:26.015915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.965 [2024-07-23 01:51:26.015942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.965 qpair failed and we were unable to recover it. 00:30:12.965 [2024-07-23 01:51:26.016127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.965 [2024-07-23 01:51:26.016336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.965 [2024-07-23 01:51:26.016364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.965 qpair failed and we were unable to recover it. 
00:30:12.965 [2024-07-23 01:51:26.016573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.965 [2024-07-23 01:51:26.016757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.965 [2024-07-23 01:51:26.016784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.965 qpair failed and we were unable to recover it. 00:30:12.965 [2024-07-23 01:51:26.016920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.965 [2024-07-23 01:51:26.017100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.965 [2024-07-23 01:51:26.017129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.965 qpair failed and we were unable to recover it. 00:30:12.965 [2024-07-23 01:51:26.017347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.965 [2024-07-23 01:51:26.017519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.966 [2024-07-23 01:51:26.017547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.966 qpair failed and we were unable to recover it. 00:30:12.966 [2024-07-23 01:51:26.017713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.966 [2024-07-23 01:51:26.017901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.966 [2024-07-23 01:51:26.017927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.966 qpair failed and we were unable to recover it. 
00:30:12.966 [2024-07-23 01:51:26.018113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.966 [2024-07-23 01:51:26.018298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.966 [2024-07-23 01:51:26.018327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.966 qpair failed and we were unable to recover it. 00:30:12.966 [2024-07-23 01:51:26.018526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.966 [2024-07-23 01:51:26.018720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.966 [2024-07-23 01:51:26.018747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.966 qpair failed and we were unable to recover it. 00:30:12.966 [2024-07-23 01:51:26.018888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.966 [2024-07-23 01:51:26.019038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:12.966 [2024-07-23 01:51:26.019065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:12.966 qpair failed and we were unable to recover it. 00:30:12.966 [2024-07-23 01:51:26.019262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.286 [2024-07-23 01:51:26.019460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.286 [2024-07-23 01:51:26.019489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.286 qpair failed and we were unable to recover it. 
00:30:13.286 [2024-07-23 01:51:26.019674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.286 [2024-07-23 01:51:26.019849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.286 [2024-07-23 01:51:26.019879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.286 qpair failed and we were unable to recover it. 00:30:13.286 [2024-07-23 01:51:26.020029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.286 [2024-07-23 01:51:26.020222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.286 [2024-07-23 01:51:26.020295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.286 qpair failed and we were unable to recover it. 00:30:13.286 [2024-07-23 01:51:26.020482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.286 [2024-07-23 01:51:26.020674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.286 [2024-07-23 01:51:26.020700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.286 qpair failed and we were unable to recover it. 00:30:13.286 [2024-07-23 01:51:26.020855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.286 [2024-07-23 01:51:26.021050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.286 [2024-07-23 01:51:26.021079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.286 qpair failed and we were unable to recover it. 
00:30:13.286 [2024-07-23 01:51:26.021257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.286 [2024-07-23 01:51:26.021468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.286 [2024-07-23 01:51:26.021497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.286 qpair failed and we were unable to recover it. 00:30:13.286 [2024-07-23 01:51:26.021680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.286 [2024-07-23 01:51:26.021820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.286 [2024-07-23 01:51:26.021846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.286 qpair failed and we were unable to recover it. 00:30:13.286 [2024-07-23 01:51:26.022014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.286 [2024-07-23 01:51:26.022173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.286 [2024-07-23 01:51:26.022204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.286 qpair failed and we were unable to recover it. 00:30:13.286 [2024-07-23 01:51:26.022396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.286 [2024-07-23 01:51:26.022576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.286 [2024-07-23 01:51:26.022605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.286 qpair failed and we were unable to recover it. 
00:30:13.286 [2024-07-23 01:51:26.022801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.286 [2024-07-23 01:51:26.022985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.286 [2024-07-23 01:51:26.023014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.286 qpair failed and we were unable to recover it. 00:30:13.286 [2024-07-23 01:51:26.023230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.286 [2024-07-23 01:51:26.023547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.286 [2024-07-23 01:51:26.023611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.286 qpair failed and we were unable to recover it. 00:30:13.286 [2024-07-23 01:51:26.023835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.286 [2024-07-23 01:51:26.024047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.286 [2024-07-23 01:51:26.024117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.286 qpair failed and we were unable to recover it. 00:30:13.286 [2024-07-23 01:51:26.024306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.286 [2024-07-23 01:51:26.024453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.286 [2024-07-23 01:51:26.024482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.286 qpair failed and we were unable to recover it. 
00:30:13.286 [2024-07-23 01:51:26.024708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.286 [2024-07-23 01:51:26.024899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.286 [2024-07-23 01:51:26.024926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.286 qpair failed and we were unable to recover it. 00:30:13.286 [2024-07-23 01:51:26.025091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.286 [2024-07-23 01:51:26.025280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.286 [2024-07-23 01:51:26.025308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.286 qpair failed and we were unable to recover it. 00:30:13.286 [2024-07-23 01:51:26.025499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.286 [2024-07-23 01:51:26.025715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.287 [2024-07-23 01:51:26.025742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.287 qpair failed and we were unable to recover it. 00:30:13.287 [2024-07-23 01:51:26.025930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.287 [2024-07-23 01:51:26.026139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.287 [2024-07-23 01:51:26.026168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.287 qpair failed and we were unable to recover it. 
00:30:13.287 [2024-07-23 01:51:26.026376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.287 [2024-07-23 01:51:26.026551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.287 [2024-07-23 01:51:26.026579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.287 qpair failed and we were unable to recover it. 00:30:13.287 [2024-07-23 01:51:26.026794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.287 [2024-07-23 01:51:26.026949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.287 [2024-07-23 01:51:26.027006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.287 qpair failed and we were unable to recover it. 00:30:13.287 [2024-07-23 01:51:26.027393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.287 [2024-07-23 01:51:26.027624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.287 [2024-07-23 01:51:26.027654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.287 qpair failed and we were unable to recover it. 00:30:13.287 [2024-07-23 01:51:26.027826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.287 [2024-07-23 01:51:26.028018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.287 [2024-07-23 01:51:26.028066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.287 qpair failed and we were unable to recover it. 
00:30:13.287 [2024-07-23 01:51:26.028244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.287 [2024-07-23 01:51:26.028432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.287 [2024-07-23 01:51:26.028458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.287 qpair failed and we were unable to recover it. 00:30:13.287 [2024-07-23 01:51:26.028682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.287 [2024-07-23 01:51:26.028843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.287 [2024-07-23 01:51:26.028869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.287 qpair failed and we were unable to recover it. 00:30:13.287 [2024-07-23 01:51:26.029077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.287 [2024-07-23 01:51:26.029210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.287 [2024-07-23 01:51:26.029236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.287 qpair failed and we were unable to recover it. 00:30:13.287 [2024-07-23 01:51:26.029443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.287 [2024-07-23 01:51:26.029606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.287 [2024-07-23 01:51:26.029639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.287 qpair failed and we were unable to recover it. 
00:30:13.287 [2024-07-23 01:51:26.029827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.287 [2024-07-23 01:51:26.030027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.287 [2024-07-23 01:51:26.030053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.287 qpair failed and we were unable to recover it. 00:30:13.287 [2024-07-23 01:51:26.030243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.287 [2024-07-23 01:51:26.030478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.287 [2024-07-23 01:51:26.030505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.287 qpair failed and we were unable to recover it. 00:30:13.287 [2024-07-23 01:51:26.030667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.287 [2024-07-23 01:51:26.030811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.287 [2024-07-23 01:51:26.030837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.287 qpair failed and we were unable to recover it. 00:30:13.287 [2024-07-23 01:51:26.031005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.287 [2024-07-23 01:51:26.031187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.287 [2024-07-23 01:51:26.031216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.287 qpair failed and we were unable to recover it. 
00:30:13.287 [2024-07-23 01:51:26.031411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.287 [2024-07-23 01:51:26.031597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.287 [2024-07-23 01:51:26.031642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.287 qpair failed and we were unable to recover it. 00:30:13.287 [2024-07-23 01:51:26.031838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.287 [2024-07-23 01:51:26.032008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.287 [2024-07-23 01:51:26.032034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.287 qpair failed and we were unable to recover it. 00:30:13.287 [2024-07-23 01:51:26.032225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.287 [2024-07-23 01:51:26.032439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.287 [2024-07-23 01:51:26.032468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.287 qpair failed and we were unable to recover it. 00:30:13.287 [2024-07-23 01:51:26.032662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.287 [2024-07-23 01:51:26.032828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.287 [2024-07-23 01:51:26.032855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.287 qpair failed and we were unable to recover it. 
00:30:13.287 [2024-07-23 01:51:26.033049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.287 [2024-07-23 01:51:26.033219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.287 [2024-07-23 01:51:26.033263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.287 qpair failed and we were unable to recover it. 00:30:13.287 [2024-07-23 01:51:26.033480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.287 [2024-07-23 01:51:26.033677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.287 [2024-07-23 01:51:26.033730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.287 qpair failed and we were unable to recover it. 00:30:13.287 [2024-07-23 01:51:26.034037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.287 [2024-07-23 01:51:26.034332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.287 [2024-07-23 01:51:26.034385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.287 qpair failed and we were unable to recover it. 00:30:13.287 [2024-07-23 01:51:26.034571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.287 [2024-07-23 01:51:26.034756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.287 [2024-07-23 01:51:26.034786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.287 qpair failed and we were unable to recover it. 
00:30:13.287 [2024-07-23 01:51:26.034985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.287 [2024-07-23 01:51:26.035122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.287 [2024-07-23 01:51:26.035147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.287 qpair failed and we were unable to recover it. 00:30:13.287 [2024-07-23 01:51:26.035295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.287 [2024-07-23 01:51:26.035463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.287 [2024-07-23 01:51:26.035489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.287 qpair failed and we were unable to recover it. 00:30:13.287 [2024-07-23 01:51:26.035652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.287 [2024-07-23 01:51:26.035794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.287 [2024-07-23 01:51:26.035823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.287 qpair failed and we were unable to recover it. 00:30:13.287 [2024-07-23 01:51:26.036021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.287 [2024-07-23 01:51:26.036187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.287 [2024-07-23 01:51:26.036216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.287 qpair failed and we were unable to recover it. 
00:30:13.287 [2024-07-23 01:51:26.036406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.287 [2024-07-23 01:51:26.036573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.287 [2024-07-23 01:51:26.036625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.287 qpair failed and we were unable to recover it. 00:30:13.287 [2024-07-23 01:51:26.036807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.287 [2024-07-23 01:51:26.037061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.287 [2024-07-23 01:51:26.037113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.287 qpair failed and we were unable to recover it. 00:30:13.287 [2024-07-23 01:51:26.037365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.287 [2024-07-23 01:51:26.037571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.287 [2024-07-23 01:51:26.037600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.287 qpair failed and we were unable to recover it. 00:30:13.287 [2024-07-23 01:51:26.037777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.287 [2024-07-23 01:51:26.037918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.287 [2024-07-23 01:51:26.037946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.287 qpair failed and we were unable to recover it. 
00:30:13.287 [2024-07-23 01:51:26.038168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.287 [2024-07-23 01:51:26.038506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.287 [2024-07-23 01:51:26.038559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.287 qpair failed and we were unable to recover it. 00:30:13.287 [2024-07-23 01:51:26.038750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.287 [2024-07-23 01:51:26.038998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.287 [2024-07-23 01:51:26.039055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.287 qpair failed and we were unable to recover it. 00:30:13.287 [2024-07-23 01:51:26.039333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.287 [2024-07-23 01:51:26.039516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.287 [2024-07-23 01:51:26.039542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.287 qpair failed and we were unable to recover it. 00:30:13.287 [2024-07-23 01:51:26.039760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.287 [2024-07-23 01:51:26.039969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.287 [2024-07-23 01:51:26.040019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.287 qpair failed and we were unable to recover it. 
00:30:13.287 [2024-07-23 01:51:26.040230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.287 [2024-07-23 01:51:26.040503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.287 [2024-07-23 01:51:26.040556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.287 qpair failed and we were unable to recover it. 00:30:13.287 [2024-07-23 01:51:26.040747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.287 [2024-07-23 01:51:26.040883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.287 [2024-07-23 01:51:26.040910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.287 qpair failed and we were unable to recover it. 00:30:13.287 [2024-07-23 01:51:26.041073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.287 [2024-07-23 01:51:26.041291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.287 [2024-07-23 01:51:26.041318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.287 qpair failed and we were unable to recover it. 00:30:13.287 [2024-07-23 01:51:26.041545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.287 [2024-07-23 01:51:26.041727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.287 [2024-07-23 01:51:26.041758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.287 qpair failed and we were unable to recover it. 
00:30:13.287 [2024-07-23 01:51:26.041937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.287 [2024-07-23 01:51:26.042119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.288 [2024-07-23 01:51:26.042145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.288 qpair failed and we were unable to recover it.
00:30:13.288 [2024-07-23 01:51:26.042313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.288 [2024-07-23 01:51:26.042449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.288 [2024-07-23 01:51:26.042477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.288 qpair failed and we were unable to recover it.
00:30:13.288 [2024-07-23 01:51:26.042676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.288 [2024-07-23 01:51:26.042857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.288 [2024-07-23 01:51:26.042886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.288 qpair failed and we were unable to recover it.
00:30:13.288 [2024-07-23 01:51:26.043096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.288 [2024-07-23 01:51:26.043368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.288 [2024-07-23 01:51:26.043418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.288 qpair failed and we were unable to recover it.
00:30:13.288 [2024-07-23 01:51:26.043597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.288 [2024-07-23 01:51:26.043795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.288 [2024-07-23 01:51:26.043822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.288 qpair failed and we were unable to recover it.
00:30:13.288 [2024-07-23 01:51:26.044011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.288 [2024-07-23 01:51:26.044178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.288 [2024-07-23 01:51:26.044204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.288 qpair failed and we were unable to recover it.
00:30:13.288 [2024-07-23 01:51:26.044433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.288 [2024-07-23 01:51:26.044667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.288 [2024-07-23 01:51:26.044697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.288 qpair failed and we were unable to recover it.
00:30:13.288 [2024-07-23 01:51:26.044852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.288 [2024-07-23 01:51:26.045010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.288 [2024-07-23 01:51:26.045039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.288 qpair failed and we were unable to recover it.
00:30:13.288 [2024-07-23 01:51:26.045214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.288 [2024-07-23 01:51:26.045428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.288 [2024-07-23 01:51:26.045484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.288 qpair failed and we were unable to recover it.
00:30:13.288 [2024-07-23 01:51:26.045693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.288 [2024-07-23 01:51:26.045883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.288 [2024-07-23 01:51:26.045912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.288 qpair failed and we were unable to recover it.
00:30:13.288 [2024-07-23 01:51:26.046069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.288 [2024-07-23 01:51:26.046241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.288 [2024-07-23 01:51:26.046285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.288 qpair failed and we were unable to recover it.
00:30:13.288 [2024-07-23 01:51:26.046476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.288 [2024-07-23 01:51:26.046706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.288 [2024-07-23 01:51:26.046732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.288 qpair failed and we were unable to recover it.
00:30:13.288 [2024-07-23 01:51:26.046923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.288 [2024-07-23 01:51:26.047094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.288 [2024-07-23 01:51:26.047123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.288 qpair failed and we were unable to recover it.
00:30:13.288 [2024-07-23 01:51:26.047303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.288 [2024-07-23 01:51:26.047474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.288 [2024-07-23 01:51:26.047500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.288 qpair failed and we were unable to recover it.
00:30:13.288 [2024-07-23 01:51:26.047692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.288 [2024-07-23 01:51:26.047837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.288 [2024-07-23 01:51:26.047865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.288 qpair failed and we were unable to recover it.
00:30:13.288 [2024-07-23 01:51:26.048056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.288 [2024-07-23 01:51:26.048274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.288 [2024-07-23 01:51:26.048326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.288 qpair failed and we were unable to recover it.
00:30:13.288 [2024-07-23 01:51:26.048540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.288 [2024-07-23 01:51:26.048725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.288 [2024-07-23 01:51:26.048756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.288 qpair failed and we were unable to recover it.
00:30:13.288 [2024-07-23 01:51:26.048943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.288 [2024-07-23 01:51:26.049130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.288 [2024-07-23 01:51:26.049160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.288 qpair failed and we were unable to recover it.
00:30:13.288 [2024-07-23 01:51:26.049421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.288 [2024-07-23 01:51:26.049637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.288 [2024-07-23 01:51:26.049667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.288 qpair failed and we were unable to recover it.
00:30:13.288 [2024-07-23 01:51:26.049848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.288 [2024-07-23 01:51:26.050020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.288 [2024-07-23 01:51:26.050049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.288 qpair failed and we were unable to recover it.
00:30:13.288 [2024-07-23 01:51:26.050228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.288 [2024-07-23 01:51:26.050484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.288 [2024-07-23 01:51:26.050543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.288 qpair failed and we were unable to recover it.
00:30:13.288 [2024-07-23 01:51:26.050765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.288 [2024-07-23 01:51:26.051017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.288 [2024-07-23 01:51:26.051081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.288 qpair failed and we were unable to recover it.
00:30:13.288 [2024-07-23 01:51:26.051244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.288 [2024-07-23 01:51:26.051447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.288 [2024-07-23 01:51:26.051473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.288 qpair failed and we were unable to recover it.
00:30:13.288 [2024-07-23 01:51:26.051643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.288 [2024-07-23 01:51:26.051855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.288 [2024-07-23 01:51:26.051885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.288 qpair failed and we were unable to recover it.
00:30:13.288 [2024-07-23 01:51:26.052067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.288 [2024-07-23 01:51:26.052281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.288 [2024-07-23 01:51:26.052310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.288 qpair failed and we were unable to recover it.
00:30:13.288 [2024-07-23 01:51:26.052467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.288 [2024-07-23 01:51:26.052601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.288 [2024-07-23 01:51:26.052633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.288 qpair failed and we were unable to recover it.
00:30:13.288 [2024-07-23 01:51:26.052824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.288 [2024-07-23 01:51:26.053011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.288 [2024-07-23 01:51:26.053054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.288 qpair failed and we were unable to recover it.
00:30:13.288 [2024-07-23 01:51:26.053264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.288 [2024-07-23 01:51:26.053447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.288 [2024-07-23 01:51:26.053476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.288 qpair failed and we were unable to recover it.
00:30:13.288 [2024-07-23 01:51:26.053687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.288 [2024-07-23 01:51:26.053869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.288 [2024-07-23 01:51:26.053898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.288 qpair failed and we were unable to recover it.
00:30:13.288 [2024-07-23 01:51:26.054071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.288 [2024-07-23 01:51:26.054238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.288 [2024-07-23 01:51:26.054281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.288 qpair failed and we were unable to recover it.
00:30:13.288 [2024-07-23 01:51:26.054473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.288 [2024-07-23 01:51:26.054637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.288 [2024-07-23 01:51:26.054664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.288 qpair failed and we were unable to recover it.
00:30:13.288 [2024-07-23 01:51:26.054827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.288 [2024-07-23 01:51:26.055056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.288 [2024-07-23 01:51:26.055109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.288 qpair failed and we were unable to recover it.
00:30:13.288 [2024-07-23 01:51:26.055325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.288 [2024-07-23 01:51:26.055492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.288 [2024-07-23 01:51:26.055520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.288 qpair failed and we were unable to recover it.
00:30:13.288 [2024-07-23 01:51:26.055688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.288 [2024-07-23 01:51:26.055828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.288 [2024-07-23 01:51:26.055854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.288 qpair failed and we were unable to recover it.
00:30:13.288 [2024-07-23 01:51:26.056041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.288 [2024-07-23 01:51:26.056404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.288 [2024-07-23 01:51:26.056459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.288 qpair failed and we were unable to recover it.
00:30:13.288 [2024-07-23 01:51:26.056669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.288 [2024-07-23 01:51:26.056854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.288 [2024-07-23 01:51:26.056884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.288 qpair failed and we were unable to recover it.
00:30:13.288 [2024-07-23 01:51:26.057088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.288 [2024-07-23 01:51:26.057397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.288 [2024-07-23 01:51:26.057456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.288 qpair failed and we were unable to recover it.
00:30:13.288 [2024-07-23 01:51:26.057669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.288 [2024-07-23 01:51:26.057814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.288 [2024-07-23 01:51:26.057842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.288 qpair failed and we were unable to recover it.
00:30:13.289 [2024-07-23 01:51:26.058029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.289 [2024-07-23 01:51:26.058347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.289 [2024-07-23 01:51:26.058399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.289 qpair failed and we were unable to recover it.
00:30:13.289 [2024-07-23 01:51:26.058619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.289 [2024-07-23 01:51:26.058832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.289 [2024-07-23 01:51:26.058861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.289 qpair failed and we were unable to recover it.
00:30:13.289 [2024-07-23 01:51:26.059076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.289 [2024-07-23 01:51:26.059258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.289 [2024-07-23 01:51:26.059287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.289 qpair failed and we were unable to recover it.
00:30:13.289 [2024-07-23 01:51:26.059444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.289 [2024-07-23 01:51:26.059603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.289 [2024-07-23 01:51:26.059637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.289 qpair failed and we were unable to recover it.
00:30:13.289 [2024-07-23 01:51:26.059822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.289 [2024-07-23 01:51:26.060057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.289 [2024-07-23 01:51:26.060086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.289 qpair failed and we were unable to recover it.
00:30:13.289 [2024-07-23 01:51:26.060259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.289 [2024-07-23 01:51:26.060522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.289 [2024-07-23 01:51:26.060582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.289 qpair failed and we were unable to recover it.
00:30:13.289 [2024-07-23 01:51:26.060781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.289 [2024-07-23 01:51:26.060968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.289 [2024-07-23 01:51:26.060997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.289 qpair failed and we were unable to recover it.
00:30:13.289 [2024-07-23 01:51:26.061184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.289 [2024-07-23 01:51:26.061366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.289 [2024-07-23 01:51:26.061394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.289 qpair failed and we were unable to recover it.
00:30:13.289 [2024-07-23 01:51:26.061600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.289 [2024-07-23 01:51:26.061800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.289 [2024-07-23 01:51:26.061830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.289 qpair failed and we were unable to recover it.
00:30:13.289 [2024-07-23 01:51:26.062010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.289 [2024-07-23 01:51:26.062192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.289 [2024-07-23 01:51:26.062218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.289 qpair failed and we were unable to recover it.
00:30:13.289 [2024-07-23 01:51:26.062386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.289 [2024-07-23 01:51:26.062589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.289 [2024-07-23 01:51:26.062628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.289 qpair failed and we were unable to recover it.
00:30:13.289 [2024-07-23 01:51:26.062804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.289 [2024-07-23 01:51:26.063002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.289 [2024-07-23 01:51:26.063028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.289 qpair failed and we were unable to recover it.
00:30:13.289 [2024-07-23 01:51:26.063214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.289 [2024-07-23 01:51:26.063411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.289 [2024-07-23 01:51:26.063440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.289 qpair failed and we were unable to recover it.
00:30:13.289 [2024-07-23 01:51:26.063627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.289 [2024-07-23 01:51:26.063823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.289 [2024-07-23 01:51:26.063849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.289 qpair failed and we were unable to recover it.
00:30:13.289 [2024-07-23 01:51:26.064049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.289 [2024-07-23 01:51:26.064215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.289 [2024-07-23 01:51:26.064241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.289 qpair failed and we were unable to recover it.
00:30:13.289 [2024-07-23 01:51:26.064407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.289 [2024-07-23 01:51:26.064553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.289 [2024-07-23 01:51:26.064579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.289 qpair failed and we were unable to recover it.
00:30:13.289 [2024-07-23 01:51:26.064731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.289 [2024-07-23 01:51:26.064864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.289 [2024-07-23 01:51:26.064890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.289 qpair failed and we were unable to recover it.
00:30:13.289 [2024-07-23 01:51:26.065077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.289 [2024-07-23 01:51:26.065264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.289 [2024-07-23 01:51:26.065292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.289 qpair failed and we were unable to recover it.
00:30:13.289 [2024-07-23 01:51:26.065495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.289 [2024-07-23 01:51:26.065639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.289 [2024-07-23 01:51:26.065668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.289 qpair failed and we were unable to recover it.
00:30:13.289 [2024-07-23 01:51:26.065855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.289 [2024-07-23 01:51:26.066018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.289 [2024-07-23 01:51:26.066045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.289 qpair failed and we were unable to recover it.
00:30:13.289 [2024-07-23 01:51:26.066203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.289 [2024-07-23 01:51:26.066432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.289 [2024-07-23 01:51:26.066483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.289 qpair failed and we were unable to recover it.
00:30:13.289 [2024-07-23 01:51:26.066670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.289 [2024-07-23 01:51:26.066861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.289 [2024-07-23 01:51:26.066892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.289 qpair failed and we were unable to recover it.
00:30:13.289 [2024-07-23 01:51:26.067037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.289 [2024-07-23 01:51:26.067206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.289 [2024-07-23 01:51:26.067250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.289 qpair failed and we were unable to recover it.
00:30:13.289 [2024-07-23 01:51:26.067410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.289 [2024-07-23 01:51:26.067556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.289 [2024-07-23 01:51:26.067582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.289 qpair failed and we were unable to recover it.
00:30:13.289 [2024-07-23 01:51:26.067759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.289 [2024-07-23 01:51:26.067939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.289 [2024-07-23 01:51:26.067967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.289 qpair failed and we were unable to recover it.
00:30:13.289 [2024-07-23 01:51:26.068109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.289 [2024-07-23 01:51:26.068317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.289 [2024-07-23 01:51:26.068345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.289 qpair failed and we were unable to recover it.
00:30:13.289 [2024-07-23 01:51:26.068536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.289 [2024-07-23 01:51:26.068730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.289 [2024-07-23 01:51:26.068756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.289 qpair failed and we were unable to recover it.
00:30:13.289 [2024-07-23 01:51:26.068924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.289 [2024-07-23 01:51:26.069104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.289 [2024-07-23 01:51:26.069133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.289 qpair failed and we were unable to recover it.
00:30:13.289 [2024-07-23 01:51:26.069313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.289 [2024-07-23 01:51:26.069466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.289 [2024-07-23 01:51:26.069496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.289 qpair failed and we were unable to recover it.
00:30:13.289 [2024-07-23 01:51:26.069682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.289 [2024-07-23 01:51:26.069891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.289 [2024-07-23 01:51:26.069917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.289 qpair failed and we were unable to recover it.
00:30:13.289 [2024-07-23 01:51:26.070133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.289 [2024-07-23 01:51:26.070358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.289 [2024-07-23 01:51:26.070411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.289 qpair failed and we were unable to recover it.
00:30:13.289 [2024-07-23 01:51:26.070651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.289 [2024-07-23 01:51:26.070864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.289 [2024-07-23 01:51:26.070907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.289 qpair failed and we were unable to recover it.
00:30:13.289 [2024-07-23 01:51:26.071135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.289 [2024-07-23 01:51:26.071302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.289 [2024-07-23 01:51:26.071330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.289 qpair failed and we were unable to recover it.
00:30:13.289 [2024-07-23 01:51:26.071502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.289 [2024-07-23 01:51:26.071640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.289 [2024-07-23 01:51:26.071667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.289 qpair failed and we were unable to recover it.
00:30:13.289 [2024-07-23 01:51:26.071815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.289 [2024-07-23 01:51:26.071998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.289 [2024-07-23 01:51:26.072026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.289 qpair failed and we were unable to recover it. 00:30:13.290 [2024-07-23 01:51:26.072218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.290 [2024-07-23 01:51:26.072412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.290 [2024-07-23 01:51:26.072441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.290 qpair failed and we were unable to recover it. 00:30:13.290 [2024-07-23 01:51:26.072651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.290 [2024-07-23 01:51:26.072860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.290 [2024-07-23 01:51:26.072890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.290 qpair failed and we were unable to recover it. 00:30:13.290 [2024-07-23 01:51:26.073038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.290 [2024-07-23 01:51:26.073289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.290 [2024-07-23 01:51:26.073341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.290 qpair failed and we were unable to recover it. 
00:30:13.290 [2024-07-23 01:51:26.073527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.290 [2024-07-23 01:51:26.073706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.290 [2024-07-23 01:51:26.073736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.290 qpair failed and we were unable to recover it. 00:30:13.290 [2024-07-23 01:51:26.073924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.290 [2024-07-23 01:51:26.074163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.290 [2024-07-23 01:51:26.074225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.290 qpair failed and we were unable to recover it. 00:30:13.290 [2024-07-23 01:51:26.074577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.290 [2024-07-23 01:51:26.074829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.290 [2024-07-23 01:51:26.074858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.290 qpair failed and we were unable to recover it. 00:30:13.290 [2024-07-23 01:51:26.075038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.290 [2024-07-23 01:51:26.075332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.290 [2024-07-23 01:51:26.075389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.290 qpair failed and we were unable to recover it. 
00:30:13.290 [2024-07-23 01:51:26.075577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.290 [2024-07-23 01:51:26.075758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.290 [2024-07-23 01:51:26.075788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.290 qpair failed and we were unable to recover it. 00:30:13.290 [2024-07-23 01:51:26.075949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.290 [2024-07-23 01:51:26.076142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.290 [2024-07-23 01:51:26.076186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.290 qpair failed and we were unable to recover it. 00:30:13.290 [2024-07-23 01:51:26.076509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.290 [2024-07-23 01:51:26.076761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.290 [2024-07-23 01:51:26.076790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.290 qpair failed and we were unable to recover it. 00:30:13.290 [2024-07-23 01:51:26.076989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.290 [2024-07-23 01:51:26.077158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.290 [2024-07-23 01:51:26.077202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.290 qpair failed and we were unable to recover it. 
00:30:13.290 [2024-07-23 01:51:26.077415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.290 [2024-07-23 01:51:26.077564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.290 [2024-07-23 01:51:26.077592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.290 qpair failed and we were unable to recover it. 00:30:13.290 [2024-07-23 01:51:26.077787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.290 [2024-07-23 01:51:26.077994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.290 [2024-07-23 01:51:26.078023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.290 qpair failed and we were unable to recover it. 00:30:13.290 [2024-07-23 01:51:26.078329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.290 [2024-07-23 01:51:26.078582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.290 [2024-07-23 01:51:26.078608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.290 qpair failed and we were unable to recover it. 00:30:13.290 [2024-07-23 01:51:26.078787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.290 [2024-07-23 01:51:26.078948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.290 [2024-07-23 01:51:26.078977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.290 qpair failed and we were unable to recover it. 
00:30:13.290 [2024-07-23 01:51:26.079154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.290 [2024-07-23 01:51:26.079374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.290 [2024-07-23 01:51:26.079401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.290 qpair failed and we were unable to recover it. 00:30:13.290 [2024-07-23 01:51:26.079582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.290 [2024-07-23 01:51:26.079749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.290 [2024-07-23 01:51:26.079779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.290 qpair failed and we were unable to recover it. 00:30:13.290 [2024-07-23 01:51:26.079936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.290 [2024-07-23 01:51:26.080151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.290 [2024-07-23 01:51:26.080178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.290 qpair failed and we were unable to recover it. 00:30:13.290 [2024-07-23 01:51:26.080396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.290 [2024-07-23 01:51:26.080586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.290 [2024-07-23 01:51:26.080625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.290 qpair failed and we were unable to recover it. 
00:30:13.290 [2024-07-23 01:51:26.080775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.290 [2024-07-23 01:51:26.080961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.290 [2024-07-23 01:51:26.080990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.290 qpair failed and we were unable to recover it. 00:30:13.290 [2024-07-23 01:51:26.081153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.290 [2024-07-23 01:51:26.081317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.290 [2024-07-23 01:51:26.081360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.290 qpair failed and we were unable to recover it. 00:30:13.290 [2024-07-23 01:51:26.081508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.290 [2024-07-23 01:51:26.081744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.290 [2024-07-23 01:51:26.081774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.290 qpair failed and we were unable to recover it. 00:30:13.290 [2024-07-23 01:51:26.081963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.290 [2024-07-23 01:51:26.082146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.290 [2024-07-23 01:51:26.082175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.290 qpair failed and we were unable to recover it. 
00:30:13.290 [2024-07-23 01:51:26.082327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.290 [2024-07-23 01:51:26.082505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.290 [2024-07-23 01:51:26.082533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.291 qpair failed and we were unable to recover it. 00:30:13.291 [2024-07-23 01:51:26.082709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.291 [2024-07-23 01:51:26.082856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.291 [2024-07-23 01:51:26.082899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.291 qpair failed and we were unable to recover it. 00:30:13.291 [2024-07-23 01:51:26.083083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.291 [2024-07-23 01:51:26.083407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.291 [2024-07-23 01:51:26.083461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.291 qpair failed and we were unable to recover it. 00:30:13.291 [2024-07-23 01:51:26.083668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.291 [2024-07-23 01:51:26.083852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.291 [2024-07-23 01:51:26.083883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.291 qpair failed and we were unable to recover it. 
00:30:13.291 [2024-07-23 01:51:26.084093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.291 [2024-07-23 01:51:26.084403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.291 [2024-07-23 01:51:26.084475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.291 qpair failed and we were unable to recover it. 00:30:13.291 [2024-07-23 01:51:26.084665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.291 [2024-07-23 01:51:26.084879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.291 [2024-07-23 01:51:26.084908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.291 qpair failed and we were unable to recover it. 00:30:13.291 [2024-07-23 01:51:26.085187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.291 [2024-07-23 01:51:26.085400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.291 [2024-07-23 01:51:26.085429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.291 qpair failed and we were unable to recover it. 00:30:13.291 [2024-07-23 01:51:26.085636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.291 [2024-07-23 01:51:26.085821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.291 [2024-07-23 01:51:26.085850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.291 qpair failed and we were unable to recover it. 
00:30:13.291 [2024-07-23 01:51:26.086042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.291 [2024-07-23 01:51:26.086229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.291 [2024-07-23 01:51:26.086258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.291 qpair failed and we were unable to recover it. 00:30:13.291 [2024-07-23 01:51:26.086442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.291 [2024-07-23 01:51:26.086624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.291 [2024-07-23 01:51:26.086654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.291 qpair failed and we were unable to recover it. 00:30:13.291 [2024-07-23 01:51:26.086865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.291 [2024-07-23 01:51:26.087207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.291 [2024-07-23 01:51:26.087257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.291 qpair failed and we were unable to recover it. 00:30:13.291 [2024-07-23 01:51:26.087460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.291 [2024-07-23 01:51:26.087643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.291 [2024-07-23 01:51:26.087671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.291 qpair failed and we were unable to recover it. 
00:30:13.291 [2024-07-23 01:51:26.087837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.291 [2024-07-23 01:51:26.088000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.291 [2024-07-23 01:51:26.088027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.291 qpair failed and we were unable to recover it. 00:30:13.291 [2024-07-23 01:51:26.088214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.291 [2024-07-23 01:51:26.088430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.291 [2024-07-23 01:51:26.088459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.291 qpair failed and we were unable to recover it. 00:30:13.291 [2024-07-23 01:51:26.088719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.291 [2024-07-23 01:51:26.088866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.291 [2024-07-23 01:51:26.088897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.291 qpair failed and we were unable to recover it. 00:30:13.291 [2024-07-23 01:51:26.089062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.291 [2024-07-23 01:51:26.089219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.291 [2024-07-23 01:51:26.089248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.291 qpair failed and we were unable to recover it. 
00:30:13.291 [2024-07-23 01:51:26.089453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.291 [2024-07-23 01:51:26.089645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.291 [2024-07-23 01:51:26.089675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.291 qpair failed and we were unable to recover it. 00:30:13.291 [2024-07-23 01:51:26.089873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.291 [2024-07-23 01:51:26.090097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.291 [2024-07-23 01:51:26.090150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.291 qpair failed and we were unable to recover it. 00:30:13.291 [2024-07-23 01:51:26.090459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.291 [2024-07-23 01:51:26.090695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.291 [2024-07-23 01:51:26.090725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.291 qpair failed and we were unable to recover it. 00:30:13.291 [2024-07-23 01:51:26.090911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.291 [2024-07-23 01:51:26.091053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.291 [2024-07-23 01:51:26.091082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.291 qpair failed and we were unable to recover it. 
00:30:13.291 [2024-07-23 01:51:26.091263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.291 [2024-07-23 01:51:26.091443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.291 [2024-07-23 01:51:26.091472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.291 qpair failed and we were unable to recover it. 00:30:13.291 [2024-07-23 01:51:26.091663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.291 [2024-07-23 01:51:26.091848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.291 [2024-07-23 01:51:26.091877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.291 qpair failed and we were unable to recover it. 00:30:13.291 [2024-07-23 01:51:26.092059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.291 [2024-07-23 01:51:26.092272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.291 [2024-07-23 01:51:26.092298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.291 qpair failed and we were unable to recover it. 00:30:13.291 [2024-07-23 01:51:26.092509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.291 [2024-07-23 01:51:26.092665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.291 [2024-07-23 01:51:26.092695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.291 qpair failed and we were unable to recover it. 
00:30:13.291 [2024-07-23 01:51:26.092854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.291 [2024-07-23 01:51:26.093061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.291 [2024-07-23 01:51:26.093090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.291 qpair failed and we were unable to recover it. 00:30:13.291 [2024-07-23 01:51:26.093280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.291 [2024-07-23 01:51:26.093473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.291 [2024-07-23 01:51:26.093502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.291 qpair failed and we were unable to recover it. 00:30:13.291 [2024-07-23 01:51:26.093696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.291 [2024-07-23 01:51:26.093840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.291 [2024-07-23 01:51:26.093882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.291 qpair failed and we were unable to recover it. 00:30:13.291 [2024-07-23 01:51:26.094066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.291 [2024-07-23 01:51:26.094268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.291 [2024-07-23 01:51:26.094297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.291 qpair failed and we were unable to recover it. 
00:30:13.291 [2024-07-23 01:51:26.094484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.291 [2024-07-23 01:51:26.094689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.291 [2024-07-23 01:51:26.094718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.291 qpair failed and we were unable to recover it. 00:30:13.291 [2024-07-23 01:51:26.094878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.291 [2024-07-23 01:51:26.095047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.291 [2024-07-23 01:51:26.095073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.291 qpair failed and we were unable to recover it. 00:30:13.291 [2024-07-23 01:51:26.095225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.291 [2024-07-23 01:51:26.095402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.291 [2024-07-23 01:51:26.095431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.291 qpair failed and we were unable to recover it. 00:30:13.291 [2024-07-23 01:51:26.095608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.291 [2024-07-23 01:51:26.095762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.291 [2024-07-23 01:51:26.095792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.291 qpair failed and we were unable to recover it. 
00:30:13.291 [2024-07-23 01:51:26.095967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.291 [2024-07-23 01:51:26.096147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.291 [2024-07-23 01:51:26.096178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.291 qpair failed and we were unable to recover it. 00:30:13.291 [2024-07-23 01:51:26.096360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.291 [2024-07-23 01:51:26.096530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.291 [2024-07-23 01:51:26.096573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.291 qpair failed and we were unable to recover it. 00:30:13.291 [2024-07-23 01:51:26.096768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.291 [2024-07-23 01:51:26.096981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.291 [2024-07-23 01:51:26.097008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.291 qpair failed and we were unable to recover it. 00:30:13.291 [2024-07-23 01:51:26.097180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.291 [2024-07-23 01:51:26.097489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.291 [2024-07-23 01:51:26.097545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.291 qpair failed and we were unable to recover it. 
00:30:13.291 [2024-07-23 01:51:26.097728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.291 [2024-07-23 01:51:26.097978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.291 [2024-07-23 01:51:26.098029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.291 qpair failed and we were unable to recover it. 00:30:13.291 [2024-07-23 01:51:26.098229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.291 [2024-07-23 01:51:26.098360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.291 [2024-07-23 01:51:26.098403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.291 qpair failed and we were unable to recover it. 00:30:13.292 [2024-07-23 01:51:26.098580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.292 [2024-07-23 01:51:26.098750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.292 [2024-07-23 01:51:26.098777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.292 qpair failed and we were unable to recover it. 00:30:13.292 [2024-07-23 01:51:26.098946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.292 [2024-07-23 01:51:26.099262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.292 [2024-07-23 01:51:26.099317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.292 qpair failed and we were unable to recover it. 
00:30:13.292 [2024-07-23 01:51:26.099531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.292 [2024-07-23 01:51:26.099741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.292 [2024-07-23 01:51:26.099770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.292 qpair failed and we were unable to recover it. 00:30:13.292 [2024-07-23 01:51:26.099952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.292 [2024-07-23 01:51:26.100157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.292 [2024-07-23 01:51:26.100210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.292 qpair failed and we were unable to recover it. 00:30:13.292 [2024-07-23 01:51:26.100438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.292 [2024-07-23 01:51:26.100603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.292 [2024-07-23 01:51:26.100653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.292 qpair failed and we were unable to recover it. 00:30:13.292 [2024-07-23 01:51:26.100834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.292 [2024-07-23 01:51:26.101048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.292 [2024-07-23 01:51:26.101075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.292 qpair failed and we were unable to recover it. 
00:30:13.292 [2024-07-23 01:51:26.101243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.292 [2024-07-23 01:51:26.101408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.292 [2024-07-23 01:51:26.101436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.292 qpair failed and we were unable to recover it. 00:30:13.292 [2024-07-23 01:51:26.101652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.292 [2024-07-23 01:51:26.101849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.292 [2024-07-23 01:51:26.101876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.292 qpair failed and we were unable to recover it. 00:30:13.292 [2024-07-23 01:51:26.102070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.292 [2024-07-23 01:51:26.102249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.292 [2024-07-23 01:51:26.102275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.292 qpair failed and we were unable to recover it. 00:30:13.292 [2024-07-23 01:51:26.102480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.292 [2024-07-23 01:51:26.102689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.292 [2024-07-23 01:51:26.102716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.292 qpair failed and we were unable to recover it. 
00:30:13.292 [2024-07-23 01:51:26.102886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.292 [2024-07-23 01:51:26.103168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.292 [2024-07-23 01:51:26.103220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.292 qpair failed and we were unable to recover it. 00:30:13.292 [2024-07-23 01:51:26.103432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.292 [2024-07-23 01:51:26.103622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.292 [2024-07-23 01:51:26.103651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.292 qpair failed and we were unable to recover it. 00:30:13.292 [2024-07-23 01:51:26.103827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.292 [2024-07-23 01:51:26.103993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.292 [2024-07-23 01:51:26.104034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.292 qpair failed and we were unable to recover it. 00:30:13.292 [2024-07-23 01:51:26.104219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.292 [2024-07-23 01:51:26.104402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.292 [2024-07-23 01:51:26.104431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.292 qpair failed and we were unable to recover it. 
00:30:13.292 [2024-07-23 01:51:26.104619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.292 [2024-07-23 01:51:26.104830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.292 [2024-07-23 01:51:26.104857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.292 qpair failed and we were unable to recover it. 00:30:13.292 [2024-07-23 01:51:26.105046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.292 [2024-07-23 01:51:26.105188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.292 [2024-07-23 01:51:26.105214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.292 qpair failed and we were unable to recover it. 00:30:13.292 [2024-07-23 01:51:26.105406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.292 [2024-07-23 01:51:26.105569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.292 [2024-07-23 01:51:26.105598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.292 qpair failed and we were unable to recover it. 00:30:13.292 [2024-07-23 01:51:26.105791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.292 [2024-07-23 01:51:26.106051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.292 [2024-07-23 01:51:26.106111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.292 qpair failed and we were unable to recover it. 
00:30:13.292 [2024-07-23 01:51:26.106291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.292 [2024-07-23 01:51:26.106580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.292 [2024-07-23 01:51:26.106649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.292 qpair failed and we were unable to recover it. 00:30:13.292 [2024-07-23 01:51:26.106871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.292 [2024-07-23 01:51:26.107086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.292 [2024-07-23 01:51:26.107115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.292 qpair failed and we were unable to recover it. 00:30:13.292 [2024-07-23 01:51:26.107299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.292 [2024-07-23 01:51:26.107626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.292 [2024-07-23 01:51:26.107673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.292 qpair failed and we were unable to recover it. 00:30:13.292 [2024-07-23 01:51:26.107884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.292 [2024-07-23 01:51:26.108072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.292 [2024-07-23 01:51:26.108100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.292 qpair failed and we were unable to recover it. 
00:30:13.292 [2024-07-23 01:51:26.108288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.292 [2024-07-23 01:51:26.108601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.292 [2024-07-23 01:51:26.108663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.292 qpair failed and we were unable to recover it. 00:30:13.292 [2024-07-23 01:51:26.108856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.292 [2024-07-23 01:51:26.109043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.292 [2024-07-23 01:51:26.109072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.292 qpair failed and we were unable to recover it. 00:30:13.292 [2024-07-23 01:51:26.109277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.292 [2024-07-23 01:51:26.109454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.292 [2024-07-23 01:51:26.109484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.292 qpair failed and we were unable to recover it. 00:30:13.292 [2024-07-23 01:51:26.109658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.292 [2024-07-23 01:51:26.109878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.292 [2024-07-23 01:51:26.109904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.292 qpair failed and we were unable to recover it. 
00:30:13.292 [2024-07-23 01:51:26.110093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.292 [2024-07-23 01:51:26.110368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.292 [2024-07-23 01:51:26.110419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.292 qpair failed and we were unable to recover it. 00:30:13.292 [2024-07-23 01:51:26.110605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.292 [2024-07-23 01:51:26.110803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.292 [2024-07-23 01:51:26.110832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.292 qpair failed and we were unable to recover it. 00:30:13.292 [2024-07-23 01:51:26.111143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.292 [2024-07-23 01:51:26.111546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.292 [2024-07-23 01:51:26.111597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.292 qpair failed and we were unable to recover it. 00:30:13.292 [2024-07-23 01:51:26.111833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.292 [2024-07-23 01:51:26.112000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.292 [2024-07-23 01:51:26.112026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.292 qpair failed and we were unable to recover it. 
00:30:13.292 [2024-07-23 01:51:26.112245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.292 [2024-07-23 01:51:26.112451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.292 [2024-07-23 01:51:26.112507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.292 qpair failed and we were unable to recover it. 00:30:13.292 [2024-07-23 01:51:26.112725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.292 [2024-07-23 01:51:26.112872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.292 [2024-07-23 01:51:26.112899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.292 qpair failed and we were unable to recover it. 00:30:13.292 [2024-07-23 01:51:26.113036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.292 [2024-07-23 01:51:26.113221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.292 [2024-07-23 01:51:26.113264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.292 qpair failed and we were unable to recover it. 00:30:13.292 [2024-07-23 01:51:26.113447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.292 [2024-07-23 01:51:26.113633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.292 [2024-07-23 01:51:26.113663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.292 qpair failed and we were unable to recover it. 
00:30:13.292 [2024-07-23 01:51:26.113845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.292 [2024-07-23 01:51:26.114053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.292 [2024-07-23 01:51:26.114079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.292 qpair failed and we were unable to recover it. 00:30:13.292 [2024-07-23 01:51:26.114273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.292 [2024-07-23 01:51:26.114433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.292 [2024-07-23 01:51:26.114459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.292 qpair failed and we were unable to recover it. 00:30:13.292 [2024-07-23 01:51:26.114627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.292 [2024-07-23 01:51:26.114824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.292 [2024-07-23 01:51:26.114853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.292 qpair failed and we were unable to recover it. 00:30:13.292 [2024-07-23 01:51:26.115033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.292 [2024-07-23 01:51:26.115198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.292 [2024-07-23 01:51:26.115224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.292 qpair failed and we were unable to recover it. 
00:30:13.292 [2024-07-23 01:51:26.115407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.292 [2024-07-23 01:51:26.115624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.292 [2024-07-23 01:51:26.115654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.292 qpair failed and we were unable to recover it. 00:30:13.292 [2024-07-23 01:51:26.115846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.292 [2024-07-23 01:51:26.116038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.292 [2024-07-23 01:51:26.116065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.292 qpair failed and we were unable to recover it. 00:30:13.292 [2024-07-23 01:51:26.116348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.292 [2024-07-23 01:51:26.116580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.292 [2024-07-23 01:51:26.116608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.292 qpair failed and we were unable to recover it. 00:30:13.292 [2024-07-23 01:51:26.116798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.292 [2024-07-23 01:51:26.116945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.292 [2024-07-23 01:51:26.116971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.292 qpair failed and we were unable to recover it. 
00:30:13.292 [2024-07-23 01:51:26.117189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.292 [2024-07-23 01:51:26.117403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.292 [2024-07-23 01:51:26.117429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.292 qpair failed and we were unable to recover it. 00:30:13.292 [2024-07-23 01:51:26.117598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.292 [2024-07-23 01:51:26.117800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.292 [2024-07-23 01:51:26.117830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.292 qpair failed and we were unable to recover it. 00:30:13.292 [2024-07-23 01:51:26.118098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.292 [2024-07-23 01:51:26.118328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.292 [2024-07-23 01:51:26.118354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.292 qpair failed and we were unable to recover it. 00:30:13.292 [2024-07-23 01:51:26.118536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.292 [2024-07-23 01:51:26.118691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.292 [2024-07-23 01:51:26.118721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.292 qpair failed and we were unable to recover it. 
00:30:13.292 [2024-07-23 01:51:26.118865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.292 [2024-07-23 01:51:26.119047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.292 [2024-07-23 01:51:26.119116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.292 qpair failed and we were unable to recover it. 00:30:13.292 [2024-07-23 01:51:26.119312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.292 [2024-07-23 01:51:26.119479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.292 [2024-07-23 01:51:26.119521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.292 qpair failed and we were unable to recover it. 00:30:13.292 [2024-07-23 01:51:26.119708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.292 [2024-07-23 01:51:26.119921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.292 [2024-07-23 01:51:26.119947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.292 qpair failed and we were unable to recover it. 00:30:13.293 [2024-07-23 01:51:26.120159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.293 [2024-07-23 01:51:26.120439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.293 [2024-07-23 01:51:26.120468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.293 qpair failed and we were unable to recover it. 
00:30:13.293 [2024-07-23 01:51:26.120662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.293 [2024-07-23 01:51:26.120826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.293 [2024-07-23 01:51:26.120867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.293 qpair failed and we were unable to recover it. 00:30:13.293 [2024-07-23 01:51:26.121052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.293 [2024-07-23 01:51:26.121258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.293 [2024-07-23 01:51:26.121287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.293 qpair failed and we were unable to recover it. 00:30:13.293 [2024-07-23 01:51:26.121505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.293 [2024-07-23 01:51:26.121641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.293 [2024-07-23 01:51:26.121668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.293 qpair failed and we were unable to recover it. 00:30:13.293 [2024-07-23 01:51:26.121879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.293 [2024-07-23 01:51:26.122076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.293 [2024-07-23 01:51:26.122103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.293 qpair failed and we were unable to recover it. 
00:30:13.293 [2024-07-23 01:51:26.122324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.293 [2024-07-23 01:51:26.122535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.293 [2024-07-23 01:51:26.122564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.293 qpair failed and we were unable to recover it. 00:30:13.293 [2024-07-23 01:51:26.122751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.293 [2024-07-23 01:51:26.122930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.293 [2024-07-23 01:51:26.122960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.293 qpair failed and we were unable to recover it. 00:30:13.293 [2024-07-23 01:51:26.123163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.293 [2024-07-23 01:51:26.123360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.293 [2024-07-23 01:51:26.123386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.293 qpair failed and we were unable to recover it. 00:30:13.293 [2024-07-23 01:51:26.123550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.293 [2024-07-23 01:51:26.123744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.293 [2024-07-23 01:51:26.123771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.293 qpair failed and we were unable to recover it. 
00:30:13.293 [2024-07-23 01:51:26.123955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.293 [2024-07-23 01:51:26.124121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.293 [2024-07-23 01:51:26.124148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.293 qpair failed and we were unable to recover it. 00:30:13.293 [2024-07-23 01:51:26.124316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.293 [2024-07-23 01:51:26.124499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.293 [2024-07-23 01:51:26.124530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.293 qpair failed and we were unable to recover it. 00:30:13.293 [2024-07-23 01:51:26.124732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.293 [2024-07-23 01:51:26.124920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.293 [2024-07-23 01:51:26.124949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.293 qpair failed and we were unable to recover it. 00:30:13.293 [2024-07-23 01:51:26.125127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.293 [2024-07-23 01:51:26.125334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.293 [2024-07-23 01:51:26.125363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.293 qpair failed and we were unable to recover it. 
00:30:13.293 [2024-07-23 01:51:26.125574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.293 [2024-07-23 01:51:26.125746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.293 [2024-07-23 01:51:26.125776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.293 qpair failed and we were unable to recover it. 00:30:13.293 [2024-07-23 01:51:26.125940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.293 [2024-07-23 01:51:26.126081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.293 [2024-07-23 01:51:26.126123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.293 qpair failed and we were unable to recover it. 00:30:13.293 [2024-07-23 01:51:26.126445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.293 [2024-07-23 01:51:26.126691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.293 [2024-07-23 01:51:26.126718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.293 qpair failed and we were unable to recover it. 00:30:13.293 [2024-07-23 01:51:26.126878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.293 [2024-07-23 01:51:26.127021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.293 [2024-07-23 01:51:26.127061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.293 qpair failed and we were unable to recover it. 
00:30:13.293 [2024-07-23 01:51:26.127241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.293 [2024-07-23 01:51:26.127446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.293 [2024-07-23 01:51:26.127475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.293 qpair failed and we were unable to recover it. 00:30:13.293 [2024-07-23 01:51:26.127683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.293 [2024-07-23 01:51:26.127890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.293 [2024-07-23 01:51:26.127919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.293 qpair failed and we were unable to recover it. 00:30:13.293 [2024-07-23 01:51:26.128255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.293 [2024-07-23 01:51:26.128502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.293 [2024-07-23 01:51:26.128532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.293 qpair failed and we were unable to recover it. 00:30:13.293 [2024-07-23 01:51:26.128712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.293 [2024-07-23 01:51:26.128926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.293 [2024-07-23 01:51:26.128953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.293 qpair failed and we were unable to recover it. 
00:30:13.293 [2024-07-23 01:51:26.129140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.293 [2024-07-23 01:51:26.129324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.293 [2024-07-23 01:51:26.129378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.293 qpair failed and we were unable to recover it. 00:30:13.293 [2024-07-23 01:51:26.129570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.293 [2024-07-23 01:51:26.129741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.293 [2024-07-23 01:51:26.129768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.293 qpair failed and we were unable to recover it. 00:30:13.293 [2024-07-23 01:51:26.129950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.293 [2024-07-23 01:51:26.130126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.293 [2024-07-23 01:51:26.130196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.293 qpair failed and we were unable to recover it. 00:30:13.293 [2024-07-23 01:51:26.130389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.293 [2024-07-23 01:51:26.130603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.293 [2024-07-23 01:51:26.130641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.293 qpair failed and we were unable to recover it. 
00:30:13.293 [2024-07-23 01:51:26.130793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.293 [2024-07-23 01:51:26.130945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.293 [2024-07-23 01:51:26.130974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.293 qpair failed and we were unable to recover it. 00:30:13.293 [2024-07-23 01:51:26.131163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.293 [2024-07-23 01:51:26.131301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.293 [2024-07-23 01:51:26.131342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.293 qpair failed and we were unable to recover it. 00:30:13.293 [2024-07-23 01:51:26.131536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.293 [2024-07-23 01:51:26.131728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.293 [2024-07-23 01:51:26.131757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.293 qpair failed and we were unable to recover it. 00:30:13.293 [2024-07-23 01:51:26.131904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.293 [2024-07-23 01:51:26.132088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.293 [2024-07-23 01:51:26.132115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.293 qpair failed and we were unable to recover it. 
00:30:13.293 [2024-07-23 01:51:26.132289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.293 [2024-07-23 01:51:26.132425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.293 [2024-07-23 01:51:26.132451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.293 qpair failed and we were unable to recover it. 00:30:13.293 [2024-07-23 01:51:26.132656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.293 [2024-07-23 01:51:26.132834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.293 [2024-07-23 01:51:26.132863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.293 qpair failed and we were unable to recover it. 00:30:13.293 [2024-07-23 01:51:26.133148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.293 [2024-07-23 01:51:26.133540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.293 [2024-07-23 01:51:26.133598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.293 qpair failed and we were unable to recover it. 00:30:13.293 [2024-07-23 01:51:26.133802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.293 [2024-07-23 01:51:26.134017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.293 [2024-07-23 01:51:26.134043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.293 qpair failed and we were unable to recover it. 
00:30:13.293 [2024-07-23 01:51:26.134213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.293 [2024-07-23 01:51:26.134382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.293 [2024-07-23 01:51:26.134423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.293 qpair failed and we were unable to recover it.
00:30:13.293 [2024-07-23 01:51:26.134623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.293 [2024-07-23 01:51:26.134790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.293 [2024-07-23 01:51:26.134815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.293 qpair failed and we were unable to recover it.
00:30:13.293 [2024-07-23 01:51:26.134951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.293 [2024-07-23 01:51:26.135119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.293 [2024-07-23 01:51:26.135144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.293 qpair failed and we were unable to recover it.
00:30:13.293 [2024-07-23 01:51:26.135345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.293 [2024-07-23 01:51:26.135528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.293 [2024-07-23 01:51:26.135556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.293 qpair failed and we were unable to recover it.
00:30:13.293 [2024-07-23 01:51:26.135746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.293 [2024-07-23 01:51:26.136000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.293 [2024-07-23 01:51:26.136054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.293 qpair failed and we were unable to recover it.
00:30:13.293 [2024-07-23 01:51:26.136265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.293 [2024-07-23 01:51:26.136580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.293 [2024-07-23 01:51:26.136641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.293 qpair failed and we were unable to recover it.
00:30:13.293 [2024-07-23 01:51:26.136833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.293 [2024-07-23 01:51:26.136999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.293 [2024-07-23 01:51:26.137028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.293 qpair failed and we were unable to recover it.
00:30:13.293 [2024-07-23 01:51:26.137240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.293 [2024-07-23 01:51:26.137500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.293 [2024-07-23 01:51:26.137555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.293 qpair failed and we were unable to recover it.
00:30:13.293 [2024-07-23 01:51:26.137767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.293 [2024-07-23 01:51:26.137949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.293 [2024-07-23 01:51:26.137978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.293 qpair failed and we were unable to recover it.
00:30:13.293 [2024-07-23 01:51:26.138169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.293 [2024-07-23 01:51:26.138355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.293 [2024-07-23 01:51:26.138381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.293 qpair failed and we were unable to recover it.
00:30:13.293 [2024-07-23 01:51:26.138590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.293 [2024-07-23 01:51:26.138814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.293 [2024-07-23 01:51:26.138843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.293 qpair failed and we were unable to recover it.
00:30:13.293 [2024-07-23 01:51:26.139059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.293 [2024-07-23 01:51:26.139378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.293 [2024-07-23 01:51:26.139430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.293 qpair failed and we were unable to recover it.
00:30:13.293 [2024-07-23 01:51:26.139657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.293 [2024-07-23 01:51:26.139864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.293 [2024-07-23 01:51:26.139892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.293 qpair failed and we were unable to recover it.
00:30:13.293 [2024-07-23 01:51:26.140109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.293 [2024-07-23 01:51:26.140377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.293 [2024-07-23 01:51:26.140417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.293 qpair failed and we were unable to recover it.
00:30:13.293 [2024-07-23 01:51:26.140644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.294 [2024-07-23 01:51:26.140822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.294 [2024-07-23 01:51:26.140850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.294 qpair failed and we were unable to recover it.
00:30:13.294 [2024-07-23 01:51:26.141058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.294 [2024-07-23 01:51:26.141241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.294 [2024-07-23 01:51:26.141269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.294 qpair failed and we were unable to recover it.
00:30:13.294 [2024-07-23 01:51:26.141454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.294 [2024-07-23 01:51:26.141639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.294 [2024-07-23 01:51:26.141668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.294 qpair failed and we were unable to recover it.
00:30:13.294 [2024-07-23 01:51:26.141852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.294 [2024-07-23 01:51:26.142192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.294 [2024-07-23 01:51:26.142241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.294 qpair failed and we were unable to recover it.
00:30:13.294 [2024-07-23 01:51:26.142609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.294 [2024-07-23 01:51:26.142846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.294 [2024-07-23 01:51:26.142885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.294 qpair failed and we were unable to recover it.
00:30:13.294 [2024-07-23 01:51:26.143069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.294 [2024-07-23 01:51:26.143302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.294 [2024-07-23 01:51:26.143353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.294 qpair failed and we were unable to recover it.
00:30:13.294 [2024-07-23 01:51:26.143563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.294 [2024-07-23 01:51:26.143715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.294 [2024-07-23 01:51:26.143743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.294 qpair failed and we were unable to recover it.
00:30:13.294 [2024-07-23 01:51:26.143923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.294 [2024-07-23 01:51:26.144057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.294 [2024-07-23 01:51:26.144099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.294 qpair failed and we were unable to recover it.
00:30:13.294 [2024-07-23 01:51:26.144299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.294 [2024-07-23 01:51:26.144513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.294 [2024-07-23 01:51:26.144539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.294 qpair failed and we were unable to recover it.
00:30:13.294 [2024-07-23 01:51:26.144713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.294 [2024-07-23 01:51:26.144905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.294 [2024-07-23 01:51:26.144934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.294 qpair failed and we were unable to recover it.
00:30:13.294 [2024-07-23 01:51:26.145116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.294 [2024-07-23 01:51:26.145323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.294 [2024-07-23 01:51:26.145351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.294 qpair failed and we were unable to recover it.
00:30:13.294 [2024-07-23 01:51:26.145549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.294 [2024-07-23 01:51:26.145877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.294 [2024-07-23 01:51:26.145903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.294 qpair failed and we were unable to recover it.
00:30:13.294 [2024-07-23 01:51:26.146177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.294 [2024-07-23 01:51:26.146434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.294 [2024-07-23 01:51:26.146462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.294 qpair failed and we were unable to recover it.
00:30:13.294 [2024-07-23 01:51:26.146679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.294 [2024-07-23 01:51:26.146880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.294 [2024-07-23 01:51:26.146908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.294 qpair failed and we were unable to recover it.
00:30:13.294 [2024-07-23 01:51:26.147057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.294 [2024-07-23 01:51:26.147237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.294 [2024-07-23 01:51:26.147264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.294 qpair failed and we were unable to recover it.
00:30:13.294 [2024-07-23 01:51:26.147437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.294 [2024-07-23 01:51:26.147602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.294 [2024-07-23 01:51:26.147636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.294 qpair failed and we were unable to recover it.
00:30:13.294 [2024-07-23 01:51:26.147838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.294 [2024-07-23 01:51:26.148081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.294 [2024-07-23 01:51:26.148130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.294 qpair failed and we were unable to recover it.
00:30:13.294 [2024-07-23 01:51:26.148318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.294 [2024-07-23 01:51:26.148491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.294 [2024-07-23 01:51:26.148522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.294 qpair failed and we were unable to recover it.
00:30:13.294 [2024-07-23 01:51:26.148709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.294 [2024-07-23 01:51:26.148894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.294 [2024-07-23 01:51:26.148920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.294 qpair failed and we were unable to recover it.
00:30:13.294 [2024-07-23 01:51:26.149079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.294 [2024-07-23 01:51:26.149287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.294 [2024-07-23 01:51:26.149331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.294 qpair failed and we were unable to recover it.
00:30:13.294 [2024-07-23 01:51:26.149520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.294 [2024-07-23 01:51:26.149696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.294 [2024-07-23 01:51:26.149738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.294 qpair failed and we were unable to recover it.
00:30:13.294 [2024-07-23 01:51:26.149952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.294 [2024-07-23 01:51:26.150249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.294 [2024-07-23 01:51:26.150310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.294 qpair failed and we were unable to recover it.
00:30:13.294 [2024-07-23 01:51:26.150495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.294 [2024-07-23 01:51:26.150633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.294 [2024-07-23 01:51:26.150662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.294 qpair failed and we were unable to recover it.
00:30:13.294 [2024-07-23 01:51:26.150850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.294 [2024-07-23 01:51:26.151151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.294 [2024-07-23 01:51:26.151179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.294 qpair failed and we were unable to recover it.
00:30:13.294 [2024-07-23 01:51:26.151365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.294 [2024-07-23 01:51:26.151580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.294 [2024-07-23 01:51:26.151608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.294 qpair failed and we were unable to recover it.
00:30:13.294 [2024-07-23 01:51:26.151797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.294 [2024-07-23 01:51:26.151979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.294 [2024-07-23 01:51:26.152004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.294 qpair failed and we were unable to recover it.
00:30:13.294 [2024-07-23 01:51:26.152230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.294 [2024-07-23 01:51:26.152395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.294 [2024-07-23 01:51:26.152435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.294 qpair failed and we were unable to recover it.
00:30:13.294 [2024-07-23 01:51:26.152681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.294 [2024-07-23 01:51:26.152854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.294 [2024-07-23 01:51:26.152880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.294 qpair failed and we were unable to recover it.
00:30:13.294 [2024-07-23 01:51:26.153075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.294 [2024-07-23 01:51:26.153415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.294 [2024-07-23 01:51:26.153478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.294 qpair failed and we were unable to recover it.
00:30:13.294 [2024-07-23 01:51:26.153640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.294 [2024-07-23 01:51:26.153836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.294 [2024-07-23 01:51:26.153876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.294 qpair failed and we were unable to recover it.
00:30:13.294 [2024-07-23 01:51:26.154095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.294 [2024-07-23 01:51:26.154397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.294 [2024-07-23 01:51:26.154448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.294 qpair failed and we were unable to recover it.
00:30:13.294 [2024-07-23 01:51:26.154648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.294 [2024-07-23 01:51:26.154805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.294 [2024-07-23 01:51:26.154833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.294 qpair failed and we were unable to recover it.
00:30:13.294 [2024-07-23 01:51:26.155037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.294 [2024-07-23 01:51:26.155388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.294 [2024-07-23 01:51:26.155437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.294 qpair failed and we were unable to recover it.
00:30:13.294 [2024-07-23 01:51:26.155622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.294 [2024-07-23 01:51:26.155807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.294 [2024-07-23 01:51:26.155836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.294 qpair failed and we were unable to recover it.
00:30:13.294 [2024-07-23 01:51:26.156050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.294 [2024-07-23 01:51:26.156263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.294 [2024-07-23 01:51:26.156291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.294 qpair failed and we were unable to recover it.
00:30:13.294 [2024-07-23 01:51:26.156504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.294 [2024-07-23 01:51:26.156642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.294 [2024-07-23 01:51:26.156668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.294 qpair failed and we were unable to recover it.
00:30:13.294 [2024-07-23 01:51:26.156851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.294 [2024-07-23 01:51:26.157209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.294 [2024-07-23 01:51:26.157260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.294 qpair failed and we were unable to recover it.
00:30:13.294 [2024-07-23 01:51:26.157445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.294 [2024-07-23 01:51:26.157652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.294 [2024-07-23 01:51:26.157680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.294 qpair failed and we were unable to recover it.
00:30:13.294 [2024-07-23 01:51:26.157834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.294 [2024-07-23 01:51:26.158014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.294 [2024-07-23 01:51:26.158042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.294 qpair failed and we were unable to recover it.
00:30:13.294 [2024-07-23 01:51:26.158260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.294 [2024-07-23 01:51:26.158449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.294 [2024-07-23 01:51:26.158474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.294 qpair failed and we were unable to recover it.
00:30:13.294 [2024-07-23 01:51:26.158666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.294 [2024-07-23 01:51:26.159026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.294 [2024-07-23 01:51:26.159074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.294 qpair failed and we were unable to recover it.
00:30:13.294 [2024-07-23 01:51:26.159281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.294 [2024-07-23 01:51:26.159476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.294 [2024-07-23 01:51:26.159506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.294 qpair failed and we were unable to recover it.
00:30:13.294 [2024-07-23 01:51:26.159689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.294 [2024-07-23 01:51:26.159865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.294 [2024-07-23 01:51:26.159893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.294 qpair failed and we were unable to recover it.
00:30:13.294 [2024-07-23 01:51:26.160074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.294 [2024-07-23 01:51:26.160275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.294 [2024-07-23 01:51:26.160303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.294 qpair failed and we were unable to recover it.
00:30:13.294 [2024-07-23 01:51:26.160526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.294 [2024-07-23 01:51:26.160687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.294 [2024-07-23 01:51:26.160716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.294 qpair failed and we were unable to recover it.
00:30:13.294 [2024-07-23 01:51:26.160867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.294 [2024-07-23 01:51:26.161025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.294 [2024-07-23 01:51:26.161053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.294 qpair failed and we were unable to recover it.
00:30:13.294 [2024-07-23 01:51:26.161227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.294 [2024-07-23 01:51:26.161380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.294 [2024-07-23 01:51:26.161410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.294 qpair failed and we were unable to recover it.
00:30:13.294 [2024-07-23 01:51:26.161597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.294 [2024-07-23 01:51:26.161801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.294 [2024-07-23 01:51:26.161829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.294 qpair failed and we were unable to recover it.
00:30:13.294 [2024-07-23 01:51:26.162143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.294 [2024-07-23 01:51:26.162462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.294 [2024-07-23 01:51:26.162514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.294 qpair failed and we were unable to recover it.
00:30:13.294 [2024-07-23 01:51:26.162740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.294 [2024-07-23 01:51:26.162900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.294 [2024-07-23 01:51:26.162940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.294 qpair failed and we were unable to recover it.
00:30:13.294 [2024-07-23 01:51:26.163169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.294 [2024-07-23 01:51:26.163332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.294 [2024-07-23 01:51:26.163371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.294 qpair failed and we were unable to recover it.
00:30:13.294 [2024-07-23 01:51:26.163546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.294 [2024-07-23 01:51:26.163717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.294 [2024-07-23 01:51:26.163745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.294 qpair failed and we were unable to recover it.
00:30:13.294 [2024-07-23 01:51:26.163926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.295 [2024-07-23 01:51:26.164110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.295 [2024-07-23 01:51:26.164138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.295 qpair failed and we were unable to recover it.
00:30:13.295 [2024-07-23 01:51:26.164317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.295 [2024-07-23 01:51:26.164502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.295 [2024-07-23 01:51:26.164531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.295 qpair failed and we were unable to recover it.
00:30:13.295 [2024-07-23 01:51:26.164717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.295 [2024-07-23 01:51:26.164937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.295 [2024-07-23 01:51:26.164962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.295 qpair failed and we were unable to recover it.
00:30:13.295 [2024-07-23 01:51:26.165167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.295 [2024-07-23 01:51:26.165433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.295 [2024-07-23 01:51:26.165460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.295 qpair failed and we were unable to recover it.
00:30:13.295 [2024-07-23 01:51:26.165742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.295 [2024-07-23 01:51:26.165926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.295 [2024-07-23 01:51:26.165955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.295 qpair failed and we were unable to recover it.
00:30:13.295 [2024-07-23 01:51:26.166147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.295 [2024-07-23 01:51:26.166424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.295 [2024-07-23 01:51:26.166450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.295 qpair failed and we were unable to recover it.
00:30:13.295 [2024-07-23 01:51:26.166679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.295 [2024-07-23 01:51:26.166845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.295 [2024-07-23 01:51:26.166885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.295 qpair failed and we were unable to recover it. 00:30:13.295 [2024-07-23 01:51:26.167038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.295 [2024-07-23 01:51:26.167175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.295 [2024-07-23 01:51:26.167215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.295 qpair failed and we were unable to recover it. 00:30:13.295 [2024-07-23 01:51:26.167465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.295 [2024-07-23 01:51:26.167695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.295 [2024-07-23 01:51:26.167721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.295 qpair failed and we were unable to recover it. 00:30:13.295 [2024-07-23 01:51:26.167944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.295 [2024-07-23 01:51:26.168120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.295 [2024-07-23 01:51:26.168148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.295 qpair failed and we were unable to recover it. 
00:30:13.295 [2024-07-23 01:51:26.168321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.295 [2024-07-23 01:51:26.168495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.295 [2024-07-23 01:51:26.168522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.295 qpair failed and we were unable to recover it. 00:30:13.295 [2024-07-23 01:51:26.168700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.295 [2024-07-23 01:51:26.168847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.295 [2024-07-23 01:51:26.168888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.295 qpair failed and we were unable to recover it. 00:30:13.295 [2024-07-23 01:51:26.169042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.295 [2024-07-23 01:51:26.169197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.295 [2024-07-23 01:51:26.169229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.295 qpair failed and we were unable to recover it. 00:30:13.295 [2024-07-23 01:51:26.169409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.295 [2024-07-23 01:51:26.169556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.295 [2024-07-23 01:51:26.169584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.295 qpair failed and we were unable to recover it. 
00:30:13.295 [2024-07-23 01:51:26.169781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.295 [2024-07-23 01:51:26.169948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.295 [2024-07-23 01:51:26.169988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.295 qpair failed and we were unable to recover it. 00:30:13.295 [2024-07-23 01:51:26.170162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.295 [2024-07-23 01:51:26.170421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.295 [2024-07-23 01:51:26.170445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.295 qpair failed and we were unable to recover it. 00:30:13.295 [2024-07-23 01:51:26.170658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.295 [2024-07-23 01:51:26.170817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.295 [2024-07-23 01:51:26.170846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.295 qpair failed and we were unable to recover it. 00:30:13.295 [2024-07-23 01:51:26.171054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.295 [2024-07-23 01:51:26.171251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.295 [2024-07-23 01:51:26.171276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.295 qpair failed and we were unable to recover it. 
00:30:13.295 [2024-07-23 01:51:26.171487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.295 [2024-07-23 01:51:26.171667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.295 [2024-07-23 01:51:26.171696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.295 qpair failed and we were unable to recover it. 00:30:13.295 [2024-07-23 01:51:26.171911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.295 [2024-07-23 01:51:26.172068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.295 [2024-07-23 01:51:26.172096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.295 qpair failed and we were unable to recover it. 00:30:13.295 [2024-07-23 01:51:26.172276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.295 [2024-07-23 01:51:26.172496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.295 [2024-07-23 01:51:26.172521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.295 qpair failed and we were unable to recover it. 00:30:13.295 [2024-07-23 01:51:26.172725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.295 [2024-07-23 01:51:26.172925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.295 [2024-07-23 01:51:26.172964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.295 qpair failed and we were unable to recover it. 
00:30:13.295 [2024-07-23 01:51:26.173118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.295 [2024-07-23 01:51:26.173326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.295 [2024-07-23 01:51:26.173354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.295 qpair failed and we were unable to recover it. 00:30:13.295 [2024-07-23 01:51:26.173564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.295 [2024-07-23 01:51:26.173740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.295 [2024-07-23 01:51:26.173766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.295 qpair failed and we were unable to recover it. 00:30:13.295 [2024-07-23 01:51:26.173894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.295 [2024-07-23 01:51:26.174204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.295 [2024-07-23 01:51:26.174232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.295 qpair failed and we were unable to recover it. 00:30:13.295 [2024-07-23 01:51:26.174432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.295 [2024-07-23 01:51:26.174596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.295 [2024-07-23 01:51:26.174703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.295 qpair failed and we were unable to recover it. 
00:30:13.295 [2024-07-23 01:51:26.174933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.295 [2024-07-23 01:51:26.175099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.295 [2024-07-23 01:51:26.175139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.295 qpair failed and we were unable to recover it. 00:30:13.295 [2024-07-23 01:51:26.175311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.295 [2024-07-23 01:51:26.175560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.295 [2024-07-23 01:51:26.175588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.295 qpair failed and we were unable to recover it. 00:30:13.295 [2024-07-23 01:51:26.175847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.295 [2024-07-23 01:51:26.176011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.295 [2024-07-23 01:51:26.176041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.295 qpair failed and we were unable to recover it. 00:30:13.295 [2024-07-23 01:51:26.176264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.295 [2024-07-23 01:51:26.176547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.295 [2024-07-23 01:51:26.176612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.295 qpair failed and we were unable to recover it. 
00:30:13.295 [2024-07-23 01:51:26.176802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.295 [2024-07-23 01:51:26.176985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.295 [2024-07-23 01:51:26.177013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.295 qpair failed and we were unable to recover it. 00:30:13.295 [2024-07-23 01:51:26.177226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.295 [2024-07-23 01:51:26.177568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.295 [2024-07-23 01:51:26.177647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.295 qpair failed and we were unable to recover it. 00:30:13.295 [2024-07-23 01:51:26.177835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.295 [2024-07-23 01:51:26.178102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.295 [2024-07-23 01:51:26.178155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.295 qpair failed and we were unable to recover it. 00:30:13.295 [2024-07-23 01:51:26.178377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.295 [2024-07-23 01:51:26.178559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.295 [2024-07-23 01:51:26.178587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.295 qpair failed and we were unable to recover it. 
00:30:13.295 [2024-07-23 01:51:26.178814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.295 [2024-07-23 01:51:26.179045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.295 [2024-07-23 01:51:26.179098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.295 qpair failed and we were unable to recover it. 00:30:13.295 [2024-07-23 01:51:26.179288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.295 [2024-07-23 01:51:26.179501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.295 [2024-07-23 01:51:26.179556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.295 qpair failed and we were unable to recover it. 00:30:13.295 [2024-07-23 01:51:26.179736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.295 [2024-07-23 01:51:26.179915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.295 [2024-07-23 01:51:26.179940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.295 qpair failed and we were unable to recover it. 00:30:13.295 [2024-07-23 01:51:26.180099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.295 [2024-07-23 01:51:26.180289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.295 [2024-07-23 01:51:26.180330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.295 qpair failed and we were unable to recover it. 
00:30:13.295 [2024-07-23 01:51:26.180479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.295 [2024-07-23 01:51:26.180641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.295 [2024-07-23 01:51:26.180671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.295 qpair failed and we were unable to recover it. 00:30:13.295 [2024-07-23 01:51:26.180850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.295 [2024-07-23 01:51:26.180993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.295 [2024-07-23 01:51:26.181020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.295 qpair failed and we were unable to recover it. 00:30:13.295 [2024-07-23 01:51:26.181164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.295 [2024-07-23 01:51:26.181450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.295 [2024-07-23 01:51:26.181500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.295 qpair failed and we were unable to recover it. 00:30:13.295 [2024-07-23 01:51:26.181687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.295 [2024-07-23 01:51:26.181870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.295 [2024-07-23 01:51:26.181898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.295 qpair failed and we were unable to recover it. 
00:30:13.295 [2024-07-23 01:51:26.182091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.295 [2024-07-23 01:51:26.182256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.295 [2024-07-23 01:51:26.182297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.295 qpair failed and we were unable to recover it. 00:30:13.295 [2024-07-23 01:51:26.182461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.295 [2024-07-23 01:51:26.182671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.295 [2024-07-23 01:51:26.182700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.295 qpair failed and we were unable to recover it. 00:30:13.295 [2024-07-23 01:51:26.182884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.295 [2024-07-23 01:51:26.183180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.295 [2024-07-23 01:51:26.183237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.295 qpair failed and we were unable to recover it. 00:30:13.295 [2024-07-23 01:51:26.183447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.295 [2024-07-23 01:51:26.183630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.295 [2024-07-23 01:51:26.183659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.295 qpair failed and we were unable to recover it. 
00:30:13.295 [2024-07-23 01:51:26.183870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.295 [2024-07-23 01:51:26.184012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.295 [2024-07-23 01:51:26.184054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.295 qpair failed and we were unable to recover it. 00:30:13.295 [2024-07-23 01:51:26.184211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.295 [2024-07-23 01:51:26.184379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.295 [2024-07-23 01:51:26.184405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.295 qpair failed and we were unable to recover it. 00:30:13.295 [2024-07-23 01:51:26.184610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.295 [2024-07-23 01:51:26.184803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.295 [2024-07-23 01:51:26.184831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.295 qpair failed and we were unable to recover it. 00:30:13.295 [2024-07-23 01:51:26.185037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.295 [2024-07-23 01:51:26.185324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.295 [2024-07-23 01:51:26.185375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.295 qpair failed and we were unable to recover it. 
00:30:13.295 [2024-07-23 01:51:26.185582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.295 [2024-07-23 01:51:26.185778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.295 [2024-07-23 01:51:26.185807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.295 qpair failed and we were unable to recover it. 00:30:13.295 [2024-07-23 01:51:26.186018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.295 [2024-07-23 01:51:26.186240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.296 [2024-07-23 01:51:26.186265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.296 qpair failed and we were unable to recover it. 00:30:13.296 [2024-07-23 01:51:26.186436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.296 [2024-07-23 01:51:26.186630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.296 [2024-07-23 01:51:26.186660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.296 qpair failed and we were unable to recover it. 00:30:13.296 [2024-07-23 01:51:26.186880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.296 [2024-07-23 01:51:26.187122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.296 [2024-07-23 01:51:26.187171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.296 qpair failed and we were unable to recover it. 
00:30:13.296 [2024-07-23 01:51:26.187370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.296 [2024-07-23 01:51:26.187557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.296 [2024-07-23 01:51:26.187583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.296 qpair failed and we were unable to recover it. 00:30:13.296 [2024-07-23 01:51:26.187754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.296 [2024-07-23 01:51:26.187920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.296 [2024-07-23 01:51:26.187947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.296 qpair failed and we were unable to recover it. 00:30:13.296 [2024-07-23 01:51:26.188198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.296 [2024-07-23 01:51:26.188535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.296 [2024-07-23 01:51:26.188586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.296 qpair failed and we were unable to recover it. 00:30:13.296 [2024-07-23 01:51:26.188787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.296 [2024-07-23 01:51:26.188974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.296 [2024-07-23 01:51:26.189000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.296 qpair failed and we were unable to recover it. 
00:30:13.296 [2024-07-23 01:51:26.189180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.296 [2024-07-23 01:51:26.189374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.296 [2024-07-23 01:51:26.189436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.296 qpair failed and we were unable to recover it. 00:30:13.296 [2024-07-23 01:51:26.189625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.296 [2024-07-23 01:51:26.189809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.296 [2024-07-23 01:51:26.189837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.296 qpair failed and we were unable to recover it. 00:30:13.296 [2024-07-23 01:51:26.190092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.296 [2024-07-23 01:51:26.190427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.296 [2024-07-23 01:51:26.190479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.296 qpair failed and we were unable to recover it. 00:30:13.296 [2024-07-23 01:51:26.190700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.296 [2024-07-23 01:51:26.190873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.296 [2024-07-23 01:51:26.190901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.296 qpair failed and we were unable to recover it. 
00:30:13.296 [2024-07-23 01:51:26.191093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.296 [2024-07-23 01:51:26.191297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.296 [2024-07-23 01:51:26.191325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.296 qpair failed and we were unable to recover it. 00:30:13.296 [2024-07-23 01:51:26.191487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.296 [2024-07-23 01:51:26.191655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.296 [2024-07-23 01:51:26.191685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.296 qpair failed and we were unable to recover it. 00:30:13.296 [2024-07-23 01:51:26.191840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.296 [2024-07-23 01:51:26.192067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.296 [2024-07-23 01:51:26.192092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.296 qpair failed and we were unable to recover it. 00:30:13.296 [2024-07-23 01:51:26.192272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.296 [2024-07-23 01:51:26.192463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.296 [2024-07-23 01:51:26.192492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.296 qpair failed and we were unable to recover it. 
00:30:13.296 [2024-07-23 01:51:26.192670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.296 [2024-07-23 01:51:26.192957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.296 [2024-07-23 01:51:26.192984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.296 qpair failed and we were unable to recover it. 00:30:13.296 [2024-07-23 01:51:26.193196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.296 [2024-07-23 01:51:26.193436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.296 [2024-07-23 01:51:26.193488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.296 qpair failed and we were unable to recover it. 00:30:13.296 [2024-07-23 01:51:26.193698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.296 [2024-07-23 01:51:26.193887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.296 [2024-07-23 01:51:26.193929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.296 qpair failed and we were unable to recover it. 00:30:13.296 [2024-07-23 01:51:26.194110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.296 [2024-07-23 01:51:26.194361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.296 [2024-07-23 01:51:26.194412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.296 qpair failed and we were unable to recover it. 
00:30:13.296 [2024-07-23 01:51:26.194569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.296 [2024-07-23 01:51:26.194785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.296 [2024-07-23 01:51:26.194814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.296 qpair failed and we were unable to recover it. 00:30:13.296 [2024-07-23 01:51:26.195009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.296 [2024-07-23 01:51:26.195156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.296 [2024-07-23 01:51:26.195182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.296 qpair failed and we were unable to recover it. 00:30:13.296 [2024-07-23 01:51:26.195379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.296 [2024-07-23 01:51:26.195562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.296 [2024-07-23 01:51:26.195590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.296 qpair failed and we were unable to recover it. 00:30:13.296 [2024-07-23 01:51:26.195748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.296 [2024-07-23 01:51:26.195924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.296 [2024-07-23 01:51:26.195953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.296 qpair failed and we were unable to recover it. 
00:30:13.298 [2024-07-23 01:51:26.232639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.298 [2024-07-23 01:51:26.232825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.298 [2024-07-23 01:51:26.232851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.298 qpair failed and we were unable to recover it. 00:30:13.298 [2024-07-23 01:51:26.233005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.298 [2024-07-23 01:51:26.233133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.298 [2024-07-23 01:51:26.233158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.298 qpair failed and we were unable to recover it. 00:30:13.298 [2024-07-23 01:51:26.233311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.298 [2024-07-23 01:51:26.233563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.298 [2024-07-23 01:51:26.233591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.298 qpair failed and we were unable to recover it. 00:30:13.298 [2024-07-23 01:51:26.233765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.298 [2024-07-23 01:51:26.233974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.298 [2024-07-23 01:51:26.234002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.298 qpair failed and we were unable to recover it. 
00:30:13.298 [2024-07-23 01:51:26.234220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.298 [2024-07-23 01:51:26.234470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.298 [2024-07-23 01:51:26.234528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.298 qpair failed and we were unable to recover it. 00:30:13.298 [2024-07-23 01:51:26.234737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.298 [2024-07-23 01:51:26.234992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.298 [2024-07-23 01:51:26.235050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.298 qpair failed and we were unable to recover it. 00:30:13.298 [2024-07-23 01:51:26.235315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.298 [2024-07-23 01:51:26.235534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.298 [2024-07-23 01:51:26.235559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.298 qpair failed and we were unable to recover it. 00:30:13.298 [2024-07-23 01:51:26.235781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.298 [2024-07-23 01:51:26.235985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.298 [2024-07-23 01:51:26.236048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.298 qpair failed and we were unable to recover it. 
00:30:13.298 [2024-07-23 01:51:26.236247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.298 [2024-07-23 01:51:26.236569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.298 [2024-07-23 01:51:26.236631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.298 qpair failed and we were unable to recover it. 00:30:13.298 [2024-07-23 01:51:26.236819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.298 [2024-07-23 01:51:26.237065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.298 [2024-07-23 01:51:26.237120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.298 qpair failed and we were unable to recover it. 00:30:13.298 [2024-07-23 01:51:26.237393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.298 [2024-07-23 01:51:26.237679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.298 [2024-07-23 01:51:26.237708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.298 qpair failed and we were unable to recover it. 00:30:13.298 [2024-07-23 01:51:26.237892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.298 [2024-07-23 01:51:26.238076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.298 [2024-07-23 01:51:26.238104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.298 qpair failed and we were unable to recover it. 
00:30:13.298 [2024-07-23 01:51:26.238291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.298 [2024-07-23 01:51:26.238465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.298 [2024-07-23 01:51:26.238493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.298 qpair failed and we were unable to recover it. 00:30:13.298 [2024-07-23 01:51:26.238683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.298 [2024-07-23 01:51:26.238858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.298 [2024-07-23 01:51:26.238903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.298 qpair failed and we were unable to recover it. 00:30:13.298 [2024-07-23 01:51:26.239217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.298 [2024-07-23 01:51:26.239447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.298 [2024-07-23 01:51:26.239472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.298 qpair failed and we were unable to recover it. 00:30:13.298 [2024-07-23 01:51:26.239692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.298 [2024-07-23 01:51:26.239859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.298 [2024-07-23 01:51:26.239885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.298 qpair failed and we were unable to recover it. 
00:30:13.298 [2024-07-23 01:51:26.240090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.298 [2024-07-23 01:51:26.240342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.298 [2024-07-23 01:51:26.240381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.298 qpair failed and we were unable to recover it. 00:30:13.298 [2024-07-23 01:51:26.240591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.298 [2024-07-23 01:51:26.240787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.298 [2024-07-23 01:51:26.240816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.298 qpair failed and we were unable to recover it. 00:30:13.298 [2024-07-23 01:51:26.241030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.298 [2024-07-23 01:51:26.241209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.298 [2024-07-23 01:51:26.241237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.298 qpair failed and we were unable to recover it. 00:30:13.298 [2024-07-23 01:51:26.241440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.298 [2024-07-23 01:51:26.241784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.298 [2024-07-23 01:51:26.241812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.298 qpair failed and we were unable to recover it. 
00:30:13.298 [2024-07-23 01:51:26.242016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.298 [2024-07-23 01:51:26.242329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.298 [2024-07-23 01:51:26.242385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.298 qpair failed and we were unable to recover it. 00:30:13.298 [2024-07-23 01:51:26.242651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.298 [2024-07-23 01:51:26.242843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.298 [2024-07-23 01:51:26.242872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.298 qpair failed and we were unable to recover it. 00:30:13.298 [2024-07-23 01:51:26.243100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.298 [2024-07-23 01:51:26.243237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.298 [2024-07-23 01:51:26.243279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.298 qpair failed and we were unable to recover it. 00:30:13.298 [2024-07-23 01:51:26.243464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.298 [2024-07-23 01:51:26.243718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.298 [2024-07-23 01:51:26.243746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.298 qpair failed and we were unable to recover it. 
00:30:13.298 [2024-07-23 01:51:26.243937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.298 [2024-07-23 01:51:26.244300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.298 [2024-07-23 01:51:26.244354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.298 qpair failed and we were unable to recover it. 00:30:13.298 [2024-07-23 01:51:26.244621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.298 [2024-07-23 01:51:26.244782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.298 [2024-07-23 01:51:26.244810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.298 qpair failed and we were unable to recover it. 00:30:13.298 [2024-07-23 01:51:26.244971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.298 [2024-07-23 01:51:26.245208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.298 [2024-07-23 01:51:26.245233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.298 qpair failed and we were unable to recover it. 00:30:13.298 [2024-07-23 01:51:26.245448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.298 [2024-07-23 01:51:26.245641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.298 [2024-07-23 01:51:26.245670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.298 qpair failed and we were unable to recover it. 
00:30:13.298 [2024-07-23 01:51:26.245836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.298 [2024-07-23 01:51:26.246012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.298 [2024-07-23 01:51:26.246041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.298 qpair failed and we were unable to recover it. 00:30:13.298 [2024-07-23 01:51:26.246280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.298 [2024-07-23 01:51:26.246578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.298 [2024-07-23 01:51:26.246637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.298 qpair failed and we were unable to recover it. 00:30:13.298 [2024-07-23 01:51:26.246841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.298 [2024-07-23 01:51:26.247123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.298 [2024-07-23 01:51:26.247172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.298 qpair failed and we were unable to recover it. 00:30:13.298 [2024-07-23 01:51:26.247433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.298 [2024-07-23 01:51:26.247622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.298 [2024-07-23 01:51:26.247651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.298 qpair failed and we were unable to recover it. 
00:30:13.298 [2024-07-23 01:51:26.247807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.298 [2024-07-23 01:51:26.247964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.298 [2024-07-23 01:51:26.247993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.298 qpair failed and we were unable to recover it. 00:30:13.298 [2024-07-23 01:51:26.248178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.298 [2024-07-23 01:51:26.248352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.298 [2024-07-23 01:51:26.248380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.298 qpair failed and we were unable to recover it. 00:30:13.298 [2024-07-23 01:51:26.248565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.298 [2024-07-23 01:51:26.248731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.298 [2024-07-23 01:51:26.248759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.298 qpair failed and we were unable to recover it. 00:30:13.298 [2024-07-23 01:51:26.248945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.298 [2024-07-23 01:51:26.249125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.298 [2024-07-23 01:51:26.249153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.298 qpair failed and we were unable to recover it. 
00:30:13.298 [2024-07-23 01:51:26.249338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.298 [2024-07-23 01:51:26.249518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.298 [2024-07-23 01:51:26.249546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.298 qpair failed and we were unable to recover it. 00:30:13.298 [2024-07-23 01:51:26.249749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.298 [2024-07-23 01:51:26.249956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.298 [2024-07-23 01:51:26.249980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.298 qpair failed and we were unable to recover it. 00:30:13.298 [2024-07-23 01:51:26.250257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.298 [2024-07-23 01:51:26.250404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.298 [2024-07-23 01:51:26.250429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.298 qpair failed and we were unable to recover it. 00:30:13.298 [2024-07-23 01:51:26.250663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.298 [2024-07-23 01:51:26.250855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.298 [2024-07-23 01:51:26.250880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.298 qpair failed and we were unable to recover it. 
00:30:13.298 [2024-07-23 01:51:26.251047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.298 [2024-07-23 01:51:26.251261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.298 [2024-07-23 01:51:26.251286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.298 qpair failed and we were unable to recover it. 00:30:13.298 [2024-07-23 01:51:26.251426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.298 [2024-07-23 01:51:26.251644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.298 [2024-07-23 01:51:26.251672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.298 qpair failed and we were unable to recover it. 00:30:13.298 [2024-07-23 01:51:26.251863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.298 [2024-07-23 01:51:26.252114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.298 [2024-07-23 01:51:26.252166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.298 qpair failed and we were unable to recover it. 00:30:13.298 [2024-07-23 01:51:26.252351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.298 [2024-07-23 01:51:26.252513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.298 [2024-07-23 01:51:26.252554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.298 qpair failed and we were unable to recover it. 
00:30:13.299 [2024-07-23 01:51:26.252736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.299 [2024-07-23 01:51:26.252900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.299 [2024-07-23 01:51:26.252926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.299 qpair failed and we were unable to recover it. 00:30:13.299 [2024-07-23 01:51:26.253090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.299 [2024-07-23 01:51:26.253318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.299 [2024-07-23 01:51:26.253346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.299 qpair failed and we were unable to recover it. 00:30:13.299 [2024-07-23 01:51:26.253507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.299 [2024-07-23 01:51:26.253688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.299 [2024-07-23 01:51:26.253717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.299 qpair failed and we were unable to recover it. 00:30:13.299 [2024-07-23 01:51:26.253896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.299 [2024-07-23 01:51:26.254103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.299 [2024-07-23 01:51:26.254129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.299 qpair failed and we were unable to recover it. 
00:30:13.299 [2024-07-23 01:51:26.254295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.299 [2024-07-23 01:51:26.254432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.299 [2024-07-23 01:51:26.254477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.299 qpair failed and we were unable to recover it. 00:30:13.299 [2024-07-23 01:51:26.254722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.299 [2024-07-23 01:51:26.254937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.299 [2024-07-23 01:51:26.254965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.299 qpair failed and we were unable to recover it. 00:30:13.299 [2024-07-23 01:51:26.255227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.299 [2024-07-23 01:51:26.255465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.299 [2024-07-23 01:51:26.255492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.299 qpair failed and we were unable to recover it. 00:30:13.299 [2024-07-23 01:51:26.255701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.299 [2024-07-23 01:51:26.255909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.299 [2024-07-23 01:51:26.255938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.299 qpair failed and we were unable to recover it. 
00:30:13.299 [2024-07-23 01:51:26.256089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.299 [2024-07-23 01:51:26.256289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.299 [2024-07-23 01:51:26.256316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.299 qpair failed and we were unable to recover it. 00:30:13.299 [2024-07-23 01:51:26.256623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.299 [2024-07-23 01:51:26.256798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.299 [2024-07-23 01:51:26.256823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.299 qpair failed and we were unable to recover it. 00:30:13.299 [2024-07-23 01:51:26.256957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.299 [2024-07-23 01:51:26.257094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.299 [2024-07-23 01:51:26.257121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.299 qpair failed and we were unable to recover it. 00:30:13.299 [2024-07-23 01:51:26.257340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.299 [2024-07-23 01:51:26.257522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.299 [2024-07-23 01:51:26.257550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.299 qpair failed and we were unable to recover it. 
00:30:13.299 [2024-07-23 01:51:26.257741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.299 [2024-07-23 01:51:26.257879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.299 [2024-07-23 01:51:26.257921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.299 qpair failed and we were unable to recover it. 00:30:13.299 [2024-07-23 01:51:26.258135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.299 [2024-07-23 01:51:26.258298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.299 [2024-07-23 01:51:26.258324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.299 qpair failed and we were unable to recover it. 00:30:13.299 [2024-07-23 01:51:26.258531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.299 [2024-07-23 01:51:26.258720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.299 [2024-07-23 01:51:26.258747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.299 qpair failed and we were unable to recover it. 00:30:13.299 [2024-07-23 01:51:26.258920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.299 [2024-07-23 01:51:26.259077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.299 [2024-07-23 01:51:26.259116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.299 qpair failed and we were unable to recover it. 
00:30:13.299 [2024-07-23 01:51:26.259297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.299 [2024-07-23 01:51:26.259478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.299 [2024-07-23 01:51:26.259508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.299 qpair failed and we were unable to recover it. 00:30:13.299 [2024-07-23 01:51:26.259752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.299 [2024-07-23 01:51:26.259941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.299 [2024-07-23 01:51:26.259970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.299 qpair failed and we were unable to recover it. 00:30:13.299 [2024-07-23 01:51:26.260207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.299 [2024-07-23 01:51:26.260438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.299 [2024-07-23 01:51:26.260462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.299 qpair failed and we were unable to recover it. 00:30:13.299 [2024-07-23 01:51:26.260666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.299 [2024-07-23 01:51:26.260870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.299 [2024-07-23 01:51:26.260897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.299 qpair failed and we were unable to recover it. 
00:30:13.299 [2024-07-23 01:51:26.261126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.299 [2024-07-23 01:51:26.261334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.299 [2024-07-23 01:51:26.261362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.299 qpair failed and we were unable to recover it.
00:30:13.299 [2024-07-23 01:51:26.261521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.299 [2024-07-23 01:51:26.261669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.299 [2024-07-23 01:51:26.261696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.299 qpair failed and we were unable to recover it.
00:30:13.299 [2024-07-23 01:51:26.261900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.299 [2024-07-23 01:51:26.262148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.299 [2024-07-23 01:51:26.262208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.299 qpair failed and we were unable to recover it.
00:30:13.299 [2024-07-23 01:51:26.262412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.299 [2024-07-23 01:51:26.262584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.299 [2024-07-23 01:51:26.262634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.299 qpair failed and we were unable to recover it.
00:30:13.299 [2024-07-23 01:51:26.262819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.299 [2024-07-23 01:51:26.263004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.299 [2024-07-23 01:51:26.263032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.299 qpair failed and we were unable to recover it.
00:30:13.299 [2024-07-23 01:51:26.263285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.299 [2024-07-23 01:51:26.263483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.299 [2024-07-23 01:51:26.263509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.299 qpair failed and we were unable to recover it.
00:30:13.299 [2024-07-23 01:51:26.263702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.299 [2024-07-23 01:51:26.263890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.299 [2024-07-23 01:51:26.263915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.299 qpair failed and we were unable to recover it.
00:30:13.299 [2024-07-23 01:51:26.264107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.299 [2024-07-23 01:51:26.264244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.299 [2024-07-23 01:51:26.264270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.299 qpair failed and we were unable to recover it.
00:30:13.299 [2024-07-23 01:51:26.264433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.299 [2024-07-23 01:51:26.264598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.299 [2024-07-23 01:51:26.264632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.299 qpair failed and we were unable to recover it.
00:30:13.299 [2024-07-23 01:51:26.264808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.299 [2024-07-23 01:51:26.264985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.299 [2024-07-23 01:51:26.265011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.299 qpair failed and we were unable to recover it.
00:30:13.299 [2024-07-23 01:51:26.265283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.299 [2024-07-23 01:51:26.265596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.299 [2024-07-23 01:51:26.265682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.299 qpair failed and we were unable to recover it.
00:30:13.299 [2024-07-23 01:51:26.265849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.299 [2024-07-23 01:51:26.266088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.299 [2024-07-23 01:51:26.266138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.299 qpair failed and we were unable to recover it.
00:30:13.299 [2024-07-23 01:51:26.266319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.299 [2024-07-23 01:51:26.266525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.299 [2024-07-23 01:51:26.266553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.299 qpair failed and we were unable to recover it.
00:30:13.299 [2024-07-23 01:51:26.266806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.299 [2024-07-23 01:51:26.267089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.299 [2024-07-23 01:51:26.267141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.299 qpair failed and we were unable to recover it.
00:30:13.299 [2024-07-23 01:51:26.267393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.299 [2024-07-23 01:51:26.267589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.299 [2024-07-23 01:51:26.267624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.299 qpair failed and we were unable to recover it.
00:30:13.299 [2024-07-23 01:51:26.267814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.299 [2024-07-23 01:51:26.268036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.299 [2024-07-23 01:51:26.268087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.299 qpair failed and we were unable to recover it.
00:30:13.299 [2024-07-23 01:51:26.268271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.299 [2024-07-23 01:51:26.268476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.299 [2024-07-23 01:51:26.268503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.299 qpair failed and we were unable to recover it.
00:30:13.299 [2024-07-23 01:51:26.268729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.299 [2024-07-23 01:51:26.268904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.299 [2024-07-23 01:51:26.268932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.299 qpair failed and we were unable to recover it.
00:30:13.299 [2024-07-23 01:51:26.269292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.299 [2024-07-23 01:51:26.269500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.299 [2024-07-23 01:51:26.269528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.299 qpair failed and we were unable to recover it.
00:30:13.299 [2024-07-23 01:51:26.269686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.299 [2024-07-23 01:51:26.269828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.299 [2024-07-23 01:51:26.269854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.299 qpair failed and we were unable to recover it.
00:30:13.299 [2024-07-23 01:51:26.270046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.299 [2024-07-23 01:51:26.270245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.299 [2024-07-23 01:51:26.270291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.299 qpair failed and we were unable to recover it.
00:30:13.299 [2024-07-23 01:51:26.270447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.299 [2024-07-23 01:51:26.270598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.299 [2024-07-23 01:51:26.270634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.299 qpair failed and we were unable to recover it.
00:30:13.299 [2024-07-23 01:51:26.270813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.299 [2024-07-23 01:51:26.270974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.299 [2024-07-23 01:51:26.271002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.299 qpair failed and we were unable to recover it.
00:30:13.299 [2024-07-23 01:51:26.271151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.299 [2024-07-23 01:51:26.271332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.299 [2024-07-23 01:51:26.271359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.299 qpair failed and we were unable to recover it.
00:30:13.299 [2024-07-23 01:51:26.271541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.299 [2024-07-23 01:51:26.271731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.299 [2024-07-23 01:51:26.271756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.299 qpair failed and we were unable to recover it.
00:30:13.299 [2024-07-23 01:51:26.271897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.299 [2024-07-23 01:51:26.272092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.299 [2024-07-23 01:51:26.272120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.299 qpair failed and we were unable to recover it.
00:30:13.299 [2024-07-23 01:51:26.272305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.299 [2024-07-23 01:51:26.272487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.299 [2024-07-23 01:51:26.272515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.299 qpair failed and we were unable to recover it.
00:30:13.299 [2024-07-23 01:51:26.272687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.299 [2024-07-23 01:51:26.272843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.299 [2024-07-23 01:51:26.272869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.299 qpair failed and we were unable to recover it.
00:30:13.299 [2024-07-23 01:51:26.273071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.299 [2024-07-23 01:51:26.273273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.299 [2024-07-23 01:51:26.273298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.299 qpair failed and we were unable to recover it.
00:30:13.299 [2024-07-23 01:51:26.273464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.299 [2024-07-23 01:51:26.273724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.299 [2024-07-23 01:51:26.273751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.299 qpair failed and we were unable to recover it.
00:30:13.299 [2024-07-23 01:51:26.273959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.299 [2024-07-23 01:51:26.274251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.299 [2024-07-23 01:51:26.274300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.299 qpair failed and we were unable to recover it.
00:30:13.299 [2024-07-23 01:51:26.274481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.299 [2024-07-23 01:51:26.274699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.299 [2024-07-23 01:51:26.274725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.299 qpair failed and we were unable to recover it.
00:30:13.299 [2024-07-23 01:51:26.274873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.299 [2024-07-23 01:51:26.275034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.299 [2024-07-23 01:51:26.275060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.299 qpair failed and we were unable to recover it.
00:30:13.299 [2024-07-23 01:51:26.275224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.300 [2024-07-23 01:51:26.275437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.300 [2024-07-23 01:51:26.275465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.300 qpair failed and we were unable to recover it.
00:30:13.300 [2024-07-23 01:51:26.275640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.300 [2024-07-23 01:51:26.275797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.300 [2024-07-23 01:51:26.275823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.300 qpair failed and we were unable to recover it.
00:30:13.300 [2024-07-23 01:51:26.276034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.300 [2024-07-23 01:51:26.276181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.300 [2024-07-23 01:51:26.276214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.300 qpair failed and we were unable to recover it.
00:30:13.300 [2024-07-23 01:51:26.276401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.300 [2024-07-23 01:51:26.276609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.300 [2024-07-23 01:51:26.276661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.300 qpair failed and we were unable to recover it.
00:30:13.300 [2024-07-23 01:51:26.276803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.300 [2024-07-23 01:51:26.276951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.300 [2024-07-23 01:51:26.276977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.300 qpair failed and we were unable to recover it.
00:30:13.300 [2024-07-23 01:51:26.277227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.300 [2024-07-23 01:51:26.277541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.300 [2024-07-23 01:51:26.277601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.300 qpair failed and we were unable to recover it.
00:30:13.300 [2024-07-23 01:51:26.277822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.300 [2024-07-23 01:51:26.278024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.300 [2024-07-23 01:51:26.278049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.300 qpair failed and we were unable to recover it.
00:30:13.300 [2024-07-23 01:51:26.278241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.300 [2024-07-23 01:51:26.278490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.300 [2024-07-23 01:51:26.278518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.300 qpair failed and we were unable to recover it.
00:30:13.300 [2024-07-23 01:51:26.278718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.300 [2024-07-23 01:51:26.278863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.300 [2024-07-23 01:51:26.278889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.300 qpair failed and we were unable to recover it.
00:30:13.300 [2024-07-23 01:51:26.279032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.300 [2024-07-23 01:51:26.279236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.300 [2024-07-23 01:51:26.279276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.300 qpair failed and we were unable to recover it.
00:30:13.300 [2024-07-23 01:51:26.279448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.300 [2024-07-23 01:51:26.279640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.300 [2024-07-23 01:51:26.279684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.300 qpair failed and we were unable to recover it.
00:30:13.300 [2024-07-23 01:51:26.279819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.300 [2024-07-23 01:51:26.280010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.300 [2024-07-23 01:51:26.280052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.300 qpair failed and we were unable to recover it.
00:30:13.300 [2024-07-23 01:51:26.280262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.300 [2024-07-23 01:51:26.280466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.300 [2024-07-23 01:51:26.280494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.300 qpair failed and we were unable to recover it.
00:30:13.300 [2024-07-23 01:51:26.280694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.300 [2024-07-23 01:51:26.280888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.300 [2024-07-23 01:51:26.280913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.300 qpair failed and we were unable to recover it.
00:30:13.300 [2024-07-23 01:51:26.281063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.300 [2024-07-23 01:51:26.281223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.300 [2024-07-23 01:51:26.281249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.300 qpair failed and we were unable to recover it.
00:30:13.300 [2024-07-23 01:51:26.281442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.300 [2024-07-23 01:51:26.281630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.300 [2024-07-23 01:51:26.281658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.300 qpair failed and we were unable to recover it.
00:30:13.300 [2024-07-23 01:51:26.281808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.300 [2024-07-23 01:51:26.282009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.300 [2024-07-23 01:51:26.282037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.300 qpair failed and we were unable to recover it.
00:30:13.300 [2024-07-23 01:51:26.282291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.300 [2024-07-23 01:51:26.282441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.300 [2024-07-23 01:51:26.282469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.300 qpair failed and we were unable to recover it.
00:30:13.300 [2024-07-23 01:51:26.282655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.300 [2024-07-23 01:51:26.282821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.300 [2024-07-23 01:51:26.282846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.300 qpair failed and we were unable to recover it.
00:30:13.300 [2024-07-23 01:51:26.283097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.300 [2024-07-23 01:51:26.283283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.300 [2024-07-23 01:51:26.283308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.300 qpair failed and we were unable to recover it.
00:30:13.300 [2024-07-23 01:51:26.283500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.300 [2024-07-23 01:51:26.283656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.300 [2024-07-23 01:51:26.283683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.300 qpair failed and we were unable to recover it.
00:30:13.300 [2024-07-23 01:51:26.283872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.300 [2024-07-23 01:51:26.284055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.300 [2024-07-23 01:51:26.284083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.300 qpair failed and we were unable to recover it.
00:30:13.300 [2024-07-23 01:51:26.284256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.300 [2024-07-23 01:51:26.284429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.300 [2024-07-23 01:51:26.284457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.300 qpair failed and we were unable to recover it.
00:30:13.300 [2024-07-23 01:51:26.284680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.300 [2024-07-23 01:51:26.284821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.300 [2024-07-23 01:51:26.284848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.300 qpair failed and we were unable to recover it.
00:30:13.300 [2024-07-23 01:51:26.285096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.300 [2024-07-23 01:51:26.285275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.300 [2024-07-23 01:51:26.285301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.300 qpair failed and we were unable to recover it.
00:30:13.300 [2024-07-23 01:51:26.285463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.300 [2024-07-23 01:51:26.285628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.300 [2024-07-23 01:51:26.285654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.300 qpair failed and we were unable to recover it.
00:30:13.300 [2024-07-23 01:51:26.285817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.300 [2024-07-23 01:51:26.286022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.300 [2024-07-23 01:51:26.286047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.300 qpair failed and we were unable to recover it.
00:30:13.300 [2024-07-23 01:51:26.286233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.300 [2024-07-23 01:51:26.286422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.300 [2024-07-23 01:51:26.286447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.300 qpair failed and we were unable to recover it.
00:30:13.300 [2024-07-23 01:51:26.286624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.300 [2024-07-23 01:51:26.286785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.300 [2024-07-23 01:51:26.286813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.300 qpair failed and we were unable to recover it.
00:30:13.300 [2024-07-23 01:51:26.286988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.300 [2024-07-23 01:51:26.287291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.300 [2024-07-23 01:51:26.287351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.300 qpair failed and we were unable to recover it.
00:30:13.300 [2024-07-23 01:51:26.287563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.300 [2024-07-23 01:51:26.287765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.300 [2024-07-23 01:51:26.287791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.300 qpair failed and we were unable to recover it.
00:30:13.300 [2024-07-23 01:51:26.287957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.300 [2024-07-23 01:51:26.288112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.300 [2024-07-23 01:51:26.288143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.300 qpair failed and we were unable to recover it.
00:30:13.300 [2024-07-23 01:51:26.288422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.300 [2024-07-23 01:51:26.288629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.300 [2024-07-23 01:51:26.288672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.300 qpair failed and we were unable to recover it.
00:30:13.300 [2024-07-23 01:51:26.288820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.300 [2024-07-23 01:51:26.289037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.300 [2024-07-23 01:51:26.289102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.300 qpair failed and we were unable to recover it.
00:30:13.300 [2024-07-23 01:51:26.289294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.300 [2024-07-23 01:51:26.289476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.300 [2024-07-23 01:51:26.289505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.300 qpair failed and we were unable to recover it.
00:30:13.300 [2024-07-23 01:51:26.289708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.300 [2024-07-23 01:51:26.289880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.300 [2024-07-23 01:51:26.289905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.300 qpair failed and we were unable to recover it.
00:30:13.300 [2024-07-23 01:51:26.290072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.300 [2024-07-23 01:51:26.290299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.300 [2024-07-23 01:51:26.290327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.300 qpair failed and we were unable to recover it.
00:30:13.300 [2024-07-23 01:51:26.290519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.300 [2024-07-23 01:51:26.290651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.300 [2024-07-23 01:51:26.290677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.300 qpair failed and we were unable to recover it.
00:30:13.300 [2024-07-23 01:51:26.290843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.300 [2024-07-23 01:51:26.290996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.300 [2024-07-23 01:51:26.291033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.300 qpair failed and we were unable to recover it.
00:30:13.300 [2024-07-23 01:51:26.291198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.300 [2024-07-23 01:51:26.291360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.300 [2024-07-23 01:51:26.291388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.300 qpair failed and we were unable to recover it. 00:30:13.300 [2024-07-23 01:51:26.291569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.300 [2024-07-23 01:51:26.291722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.300 [2024-07-23 01:51:26.291747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.300 qpair failed and we were unable to recover it. 00:30:13.300 [2024-07-23 01:51:26.291912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.300 [2024-07-23 01:51:26.292125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.300 [2024-07-23 01:51:26.292172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.300 qpair failed and we were unable to recover it. 00:30:13.300 [2024-07-23 01:51:26.292331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.300 [2024-07-23 01:51:26.292533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.300 [2024-07-23 01:51:26.292561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.300 qpair failed and we were unable to recover it. 
00:30:13.300 [2024-07-23 01:51:26.292725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.300 [2024-07-23 01:51:26.292872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.300 [2024-07-23 01:51:26.292928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.300 qpair failed and we were unable to recover it. 00:30:13.300 [2024-07-23 01:51:26.293110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.300 [2024-07-23 01:51:26.293314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.300 [2024-07-23 01:51:26.293342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.300 qpair failed and we were unable to recover it. 00:30:13.300 [2024-07-23 01:51:26.293521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.300 [2024-07-23 01:51:26.293705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.300 [2024-07-23 01:51:26.293731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.300 qpair failed and we were unable to recover it. 00:30:13.300 [2024-07-23 01:51:26.293871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.300 [2024-07-23 01:51:26.294040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.300 [2024-07-23 01:51:26.294066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.300 qpair failed and we were unable to recover it. 
00:30:13.300 [2024-07-23 01:51:26.294226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.300 [2024-07-23 01:51:26.294398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.300 [2024-07-23 01:51:26.294425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.300 qpair failed and we were unable to recover it. 00:30:13.300 [2024-07-23 01:51:26.294594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.300 [2024-07-23 01:51:26.294741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.300 [2024-07-23 01:51:26.294767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.300 qpair failed and we were unable to recover it. 00:30:13.300 [2024-07-23 01:51:26.294902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.300 [2024-07-23 01:51:26.295078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.300 [2024-07-23 01:51:26.295108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.300 qpair failed and we were unable to recover it. 00:30:13.300 [2024-07-23 01:51:26.295302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.300 [2024-07-23 01:51:26.295478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.300 [2024-07-23 01:51:26.295506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.300 qpair failed and we were unable to recover it. 
00:30:13.300 [2024-07-23 01:51:26.295719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.300 [2024-07-23 01:51:26.295887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.300 [2024-07-23 01:51:26.295915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.300 qpair failed and we were unable to recover it. 00:30:13.300 [2024-07-23 01:51:26.296119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.300 [2024-07-23 01:51:26.296304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.301 [2024-07-23 01:51:26.296332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.301 qpair failed and we were unable to recover it. 00:30:13.301 [2024-07-23 01:51:26.296517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.301 [2024-07-23 01:51:26.296705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.301 [2024-07-23 01:51:26.296731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.301 qpair failed and we were unable to recover it. 00:30:13.301 [2024-07-23 01:51:26.296878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.301 [2024-07-23 01:51:26.297079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.301 [2024-07-23 01:51:26.297131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.301 qpair failed and we were unable to recover it. 
00:30:13.301 [2024-07-23 01:51:26.297294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.301 [2024-07-23 01:51:26.297516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.301 [2024-07-23 01:51:26.297544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.301 qpair failed and we were unable to recover it. 00:30:13.301 [2024-07-23 01:51:26.297712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.301 [2024-07-23 01:51:26.297890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.301 [2024-07-23 01:51:26.297916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.301 qpair failed and we were unable to recover it. 00:30:13.301 [2024-07-23 01:51:26.298080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.301 [2024-07-23 01:51:26.298256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.301 [2024-07-23 01:51:26.298284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.301 qpair failed and we were unable to recover it. 00:30:13.301 [2024-07-23 01:51:26.298467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.301 [2024-07-23 01:51:26.298641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.301 [2024-07-23 01:51:26.298691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.301 qpair failed and we were unable to recover it. 
00:30:13.301 [2024-07-23 01:51:26.298832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.301 [2024-07-23 01:51:26.299004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.301 [2024-07-23 01:51:26.299047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.301 qpair failed and we were unable to recover it. 00:30:13.301 [2024-07-23 01:51:26.299325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.301 [2024-07-23 01:51:26.299524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.301 [2024-07-23 01:51:26.299552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.301 qpair failed and we were unable to recover it. 00:30:13.301 [2024-07-23 01:51:26.299733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.301 [2024-07-23 01:51:26.299866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.301 [2024-07-23 01:51:26.299892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.301 qpair failed and we were unable to recover it. 00:30:13.301 [2024-07-23 01:51:26.300091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.301 [2024-07-23 01:51:26.300275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.301 [2024-07-23 01:51:26.300311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.301 qpair failed and we were unable to recover it. 
00:30:13.301 [2024-07-23 01:51:26.300488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.301 [2024-07-23 01:51:26.300687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.301 [2024-07-23 01:51:26.300714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.301 qpair failed and we were unable to recover it. 00:30:13.301 [2024-07-23 01:51:26.300859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.301 [2024-07-23 01:51:26.300994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.301 [2024-07-23 01:51:26.301019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.301 qpair failed and we were unable to recover it. 00:30:13.301 [2024-07-23 01:51:26.301239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.301 [2024-07-23 01:51:26.301419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.301 [2024-07-23 01:51:26.301447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.301 qpair failed and we were unable to recover it. 00:30:13.301 [2024-07-23 01:51:26.301632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.301 [2024-07-23 01:51:26.301772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.301 [2024-07-23 01:51:26.301798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.301 qpair failed and we were unable to recover it. 
00:30:13.301 [2024-07-23 01:51:26.301968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.301 [2024-07-23 01:51:26.302151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.301 [2024-07-23 01:51:26.302180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.301 qpair failed and we were unable to recover it. 00:30:13.301 [2024-07-23 01:51:26.302410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.301 [2024-07-23 01:51:26.302624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.301 [2024-07-23 01:51:26.302670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.301 qpair failed and we were unable to recover it. 00:30:13.301 [2024-07-23 01:51:26.302809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.301 [2024-07-23 01:51:26.302979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.301 [2024-07-23 01:51:26.303007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.301 qpair failed and we were unable to recover it. 00:30:13.301 [2024-07-23 01:51:26.303169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.301 [2024-07-23 01:51:26.303333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.301 [2024-07-23 01:51:26.303363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.301 qpair failed and we were unable to recover it. 
00:30:13.301 [2024-07-23 01:51:26.303552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.301 [2024-07-23 01:51:26.303727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.301 [2024-07-23 01:51:26.303753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.301 qpair failed and we were unable to recover it. 00:30:13.301 [2024-07-23 01:51:26.303895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.301 [2024-07-23 01:51:26.304117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.301 [2024-07-23 01:51:26.304163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.301 qpair failed and we were unable to recover it. 00:30:13.301 [2024-07-23 01:51:26.304364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.301 [2024-07-23 01:51:26.304536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.301 [2024-07-23 01:51:26.304574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.301 qpair failed and we were unable to recover it. 00:30:13.301 [2024-07-23 01:51:26.304749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.301 [2024-07-23 01:51:26.304895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.301 [2024-07-23 01:51:26.304929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.301 qpair failed and we were unable to recover it. 
00:30:13.301 [2024-07-23 01:51:26.305128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.301 [2024-07-23 01:51:26.305304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.301 [2024-07-23 01:51:26.305336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.301 qpair failed and we were unable to recover it. 00:30:13.301 [2024-07-23 01:51:26.305525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.301 [2024-07-23 01:51:26.305729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.301 [2024-07-23 01:51:26.305755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.301 qpair failed and we were unable to recover it. 00:30:13.301 [2024-07-23 01:51:26.305891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.301 [2024-07-23 01:51:26.306131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.301 [2024-07-23 01:51:26.306155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.301 qpair failed and we were unable to recover it. 00:30:13.301 [2024-07-23 01:51:26.306318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.301 [2024-07-23 01:51:26.306540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.301 [2024-07-23 01:51:26.306567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.301 qpair failed and we were unable to recover it. 
00:30:13.301 [2024-07-23 01:51:26.306738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.301 [2024-07-23 01:51:26.306886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.301 [2024-07-23 01:51:26.306921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.301 qpair failed and we were unable to recover it. 00:30:13.301 [2024-07-23 01:51:26.307113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.301 [2024-07-23 01:51:26.307265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.301 [2024-07-23 01:51:26.307295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.301 qpair failed and we were unable to recover it. 00:30:13.301 [2024-07-23 01:51:26.307480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.301 [2024-07-23 01:51:26.307677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.301 [2024-07-23 01:51:26.307703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.301 qpair failed and we were unable to recover it. 00:30:13.301 [2024-07-23 01:51:26.307843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.301 [2024-07-23 01:51:26.308070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.301 [2024-07-23 01:51:26.308095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.301 qpair failed and we were unable to recover it. 
00:30:13.301 [2024-07-23 01:51:26.308257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.301 [2024-07-23 01:51:26.308438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.301 [2024-07-23 01:51:26.308466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.301 qpair failed and we were unable to recover it. 00:30:13.301 [2024-07-23 01:51:26.308642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.301 [2024-07-23 01:51:26.308794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.301 [2024-07-23 01:51:26.308820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.301 qpair failed and we were unable to recover it. 00:30:13.301 [2024-07-23 01:51:26.309041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.301 [2024-07-23 01:51:26.309217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.301 [2024-07-23 01:51:26.309245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.301 qpair failed and we were unable to recover it. 00:30:13.301 [2024-07-23 01:51:26.309401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.301 [2024-07-23 01:51:26.309542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.301 [2024-07-23 01:51:26.309570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.301 qpair failed and we were unable to recover it. 
00:30:13.301 [2024-07-23 01:51:26.309767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.301 [2024-07-23 01:51:26.309962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.301 [2024-07-23 01:51:26.309990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.301 qpair failed and we were unable to recover it. 00:30:13.301 [2024-07-23 01:51:26.310163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.301 [2024-07-23 01:51:26.310374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.301 [2024-07-23 01:51:26.310402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.301 qpair failed and we were unable to recover it. 00:30:13.301 [2024-07-23 01:51:26.310558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.301 [2024-07-23 01:51:26.310738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.301 [2024-07-23 01:51:26.310764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.301 qpair failed and we were unable to recover it. 00:30:13.301 [2024-07-23 01:51:26.310917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.301 [2024-07-23 01:51:26.311096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.301 [2024-07-23 01:51:26.311139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.301 qpair failed and we were unable to recover it. 
00:30:13.301 [2024-07-23 01:51:26.311288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.301 [2024-07-23 01:51:26.311480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.301 [2024-07-23 01:51:26.311508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.301 qpair failed and we were unable to recover it. 00:30:13.301 [2024-07-23 01:51:26.311721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.301 [2024-07-23 01:51:26.311855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.301 [2024-07-23 01:51:26.311881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.301 qpair failed and we were unable to recover it. 00:30:13.301 [2024-07-23 01:51:26.312051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.301 [2024-07-23 01:51:26.312232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.301 [2024-07-23 01:51:26.312260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.301 qpair failed and we were unable to recover it. 00:30:13.301 [2024-07-23 01:51:26.312434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.301 [2024-07-23 01:51:26.312628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.301 [2024-07-23 01:51:26.312676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.301 qpair failed and we were unable to recover it. 
00:30:13.301 [2024-07-23 01:51:26.312818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.301 [2024-07-23 01:51:26.312980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.301 [2024-07-23 01:51:26.313008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.301 qpair failed and we were unable to recover it. 00:30:13.301 [2024-07-23 01:51:26.313189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.301 [2024-07-23 01:51:26.313341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.301 [2024-07-23 01:51:26.313368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.301 qpair failed and we were unable to recover it. 00:30:13.301 [2024-07-23 01:51:26.313513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.301 [2024-07-23 01:51:26.313686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.301 [2024-07-23 01:51:26.313712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.301 qpair failed and we were unable to recover it. 00:30:13.301 [2024-07-23 01:51:26.313859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.301 [2024-07-23 01:51:26.314050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.301 [2024-07-23 01:51:26.314078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.301 qpair failed and we were unable to recover it. 
00:30:13.301 [2024-07-23 01:51:26.314287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.301 [2024-07-23 01:51:26.314439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.301 [2024-07-23 01:51:26.314469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.301 qpair failed and we were unable to recover it. 00:30:13.301 [2024-07-23 01:51:26.314658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.301 [2024-07-23 01:51:26.314809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.302 [2024-07-23 01:51:26.314835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.302 qpair failed and we were unable to recover it. 00:30:13.302 [2024-07-23 01:51:26.315048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.302 [2024-07-23 01:51:26.315279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.302 [2024-07-23 01:51:26.315325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.302 qpair failed and we were unable to recover it. 00:30:13.302 [2024-07-23 01:51:26.315501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.302 [2024-07-23 01:51:26.315697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.302 [2024-07-23 01:51:26.315723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.302 qpair failed and we were unable to recover it. 
00:30:13.302 [2024-07-23 01:51:26.315868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.302 [2024-07-23 01:51:26.316043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.302 [2024-07-23 01:51:26.316070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.302 qpair failed and we were unable to recover it. 00:30:13.302 [2024-07-23 01:51:26.316258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.302 [2024-07-23 01:51:26.316453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.302 [2024-07-23 01:51:26.316481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.302 qpair failed and we were unable to recover it. 00:30:13.302 [2024-07-23 01:51:26.316676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.302 [2024-07-23 01:51:26.316816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.302 [2024-07-23 01:51:26.316841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.302 qpair failed and we were unable to recover it. 00:30:13.302 [2024-07-23 01:51:26.317031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.302 [2024-07-23 01:51:26.317224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.302 [2024-07-23 01:51:26.317257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.302 qpair failed and we were unable to recover it. 
00:30:13.600 [... identical records repeat through 2024-07-23 01:51:26.351704: posix.c:1032:posix_sock_create connect() failed with errno = 111 (ECONNREFUSED) and nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0xf83610 with addr=10.0.0.2, port=4420; every qpair failed and could not be recovered ...]
00:30:13.600 [2024-07-23 01:51:26.351860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.600 [2024-07-23 01:51:26.352070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.600 [2024-07-23 01:51:26.352102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.600 qpair failed and we were unable to recover it. 00:30:13.600 [2024-07-23 01:51:26.352301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.600 [2024-07-23 01:51:26.352514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.600 [2024-07-23 01:51:26.352542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.600 qpair failed and we were unable to recover it. 00:30:13.600 [2024-07-23 01:51:26.352748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.600 [2024-07-23 01:51:26.352887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.600 [2024-07-23 01:51:26.352924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.600 qpair failed and we were unable to recover it. 00:30:13.600 [2024-07-23 01:51:26.353103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.600 [2024-07-23 01:51:26.353285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.600 [2024-07-23 01:51:26.353313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.600 qpair failed and we were unable to recover it. 
00:30:13.600 [2024-07-23 01:51:26.353470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.600 [2024-07-23 01:51:26.353688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.600 [2024-07-23 01:51:26.353714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.600 qpair failed and we were unable to recover it. 00:30:13.600 [2024-07-23 01:51:26.353849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.600 [2024-07-23 01:51:26.354089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.600 [2024-07-23 01:51:26.354136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.600 qpair failed and we were unable to recover it. 00:30:13.600 [2024-07-23 01:51:26.354343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.600 [2024-07-23 01:51:26.354544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.600 [2024-07-23 01:51:26.354571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.600 qpair failed and we were unable to recover it. 00:30:13.600 [2024-07-23 01:51:26.354757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.600 [2024-07-23 01:51:26.354989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.600 [2024-07-23 01:51:26.355036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.600 qpair failed and we were unable to recover it. 
00:30:13.600 [2024-07-23 01:51:26.355261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.600 [2024-07-23 01:51:26.355442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.600 [2024-07-23 01:51:26.355470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.600 qpair failed and we were unable to recover it. 00:30:13.600 [2024-07-23 01:51:26.355670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.600 [2024-07-23 01:51:26.355847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.600 [2024-07-23 01:51:26.355873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.600 qpair failed and we were unable to recover it. 00:30:13.600 [2024-07-23 01:51:26.356090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.600 [2024-07-23 01:51:26.356272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.600 [2024-07-23 01:51:26.356317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.600 qpair failed and we were unable to recover it. 00:30:13.600 [2024-07-23 01:51:26.356520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.600 [2024-07-23 01:51:26.356717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.600 [2024-07-23 01:51:26.356742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.600 qpair failed and we were unable to recover it. 
00:30:13.600 [2024-07-23 01:51:26.356877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.600 [2024-07-23 01:51:26.357075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.600 [2024-07-23 01:51:26.357100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.600 qpair failed and we were unable to recover it. 00:30:13.600 [2024-07-23 01:51:26.357265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.600 [2024-07-23 01:51:26.357461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.600 [2024-07-23 01:51:26.357486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.600 qpair failed and we were unable to recover it. 00:30:13.600 [2024-07-23 01:51:26.357645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.600 [2024-07-23 01:51:26.357783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.600 [2024-07-23 01:51:26.357808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.600 qpair failed and we were unable to recover it. 00:30:13.600 [2024-07-23 01:51:26.357963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.600 [2024-07-23 01:51:26.358117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.600 [2024-07-23 01:51:26.358145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.600 qpair failed and we were unable to recover it. 
00:30:13.600 [2024-07-23 01:51:26.358341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.600 [2024-07-23 01:51:26.358560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.600 [2024-07-23 01:51:26.358586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.600 qpair failed and we were unable to recover it. 00:30:13.600 [2024-07-23 01:51:26.358734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.600 [2024-07-23 01:51:26.359495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.600 [2024-07-23 01:51:26.359528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.600 qpair failed and we were unable to recover it. 00:30:13.600 [2024-07-23 01:51:26.359722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.600 [2024-07-23 01:51:26.359859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.600 [2024-07-23 01:51:26.359886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.600 qpair failed and we were unable to recover it. 00:30:13.600 [2024-07-23 01:51:26.360155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.600 [2024-07-23 01:51:26.360356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.600 [2024-07-23 01:51:26.360385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.600 qpair failed and we were unable to recover it. 
00:30:13.600 [2024-07-23 01:51:26.360568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.600 [2024-07-23 01:51:26.360749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.600 [2024-07-23 01:51:26.360776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.600 qpair failed and we were unable to recover it. 00:30:13.600 [2024-07-23 01:51:26.360936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.600 [2024-07-23 01:51:26.361152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.600 [2024-07-23 01:51:26.361198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.600 qpair failed and we were unable to recover it. 00:30:13.600 [2024-07-23 01:51:26.361418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.600 [2024-07-23 01:51:26.361669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.600 [2024-07-23 01:51:26.361695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.600 qpair failed and we were unable to recover it. 00:30:13.600 [2024-07-23 01:51:26.361836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.600 [2024-07-23 01:51:26.362015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.600 [2024-07-23 01:51:26.362051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.600 qpair failed and we were unable to recover it. 
00:30:13.600 [2024-07-23 01:51:26.362248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.600 [2024-07-23 01:51:26.362452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.600 [2024-07-23 01:51:26.362481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.600 qpair failed and we were unable to recover it. 00:30:13.600 [2024-07-23 01:51:26.362632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.600 [2024-07-23 01:51:26.362791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.600 [2024-07-23 01:51:26.362816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.601 qpair failed and we were unable to recover it. 00:30:13.601 [2024-07-23 01:51:26.363017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.601 [2024-07-23 01:51:26.363211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.601 [2024-07-23 01:51:26.363236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.601 qpair failed and we were unable to recover it. 00:30:13.601 [2024-07-23 01:51:26.363430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.601 [2024-07-23 01:51:26.363600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.601 [2024-07-23 01:51:26.363639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.601 qpair failed and we were unable to recover it. 
00:30:13.601 [2024-07-23 01:51:26.363796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.601 [2024-07-23 01:51:26.363957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.601 [2024-07-23 01:51:26.363999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.601 qpair failed and we were unable to recover it. 00:30:13.601 [2024-07-23 01:51:26.364262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.601 [2024-07-23 01:51:26.364502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.601 [2024-07-23 01:51:26.364551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.601 qpair failed and we were unable to recover it. 00:30:13.601 [2024-07-23 01:51:26.364733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.601 [2024-07-23 01:51:26.365450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.601 [2024-07-23 01:51:26.365482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.601 qpair failed and we were unable to recover it. 00:30:13.601 [2024-07-23 01:51:26.365708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.601 [2024-07-23 01:51:26.365848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.601 [2024-07-23 01:51:26.365873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.601 qpair failed and we were unable to recover it. 
00:30:13.601 [2024-07-23 01:51:26.366059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.601 [2024-07-23 01:51:26.366215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.601 [2024-07-23 01:51:26.366245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.601 qpair failed and we were unable to recover it. 00:30:13.601 [2024-07-23 01:51:26.366430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.601 [2024-07-23 01:51:26.366585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.601 [2024-07-23 01:51:26.366640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.601 qpair failed and we were unable to recover it. 00:30:13.601 [2024-07-23 01:51:26.366784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.601 [2024-07-23 01:51:26.366937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.601 [2024-07-23 01:51:26.366962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.601 qpair failed and we were unable to recover it. 00:30:13.601 [2024-07-23 01:51:26.367172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.601 [2024-07-23 01:51:26.367405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.601 [2024-07-23 01:51:26.367432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.601 qpair failed and we were unable to recover it. 
00:30:13.601 [2024-07-23 01:51:26.367629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.601 [2024-07-23 01:51:26.367813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.601 [2024-07-23 01:51:26.367838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.601 qpair failed and we were unable to recover it. 00:30:13.601 [2024-07-23 01:51:26.367992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.601 [2024-07-23 01:51:26.368150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.601 [2024-07-23 01:51:26.368175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.601 qpair failed and we were unable to recover it. 00:30:13.601 [2024-07-23 01:51:26.368364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.601 [2024-07-23 01:51:26.368584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.601 [2024-07-23 01:51:26.368639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.601 qpair failed and we were unable to recover it. 00:30:13.601 [2024-07-23 01:51:26.368800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.601 [2024-07-23 01:51:26.368939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.601 [2024-07-23 01:51:26.368993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.601 qpair failed and we were unable to recover it. 
00:30:13.601 [2024-07-23 01:51:26.369173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.601 [2024-07-23 01:51:26.369364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.601 [2024-07-23 01:51:26.369411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.601 qpair failed and we were unable to recover it. 00:30:13.601 [2024-07-23 01:51:26.369611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.601 [2024-07-23 01:51:26.369784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.601 [2024-07-23 01:51:26.369809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.601 qpair failed and we were unable to recover it. 00:30:13.601 [2024-07-23 01:51:26.369975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.601 [2024-07-23 01:51:26.370158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.601 [2024-07-23 01:51:26.370202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.601 qpair failed and we were unable to recover it. 00:30:13.601 [2024-07-23 01:51:26.370361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.601 [2024-07-23 01:51:26.370527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.601 [2024-07-23 01:51:26.370552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.601 qpair failed and we were unable to recover it. 
00:30:13.601 [2024-07-23 01:51:26.370700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.601 [2024-07-23 01:51:26.370837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.601 [2024-07-23 01:51:26.370862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.601 qpair failed and we were unable to recover it. 00:30:13.601 [2024-07-23 01:51:26.371063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.601 [2024-07-23 01:51:26.371229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.601 [2024-07-23 01:51:26.371254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.601 qpair failed and we were unable to recover it. 00:30:13.601 [2024-07-23 01:51:26.371421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.601 [2024-07-23 01:51:26.371587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.601 [2024-07-23 01:51:26.371626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.601 qpair failed and we were unable to recover it. 00:30:13.601 [2024-07-23 01:51:26.371770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.601 [2024-07-23 01:51:26.371939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.601 [2024-07-23 01:51:26.371964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.601 qpair failed and we were unable to recover it. 
00:30:13.601 [2024-07-23 01:51:26.372138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.601 [2024-07-23 01:51:26.372291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.601 [2024-07-23 01:51:26.372321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.601 qpair failed and we were unable to recover it. 00:30:13.601 [2024-07-23 01:51:26.372504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.601 [2024-07-23 01:51:26.372707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.601 [2024-07-23 01:51:26.372735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.601 qpair failed and we were unable to recover it. 00:30:13.601 [2024-07-23 01:51:26.372907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.601 [2024-07-23 01:51:26.373069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.601 [2024-07-23 01:51:26.373100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.601 qpair failed and we were unable to recover it. 00:30:13.601 [2024-07-23 01:51:26.373272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.601 [2024-07-23 01:51:26.373443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.601 [2024-07-23 01:51:26.373469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.601 qpair failed and we were unable to recover it. 
00:30:13.601 [2024-07-23 01:51:26.373640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.601 [2024-07-23 01:51:26.373798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.601 [2024-07-23 01:51:26.373823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.601 qpair failed and we were unable to recover it. 00:30:13.601 [2024-07-23 01:51:26.374027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.601 [2024-07-23 01:51:26.374189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.601 [2024-07-23 01:51:26.374234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.601 qpair failed and we were unable to recover it. 00:30:13.601 [2024-07-23 01:51:26.374456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.601 [2024-07-23 01:51:26.374654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.601 [2024-07-23 01:51:26.374679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.601 qpair failed and we were unable to recover it. 00:30:13.601 [2024-07-23 01:51:26.374822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.601 [2024-07-23 01:51:26.374984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.601 [2024-07-23 01:51:26.375013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.601 qpair failed and we were unable to recover it. 
00:30:13.601 [2024-07-23 01:51:26.375227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.601 [2024-07-23 01:51:26.375467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.601 [2024-07-23 01:51:26.375503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.601 qpair failed and we were unable to recover it. 00:30:13.601 [2024-07-23 01:51:26.375729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.601 [2024-07-23 01:51:26.375899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.601 [2024-07-23 01:51:26.375943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.601 qpair failed and we were unable to recover it. 00:30:13.601 [2024-07-23 01:51:26.376104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.601 [2024-07-23 01:51:26.376295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.601 [2024-07-23 01:51:26.376329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.601 qpair failed and we were unable to recover it. 00:30:13.601 [2024-07-23 01:51:26.376544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.602 [2024-07-23 01:51:26.376739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.602 [2024-07-23 01:51:26.376766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.602 qpair failed and we were unable to recover it. 
00:30:13.602 [2024-07-23 01:51:26.376930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.602 [2024-07-23 01:51:26.377127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.602 [2024-07-23 01:51:26.377159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.602 qpair failed and we were unable to recover it. 00:30:13.602 [2024-07-23 01:51:26.377425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.602 [2024-07-23 01:51:26.377641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.602 [2024-07-23 01:51:26.377667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.602 qpair failed and we were unable to recover it. 00:30:13.602 [2024-07-23 01:51:26.377810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.602 [2024-07-23 01:51:26.377967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.602 [2024-07-23 01:51:26.377996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.602 qpair failed and we were unable to recover it. 00:30:13.602 [2024-07-23 01:51:26.378178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.602 [2024-07-23 01:51:26.378365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.602 [2024-07-23 01:51:26.378393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.602 qpair failed and we were unable to recover it. 
00:30:13.602 [2024-07-23 01:51:26.378579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.602 [2024-07-23 01:51:26.378747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.602 [2024-07-23 01:51:26.378773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.602 qpair failed and we were unable to recover it. 00:30:13.602 [2024-07-23 01:51:26.378934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.602 [2024-07-23 01:51:26.379150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.602 [2024-07-23 01:51:26.379176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.602 qpair failed and we were unable to recover it. 00:30:13.602 [2024-07-23 01:51:26.379344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.602 [2024-07-23 01:51:26.379534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.602 [2024-07-23 01:51:26.379563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.602 qpair failed and we were unable to recover it. 00:30:13.602 [2024-07-23 01:51:26.379747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.602 [2024-07-23 01:51:26.379894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.602 [2024-07-23 01:51:26.379931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.602 qpair failed and we were unable to recover it. 
00:30:13.602 [2024-07-23 01:51:26.380137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.602 [2024-07-23 01:51:26.380366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.602 [2024-07-23 01:51:26.380412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.602 qpair failed and we were unable to recover it. 00:30:13.602 [2024-07-23 01:51:26.380637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.602 [2024-07-23 01:51:26.380779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.602 [2024-07-23 01:51:26.380804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.602 qpair failed and we were unable to recover it. 00:30:13.602 [2024-07-23 01:51:26.380932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.602 [2024-07-23 01:51:26.381074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.602 [2024-07-23 01:51:26.381102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.602 qpair failed and we were unable to recover it. 00:30:13.602 [2024-07-23 01:51:26.381409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.602 [2024-07-23 01:51:26.381574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.602 [2024-07-23 01:51:26.381620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.602 qpair failed and we were unable to recover it. 
00:30:13.602 [2024-07-23 01:51:26.381768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.602 [2024-07-23 01:51:26.381903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.602 [2024-07-23 01:51:26.381938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.602 qpair failed and we were unable to recover it. 00:30:13.602 [2024-07-23 01:51:26.382141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.602 [2024-07-23 01:51:26.382426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.602 [2024-07-23 01:51:26.382473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.602 qpair failed and we were unable to recover it. 00:30:13.602 [2024-07-23 01:51:26.382694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.602 [2024-07-23 01:51:26.382856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.602 [2024-07-23 01:51:26.382882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.602 qpair failed and we were unable to recover it. 00:30:13.602 [2024-07-23 01:51:26.383048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.602 [2024-07-23 01:51:26.383239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.602 [2024-07-23 01:51:26.383265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.602 qpair failed and we were unable to recover it. 
00:30:13.602 [2024-07-23 01:51:26.383399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.602 [2024-07-23 01:51:26.383563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.602 [2024-07-23 01:51:26.383589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.602 qpair failed and we were unable to recover it. 00:30:13.602 [2024-07-23 01:51:26.383736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.602 [2024-07-23 01:51:26.383888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.602 [2024-07-23 01:51:26.383925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.602 qpair failed and we were unable to recover it. 00:30:13.602 [2024-07-23 01:51:26.384135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.602 [2024-07-23 01:51:26.384341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.602 [2024-07-23 01:51:26.384385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.602 qpair failed and we were unable to recover it. 00:30:13.602 [2024-07-23 01:51:26.384569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.602 [2024-07-23 01:51:26.384752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.602 [2024-07-23 01:51:26.384779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.602 qpair failed and we were unable to recover it. 
00:30:13.602 [2024-07-23 01:51:26.384926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.602 [2024-07-23 01:51:26.385084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.602 [2024-07-23 01:51:26.385110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.602 qpair failed and we were unable to recover it. 00:30:13.602 [2024-07-23 01:51:26.385253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.602 [2024-07-23 01:51:26.385428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.602 [2024-07-23 01:51:26.385458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.602 qpair failed and we were unable to recover it. 00:30:13.602 [2024-07-23 01:51:26.385674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.602 [2024-07-23 01:51:26.385811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.602 [2024-07-23 01:51:26.385836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.602 qpair failed and we were unable to recover it. 00:30:13.602 [2024-07-23 01:51:26.386007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.602 [2024-07-23 01:51:26.386189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.602 [2024-07-23 01:51:26.386217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.602 qpair failed and we were unable to recover it. 
00:30:13.602 [2024-07-23 01:51:26.386369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.602 [2024-07-23 01:51:26.386545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.602 [2024-07-23 01:51:26.386573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.602 qpair failed and we were unable to recover it. 00:30:13.602 [2024-07-23 01:51:26.386738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.602 [2024-07-23 01:51:26.386886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.602 [2024-07-23 01:51:26.386934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.602 qpair failed and we were unable to recover it. 00:30:13.602 [2024-07-23 01:51:26.387124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.602 [2024-07-23 01:51:26.387300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.602 [2024-07-23 01:51:26.387346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.602 qpair failed and we were unable to recover it. 00:30:13.602 [2024-07-23 01:51:26.387517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.602 [2024-07-23 01:51:26.387713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.602 [2024-07-23 01:51:26.387743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.602 qpair failed and we were unable to recover it. 
00:30:13.602 [2024-07-23 01:51:26.387881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.602 [2024-07-23 01:51:26.388092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.602 [2024-07-23 01:51:26.388119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.602 qpair failed and we were unable to recover it. 00:30:13.602 [2024-07-23 01:51:26.388336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.602 [2024-07-23 01:51:26.388504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.602 [2024-07-23 01:51:26.388529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.602 qpair failed and we were unable to recover it. 00:30:13.602 [2024-07-23 01:51:26.388666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.602 [2024-07-23 01:51:26.388793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.602 [2024-07-23 01:51:26.388818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.602 qpair failed and we were unable to recover it. 00:30:13.602 [2024-07-23 01:51:26.388989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.602 [2024-07-23 01:51:26.389144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.602 [2024-07-23 01:51:26.389170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.602 qpair failed and we were unable to recover it. 
00:30:13.602 [2024-07-23 01:51:26.389334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.602 [2024-07-23 01:51:26.389466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.602 [2024-07-23 01:51:26.389492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.602 qpair failed and we were unable to recover it. 00:30:13.602 [2024-07-23 01:51:26.389682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.602 [2024-07-23 01:51:26.389814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.602 [2024-07-23 01:51:26.389840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.602 qpair failed and we were unable to recover it. 00:30:13.602 [2024-07-23 01:51:26.390029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.602 [2024-07-23 01:51:26.390227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.602 [2024-07-23 01:51:26.390273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.602 qpair failed and we were unable to recover it. 00:30:13.602 [2024-07-23 01:51:26.390457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.602 [2024-07-23 01:51:26.390628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.602 [2024-07-23 01:51:26.390654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.602 qpair failed and we were unable to recover it. 
00:30:13.602 [2024-07-23 01:51:26.390796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.602 [2024-07-23 01:51:26.390938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.602 [2024-07-23 01:51:26.390963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.602 qpair failed and we were unable to recover it. 00:30:13.602 [2024-07-23 01:51:26.391195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.603 [2024-07-23 01:51:26.391417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.603 [2024-07-23 01:51:26.391467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.603 qpair failed and we were unable to recover it. 00:30:13.603 [2024-07-23 01:51:26.391664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.603 [2024-07-23 01:51:26.391806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.603 [2024-07-23 01:51:26.391831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.603 qpair failed and we were unable to recover it. 00:30:13.603 [2024-07-23 01:51:26.392005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.603 [2024-07-23 01:51:26.392194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.603 [2024-07-23 01:51:26.392221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.603 qpair failed and we were unable to recover it. 
00:30:13.603 [2024-07-23 01:51:26.392422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.603 [2024-07-23 01:51:26.392585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.603 [2024-07-23 01:51:26.392626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.603 qpair failed and we were unable to recover it. 00:30:13.603 [2024-07-23 01:51:26.392768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.603 [2024-07-23 01:51:26.392911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.603 [2024-07-23 01:51:26.392942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.603 qpair failed and we were unable to recover it. 00:30:13.603 [2024-07-23 01:51:26.393082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.603 [2024-07-23 01:51:26.393301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.603 [2024-07-23 01:51:26.393330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.603 qpair failed and we were unable to recover it. 00:30:13.603 [2024-07-23 01:51:26.393581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.603 [2024-07-23 01:51:26.393744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.603 [2024-07-23 01:51:26.393770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.603 qpair failed and we were unable to recover it. 
00:30:13.603 [2024-07-23 01:51:26.393906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.603 [2024-07-23 01:51:26.394107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.603 [2024-07-23 01:51:26.394133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.603 qpair failed and we were unable to recover it. 00:30:13.603 [2024-07-23 01:51:26.394335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.603 [2024-07-23 01:51:26.394492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.603 [2024-07-23 01:51:26.394519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.603 qpair failed and we were unable to recover it. 00:30:13.603 [2024-07-23 01:51:26.394719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.603 [2024-07-23 01:51:26.394858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.603 [2024-07-23 01:51:26.394883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.603 qpair failed and we were unable to recover it. 00:30:13.603 [2024-07-23 01:51:26.395078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.603 [2024-07-23 01:51:26.395250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.603 [2024-07-23 01:51:26.395298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.603 qpair failed and we were unable to recover it. 
00:30:13.603 [2024-07-23 01:51:26.395582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.603 [2024-07-23 01:51:26.395752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.603 [2024-07-23 01:51:26.395778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.603 qpair failed and we were unable to recover it. 00:30:13.603 [2024-07-23 01:51:26.395925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.603 [2024-07-23 01:51:26.396138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.603 [2024-07-23 01:51:26.396166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.603 qpair failed and we were unable to recover it. 00:30:13.603 [2024-07-23 01:51:26.396415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.603 [2024-07-23 01:51:26.396588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.603 [2024-07-23 01:51:26.396626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.603 qpair failed and we were unable to recover it. 00:30:13.603 [2024-07-23 01:51:26.396776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.603 [2024-07-23 01:51:26.396952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.603 [2024-07-23 01:51:26.397009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.603 qpair failed and we were unable to recover it. 
00:30:13.603 [2024-07-23 01:51:26.397201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.603 [2024-07-23 01:51:26.397452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.603 [2024-07-23 01:51:26.397497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.603 qpair failed and we were unable to recover it. 00:30:13.603 [2024-07-23 01:51:26.397693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.603 [2024-07-23 01:51:26.397830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.603 [2024-07-23 01:51:26.397855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.603 qpair failed and we were unable to recover it. 00:30:13.603 [2024-07-23 01:51:26.398027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.603 [2024-07-23 01:51:26.398206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.603 [2024-07-23 01:51:26.398235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.603 qpair failed and we were unable to recover it. 00:30:13.603 [2024-07-23 01:51:26.398389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.603 [2024-07-23 01:51:26.398596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.603 [2024-07-23 01:51:26.398640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.603 qpair failed and we were unable to recover it. 
00:30:13.603 [2024-07-23 01:51:26.398803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.603 [2024-07-23 01:51:26.399007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.603 [2024-07-23 01:51:26.399035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.603 qpair failed and we were unable to recover it. 00:30:13.603 [2024-07-23 01:51:26.399244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.603 [2024-07-23 01:51:26.399455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.603 [2024-07-23 01:51:26.399502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.603 qpair failed and we were unable to recover it. 00:30:13.603 [2024-07-23 01:51:26.399711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.603 [2024-07-23 01:51:26.399847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.603 [2024-07-23 01:51:26.399872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.603 qpair failed and we were unable to recover it. 00:30:13.603 [2024-07-23 01:51:26.400075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.603 [2024-07-23 01:51:26.400247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.603 [2024-07-23 01:51:26.400296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.603 qpair failed and we were unable to recover it. 
00:30:13.603 [2024-07-23 01:51:26.400519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.603 [2024-07-23 01:51:26.400679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.603 [2024-07-23 01:51:26.400705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.603 qpair failed and we were unable to recover it. 00:30:13.603 [2024-07-23 01:51:26.400852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.603 [2024-07-23 01:51:26.401002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.603 [2024-07-23 01:51:26.401027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.603 qpair failed and we were unable to recover it. 00:30:13.603 [2024-07-23 01:51:26.401245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.603 [2024-07-23 01:51:26.401474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.603 [2024-07-23 01:51:26.401502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.603 qpair failed and we were unable to recover it. 00:30:13.603 [2024-07-23 01:51:26.401669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.603 [2024-07-23 01:51:26.401814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.603 [2024-07-23 01:51:26.401839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.603 qpair failed and we were unable to recover it. 
00:30:13.603 [2024-07-23 01:51:26.401988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.603 [2024-07-23 01:51:26.402151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.603 [2024-07-23 01:51:26.402176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.603 qpair failed and we were unable to recover it. 00:30:13.603 [2024-07-23 01:51:26.402336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.603 [2024-07-23 01:51:26.402495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.603 [2024-07-23 01:51:26.402520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.603 qpair failed and we were unable to recover it. 00:30:13.603 [2024-07-23 01:51:26.402711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.603 [2024-07-23 01:51:26.402867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.603 [2024-07-23 01:51:26.402892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.603 qpair failed and we were unable to recover it. 00:30:13.603 [2024-07-23 01:51:26.403062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.603 [2024-07-23 01:51:26.403226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.603 [2024-07-23 01:51:26.403266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.603 qpair failed and we were unable to recover it. 
00:30:13.603 [2024-07-23 01:51:26.403517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.603 [2024-07-23 01:51:26.403722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.603 [2024-07-23 01:51:26.403748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.603 qpair failed and we were unable to recover it. 00:30:13.603 [2024-07-23 01:51:26.403894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.603 [2024-07-23 01:51:26.404113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.603 [2024-07-23 01:51:26.404140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.603 qpair failed and we were unable to recover it. 00:30:13.603 [2024-07-23 01:51:26.404378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.603 [2024-07-23 01:51:26.404571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.603 [2024-07-23 01:51:26.404601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.603 qpair failed and we were unable to recover it. 00:30:13.603 [2024-07-23 01:51:26.404775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.603 [2024-07-23 01:51:26.404899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.603 [2024-07-23 01:51:26.404933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.603 qpair failed and we were unable to recover it. 
00:30:13.606 [... same connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock error sequence repeats through 01:51:26.439407 ...]
00:30:13.606 [2024-07-23 01:51:26.439600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.606 [2024-07-23 01:51:26.439804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.606 [2024-07-23 01:51:26.439829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.606 qpair failed and we were unable to recover it. 00:30:13.606 [2024-07-23 01:51:26.440017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.606 [2024-07-23 01:51:26.440232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.606 [2024-07-23 01:51:26.440260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.606 qpair failed and we were unable to recover it. 00:30:13.606 [2024-07-23 01:51:26.440439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.606 [2024-07-23 01:51:26.440625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.606 [2024-07-23 01:51:26.440669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.606 qpair failed and we were unable to recover it. 00:30:13.606 [2024-07-23 01:51:26.440863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.606 [2024-07-23 01:51:26.441182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.606 [2024-07-23 01:51:26.441237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.606 qpair failed and we were unable to recover it. 
00:30:13.606 [2024-07-23 01:51:26.441601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.606 [2024-07-23 01:51:26.441801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.606 [2024-07-23 01:51:26.441825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.606 qpair failed and we were unable to recover it. 00:30:13.606 [2024-07-23 01:51:26.441971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.606 [2024-07-23 01:51:26.442162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.606 [2024-07-23 01:51:26.442187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.606 qpair failed and we were unable to recover it. 00:30:13.606 [2024-07-23 01:51:26.442350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.606 [2024-07-23 01:51:26.442520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.606 [2024-07-23 01:51:26.442548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.606 qpair failed and we were unable to recover it. 00:30:13.606 [2024-07-23 01:51:26.442736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.606 [2024-07-23 01:51:26.442902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.606 [2024-07-23 01:51:26.442927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.606 qpair failed and we were unable to recover it. 
00:30:13.606 [2024-07-23 01:51:26.443083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.606 [2024-07-23 01:51:26.443269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.606 [2024-07-23 01:51:26.443301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.606 qpair failed and we were unable to recover it. 00:30:13.606 [2024-07-23 01:51:26.443491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.606 [2024-07-23 01:51:26.443680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.606 [2024-07-23 01:51:26.443706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.606 qpair failed and we were unable to recover it. 00:30:13.606 [2024-07-23 01:51:26.443854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.606 [2024-07-23 01:51:26.444068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.606 [2024-07-23 01:51:26.444096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.606 qpair failed and we were unable to recover it. 00:30:13.606 [2024-07-23 01:51:26.444290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.606 [2024-07-23 01:51:26.444424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.606 [2024-07-23 01:51:26.444449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.606 qpair failed and we were unable to recover it. 
00:30:13.606 [2024-07-23 01:51:26.444657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.606 [2024-07-23 01:51:26.444823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.606 [2024-07-23 01:51:26.444848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.606 qpair failed and we were unable to recover it. 00:30:13.606 [2024-07-23 01:51:26.445077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.606 [2024-07-23 01:51:26.445279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.606 [2024-07-23 01:51:26.445308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.606 qpair failed and we were unable to recover it. 00:30:13.606 [2024-07-23 01:51:26.445516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.606 [2024-07-23 01:51:26.445712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.606 [2024-07-23 01:51:26.445743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.606 qpair failed and we were unable to recover it. 00:30:13.606 [2024-07-23 01:51:26.445925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.606 [2024-07-23 01:51:26.446135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.606 [2024-07-23 01:51:26.446163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.606 qpair failed and we were unable to recover it. 
00:30:13.606 [2024-07-23 01:51:26.446375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.606 [2024-07-23 01:51:26.446549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.606 [2024-07-23 01:51:26.446577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.606 qpair failed and we were unable to recover it. 00:30:13.606 [2024-07-23 01:51:26.446793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.606 [2024-07-23 01:51:26.446953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.606 [2024-07-23 01:51:26.446995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.606 qpair failed and we were unable to recover it. 00:30:13.606 [2024-07-23 01:51:26.447210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.606 [2024-07-23 01:51:26.447367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.606 [2024-07-23 01:51:26.447408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.606 qpair failed and we were unable to recover it. 00:30:13.606 [2024-07-23 01:51:26.447626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.606 [2024-07-23 01:51:26.447805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.606 [2024-07-23 01:51:26.447833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.606 qpair failed and we were unable to recover it. 
00:30:13.606 [2024-07-23 01:51:26.448019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.606 [2024-07-23 01:51:26.448279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.606 [2024-07-23 01:51:26.448330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.606 qpair failed and we were unable to recover it. 00:30:13.606 [2024-07-23 01:51:26.448522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.606 [2024-07-23 01:51:26.448714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.606 [2024-07-23 01:51:26.448741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.606 qpair failed and we were unable to recover it. 00:30:13.606 [2024-07-23 01:51:26.448951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.607 [2024-07-23 01:51:26.449117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.607 [2024-07-23 01:51:26.449163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.607 qpair failed and we were unable to recover it. 00:30:13.607 [2024-07-23 01:51:26.449335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.607 [2024-07-23 01:51:26.449523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.607 [2024-07-23 01:51:26.449552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.607 qpair failed and we were unable to recover it. 
00:30:13.607 [2024-07-23 01:51:26.449717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.607 [2024-07-23 01:51:26.449852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.607 [2024-07-23 01:51:26.449882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.607 qpair failed and we were unable to recover it. 00:30:13.607 [2024-07-23 01:51:26.450071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.607 [2024-07-23 01:51:26.450287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.607 [2024-07-23 01:51:26.450336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.607 qpair failed and we were unable to recover it. 00:30:13.607 [2024-07-23 01:51:26.450529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.607 [2024-07-23 01:51:26.450712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.607 [2024-07-23 01:51:26.450741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.607 qpair failed and we were unable to recover it. 00:30:13.607 [2024-07-23 01:51:26.450921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.607 [2024-07-23 01:51:26.451119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.607 [2024-07-23 01:51:26.451165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.607 qpair failed and we were unable to recover it. 
00:30:13.607 [2024-07-23 01:51:26.451354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.607 [2024-07-23 01:51:26.451527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.607 [2024-07-23 01:51:26.451555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.607 qpair failed and we were unable to recover it. 00:30:13.607 [2024-07-23 01:51:26.451755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.607 [2024-07-23 01:51:26.451963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.607 [2024-07-23 01:51:26.451991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.607 qpair failed and we were unable to recover it. 00:30:13.607 [2024-07-23 01:51:26.452172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.607 [2024-07-23 01:51:26.452401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.607 [2024-07-23 01:51:26.452447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.607 qpair failed and we were unable to recover it. 00:30:13.607 [2024-07-23 01:51:26.452667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.607 [2024-07-23 01:51:26.452829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.607 [2024-07-23 01:51:26.452854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.607 qpair failed and we were unable to recover it. 
00:30:13.607 [2024-07-23 01:51:26.453054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.607 [2024-07-23 01:51:26.453260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.607 [2024-07-23 01:51:26.453288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.607 qpair failed and we were unable to recover it. 00:30:13.607 [2024-07-23 01:51:26.453464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.607 [2024-07-23 01:51:26.453675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.607 [2024-07-23 01:51:26.453717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.607 qpair failed and we were unable to recover it. 00:30:13.607 [2024-07-23 01:51:26.453880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.607 [2024-07-23 01:51:26.454099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.607 [2024-07-23 01:51:26.454127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.607 qpair failed and we were unable to recover it. 00:30:13.607 [2024-07-23 01:51:26.454312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.607 [2024-07-23 01:51:26.454458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.607 [2024-07-23 01:51:26.454488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.607 qpair failed and we were unable to recover it. 
00:30:13.607 [2024-07-23 01:51:26.454659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.607 [2024-07-23 01:51:26.454845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.607 [2024-07-23 01:51:26.454874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.607 qpair failed and we were unable to recover it. 00:30:13.607 [2024-07-23 01:51:26.455071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.607 [2024-07-23 01:51:26.455288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.607 [2024-07-23 01:51:26.455334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.607 qpair failed and we were unable to recover it. 00:30:13.607 [2024-07-23 01:51:26.455500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.607 [2024-07-23 01:51:26.455690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.607 [2024-07-23 01:51:26.455719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.607 qpair failed and we were unable to recover it. 00:30:13.607 [2024-07-23 01:51:26.455929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.607 [2024-07-23 01:51:26.456135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.607 [2024-07-23 01:51:26.456163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.607 qpair failed and we were unable to recover it. 
00:30:13.607 [2024-07-23 01:51:26.456354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.607 [2024-07-23 01:51:26.456557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.607 [2024-07-23 01:51:26.456585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.607 qpair failed and we were unable to recover it. 00:30:13.607 [2024-07-23 01:51:26.456770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.607 [2024-07-23 01:51:26.456990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.607 [2024-07-23 01:51:26.457022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.607 qpair failed and we were unable to recover it. 00:30:13.607 [2024-07-23 01:51:26.457216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.607 [2024-07-23 01:51:26.457440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.607 [2024-07-23 01:51:26.457491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.607 qpair failed and we were unable to recover it. 00:30:13.607 [2024-07-23 01:51:26.457661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.607 [2024-07-23 01:51:26.457849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.607 [2024-07-23 01:51:26.457874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.607 qpair failed and we were unable to recover it. 
00:30:13.607 [2024-07-23 01:51:26.458079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.607 [2024-07-23 01:51:26.458289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.607 [2024-07-23 01:51:26.458316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.607 qpair failed and we were unable to recover it. 00:30:13.607 [2024-07-23 01:51:26.458481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.607 [2024-07-23 01:51:26.458678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.607 [2024-07-23 01:51:26.458704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.607 qpair failed and we were unable to recover it. 00:30:13.607 [2024-07-23 01:51:26.458850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.607 [2024-07-23 01:51:26.458999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.607 [2024-07-23 01:51:26.459024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.607 qpair failed and we were unable to recover it. 00:30:13.607 [2024-07-23 01:51:26.459187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.607 [2024-07-23 01:51:26.459354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.607 [2024-07-23 01:51:26.459382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.607 qpair failed and we were unable to recover it. 
00:30:13.607 [2024-07-23 01:51:26.459566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.607 [2024-07-23 01:51:26.459777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.607 [2024-07-23 01:51:26.459806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.607 qpair failed and we were unable to recover it. 00:30:13.607 [2024-07-23 01:51:26.460015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.607 [2024-07-23 01:51:26.460265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.607 [2024-07-23 01:51:26.460311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.607 qpair failed and we were unable to recover it. 00:30:13.607 [2024-07-23 01:51:26.460488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.607 [2024-07-23 01:51:26.460671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.607 [2024-07-23 01:51:26.460697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.607 qpair failed and we were unable to recover it. 00:30:13.607 [2024-07-23 01:51:26.460843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.607 [2024-07-23 01:51:26.461063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.607 [2024-07-23 01:51:26.461091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.607 qpair failed and we were unable to recover it. 
00:30:13.607 [2024-07-23 01:51:26.461304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.607 [2024-07-23 01:51:26.461450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.607 [2024-07-23 01:51:26.461477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.607 qpair failed and we were unable to recover it. 00:30:13.607 [2024-07-23 01:51:26.461638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.607 [2024-07-23 01:51:26.461796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.607 [2024-07-23 01:51:26.461822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.607 qpair failed and we were unable to recover it. 00:30:13.607 [2024-07-23 01:51:26.461964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.607 [2024-07-23 01:51:26.462168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.607 [2024-07-23 01:51:26.462200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.607 qpair failed and we were unable to recover it. 00:30:13.607 [2024-07-23 01:51:26.462369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.607 [2024-07-23 01:51:26.462581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.607 [2024-07-23 01:51:26.462609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.607 qpair failed and we were unable to recover it. 
00:30:13.607 [2024-07-23 01:51:26.462802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.607 [2024-07-23 01:51:26.462949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.607 [2024-07-23 01:51:26.462974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.607 qpair failed and we were unable to recover it.
[... the same sequence — two posix.c:1032:posix_sock_create connect() failures (errno = 111), an nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock connection error for tqpair=0xf83610 addr=10.0.0.2 port=4420, then "qpair failed and we were unable to recover it." — repeats continuously from 01:51:26.463 through 01:51:26.499 ...]
00:30:13.610 [2024-07-23 01:51:26.499499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.610 [2024-07-23 01:51:26.499668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.610 [2024-07-23 01:51:26.499712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.610 qpair failed and we were unable to recover it.
00:30:13.610 [2024-07-23 01:51:26.499922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.610 [2024-07-23 01:51:26.500071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.610 [2024-07-23 01:51:26.500103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.610 qpair failed and we were unable to recover it. 00:30:13.610 [2024-07-23 01:51:26.500455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.610 [2024-07-23 01:51:26.500684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.610 [2024-07-23 01:51:26.500710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.610 qpair failed and we were unable to recover it. 00:30:13.610 [2024-07-23 01:51:26.500872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.610 [2024-07-23 01:51:26.501093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.610 [2024-07-23 01:51:26.501121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.610 qpair failed and we were unable to recover it. 00:30:13.610 [2024-07-23 01:51:26.501308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.610 [2024-07-23 01:51:26.501518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.610 [2024-07-23 01:51:26.501547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.610 qpair failed and we were unable to recover it. 
00:30:13.610 [2024-07-23 01:51:26.501726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.610 [2024-07-23 01:51:26.501929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.610 [2024-07-23 01:51:26.501974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.610 qpair failed and we were unable to recover it. 00:30:13.610 [2024-07-23 01:51:26.502247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.610 [2024-07-23 01:51:26.502429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.610 [2024-07-23 01:51:26.502457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.610 qpair failed and we were unable to recover it. 00:30:13.610 [2024-07-23 01:51:26.502694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.610 [2024-07-23 01:51:26.502857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.610 [2024-07-23 01:51:26.502883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.610 qpair failed and we were unable to recover it. 00:30:13.610 [2024-07-23 01:51:26.503042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.610 [2024-07-23 01:51:26.503248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.610 [2024-07-23 01:51:26.503276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.610 qpair failed and we were unable to recover it. 
00:30:13.610 [2024-07-23 01:51:26.503485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.610 [2024-07-23 01:51:26.503632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.610 [2024-07-23 01:51:26.503674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.610 qpair failed and we were unable to recover it. 00:30:13.610 [2024-07-23 01:51:26.503814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.610 [2024-07-23 01:51:26.503950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.610 [2024-07-23 01:51:26.503974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.610 qpair failed and we were unable to recover it. 00:30:13.610 [2024-07-23 01:51:26.504154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.610 [2024-07-23 01:51:26.504303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.610 [2024-07-23 01:51:26.504331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.610 qpair failed and we were unable to recover it. 00:30:13.610 [2024-07-23 01:51:26.504524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.610 [2024-07-23 01:51:26.504781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.610 [2024-07-23 01:51:26.504833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.610 qpair failed and we were unable to recover it. 
00:30:13.610 [2024-07-23 01:51:26.505055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.610 [2024-07-23 01:51:26.505302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.610 [2024-07-23 01:51:26.505362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.610 qpair failed and we were unable to recover it. 00:30:13.610 [2024-07-23 01:51:26.505568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.610 [2024-07-23 01:51:26.505793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.610 [2024-07-23 01:51:26.505823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.610 qpair failed and we were unable to recover it. 00:30:13.610 [2024-07-23 01:51:26.506014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.610 [2024-07-23 01:51:26.506219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.610 [2024-07-23 01:51:26.506265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.610 qpair failed and we were unable to recover it. 00:30:13.610 [2024-07-23 01:51:26.506463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.610 [2024-07-23 01:51:26.506674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.610 [2024-07-23 01:51:26.506700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.610 qpair failed and we were unable to recover it. 
00:30:13.610 [2024-07-23 01:51:26.506872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.610 [2024-07-23 01:51:26.507029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.610 [2024-07-23 01:51:26.507056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.610 qpair failed and we were unable to recover it. 00:30:13.610 [2024-07-23 01:51:26.507204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.610 [2024-07-23 01:51:26.507353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.610 [2024-07-23 01:51:26.507395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.610 qpair failed and we were unable to recover it. 00:30:13.610 [2024-07-23 01:51:26.507618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.610 [2024-07-23 01:51:26.507805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.610 [2024-07-23 01:51:26.507830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.610 qpair failed and we were unable to recover it. 00:30:13.610 [2024-07-23 01:51:26.508019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.610 [2024-07-23 01:51:26.508228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.610 [2024-07-23 01:51:26.508255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.610 qpair failed and we were unable to recover it. 
00:30:13.610 [2024-07-23 01:51:26.508433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.610 [2024-07-23 01:51:26.508571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.610 [2024-07-23 01:51:26.508597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.610 qpair failed and we were unable to recover it. 00:30:13.610 [2024-07-23 01:51:26.508766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.610 [2024-07-23 01:51:26.509027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.610 [2024-07-23 01:51:26.509087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.610 qpair failed and we were unable to recover it. 00:30:13.610 [2024-07-23 01:51:26.509296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.610 [2024-07-23 01:51:26.509453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.610 [2024-07-23 01:51:26.509481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.610 qpair failed and we were unable to recover it. 00:30:13.610 [2024-07-23 01:51:26.509684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.610 [2024-07-23 01:51:26.509865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.610 [2024-07-23 01:51:26.509899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.610 qpair failed and we were unable to recover it. 
00:30:13.610 [2024-07-23 01:51:26.510104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.610 [2024-07-23 01:51:26.510312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.610 [2024-07-23 01:51:26.510339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.610 qpair failed and we were unable to recover it. 00:30:13.610 [2024-07-23 01:51:26.510548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.610 [2024-07-23 01:51:26.510743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.610 [2024-07-23 01:51:26.510769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.610 qpair failed and we were unable to recover it. 00:30:13.610 [2024-07-23 01:51:26.510917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.610 [2024-07-23 01:51:26.511124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.610 [2024-07-23 01:51:26.511151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.610 qpair failed and we were unable to recover it. 00:30:13.610 [2024-07-23 01:51:26.511360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.610 [2024-07-23 01:51:26.511541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.610 [2024-07-23 01:51:26.511569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.610 qpair failed and we were unable to recover it. 
00:30:13.610 [2024-07-23 01:51:26.511741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.610 [2024-07-23 01:51:26.511954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.610 [2024-07-23 01:51:26.511982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.610 qpair failed and we were unable to recover it. 00:30:13.610 [2024-07-23 01:51:26.512249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.610 [2024-07-23 01:51:26.512494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.610 [2024-07-23 01:51:26.512546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.610 qpair failed and we were unable to recover it. 00:30:13.610 [2024-07-23 01:51:26.512739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.610 [2024-07-23 01:51:26.512918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.611 [2024-07-23 01:51:26.512947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.611 qpair failed and we were unable to recover it. 00:30:13.611 [2024-07-23 01:51:26.513154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.611 [2024-07-23 01:51:26.513362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.611 [2024-07-23 01:51:26.513387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.611 qpair failed and we were unable to recover it. 
00:30:13.611 [2024-07-23 01:51:26.513620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.611 [2024-07-23 01:51:26.513797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.611 [2024-07-23 01:51:26.513823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.611 qpair failed and we were unable to recover it. 00:30:13.611 [2024-07-23 01:51:26.514028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.611 [2024-07-23 01:51:26.514195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.611 [2024-07-23 01:51:26.514237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.611 qpair failed and we were unable to recover it. 00:30:13.611 [2024-07-23 01:51:26.514423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.611 [2024-07-23 01:51:26.514574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.611 [2024-07-23 01:51:26.514602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.611 qpair failed and we were unable to recover it. 00:30:13.611 [2024-07-23 01:51:26.514816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.611 [2024-07-23 01:51:26.515002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.611 [2024-07-23 01:51:26.515030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.611 qpair failed and we were unable to recover it. 
00:30:13.611 [2024-07-23 01:51:26.515219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.611 [2024-07-23 01:51:26.515561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.611 [2024-07-23 01:51:26.515610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.611 qpair failed and we were unable to recover it. 00:30:13.611 [2024-07-23 01:51:26.515845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.611 [2024-07-23 01:51:26.516008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.611 [2024-07-23 01:51:26.516033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.611 qpair failed and we were unable to recover it. 00:30:13.611 [2024-07-23 01:51:26.516236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.611 [2024-07-23 01:51:26.516435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.611 [2024-07-23 01:51:26.516460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.611 qpair failed and we were unable to recover it. 00:30:13.611 [2024-07-23 01:51:26.516638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.611 [2024-07-23 01:51:26.516828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.611 [2024-07-23 01:51:26.516857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.611 qpair failed and we were unable to recover it. 
00:30:13.611 [2024-07-23 01:51:26.517061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.611 [2024-07-23 01:51:26.517335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.611 [2024-07-23 01:51:26.517380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.611 qpair failed and we were unable to recover it. 00:30:13.611 [2024-07-23 01:51:26.517595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.611 [2024-07-23 01:51:26.517763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.611 [2024-07-23 01:51:26.517805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.611 qpair failed and we were unable to recover it. 00:30:13.611 [2024-07-23 01:51:26.517998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.611 [2024-07-23 01:51:26.518196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.611 [2024-07-23 01:51:26.518240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.611 qpair failed and we were unable to recover it. 00:30:13.611 [2024-07-23 01:51:26.518425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.611 [2024-07-23 01:51:26.518583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.611 [2024-07-23 01:51:26.518608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.611 qpair failed and we were unable to recover it. 
00:30:13.611 [2024-07-23 01:51:26.518815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.611 [2024-07-23 01:51:26.518982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.611 [2024-07-23 01:51:26.519007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.611 qpair failed and we were unable to recover it. 00:30:13.611 [2024-07-23 01:51:26.519140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.611 [2024-07-23 01:51:26.519352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.611 [2024-07-23 01:51:26.519404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.611 qpair failed and we were unable to recover it. 00:30:13.611 [2024-07-23 01:51:26.519619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.611 [2024-07-23 01:51:26.519771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.611 [2024-07-23 01:51:26.519799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.611 qpair failed and we were unable to recover it. 00:30:13.611 [2024-07-23 01:51:26.519983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.611 [2024-07-23 01:51:26.520257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.611 [2024-07-23 01:51:26.520304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.611 qpair failed and we were unable to recover it. 
00:30:13.611 [2024-07-23 01:51:26.520519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.611 [2024-07-23 01:51:26.520709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.611 [2024-07-23 01:51:26.520737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.611 qpair failed and we were unable to recover it. 00:30:13.611 [2024-07-23 01:51:26.520923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.611 [2024-07-23 01:51:26.521126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.611 [2024-07-23 01:51:26.521154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.611 qpair failed and we were unable to recover it. 00:30:13.611 [2024-07-23 01:51:26.521308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.611 [2024-07-23 01:51:26.521487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.611 [2024-07-23 01:51:26.521515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.611 qpair failed and we were unable to recover it. 00:30:13.611 [2024-07-23 01:51:26.521703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.611 [2024-07-23 01:51:26.521939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.611 [2024-07-23 01:51:26.522001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.611 qpair failed and we were unable to recover it. 
00:30:13.611 [2024-07-23 01:51:26.522211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.611 [2024-07-23 01:51:26.522360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.611 [2024-07-23 01:51:26.522390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.611 qpair failed and we were unable to recover it. 00:30:13.611 [2024-07-23 01:51:26.522573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.611 [2024-07-23 01:51:26.522754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.611 [2024-07-23 01:51:26.522782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.611 qpair failed and we were unable to recover it. 00:30:13.611 [2024-07-23 01:51:26.523000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.611 [2024-07-23 01:51:26.523171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.611 [2024-07-23 01:51:26.523195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.611 qpair failed and we were unable to recover it. 00:30:13.611 [2024-07-23 01:51:26.523388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.611 [2024-07-23 01:51:26.523595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.611 [2024-07-23 01:51:26.523629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.611 qpair failed and we were unable to recover it. 
00:30:13.611 [2024-07-23 01:51:26.523816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.611 [2024-07-23 01:51:26.524020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.611 [2024-07-23 01:51:26.524073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.611 qpair failed and we were unable to recover it. 00:30:13.611 [2024-07-23 01:51:26.524345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.611 [2024-07-23 01:51:26.524525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.611 [2024-07-23 01:51:26.524551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.611 qpair failed and we were unable to recover it. 00:30:13.611 [2024-07-23 01:51:26.524725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.611 [2024-07-23 01:51:26.524899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.611 [2024-07-23 01:51:26.524927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.611 qpair failed and we were unable to recover it. 00:30:13.611 [2024-07-23 01:51:26.525099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.611 [2024-07-23 01:51:26.525317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.611 [2024-07-23 01:51:26.525344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.611 qpair failed and we were unable to recover it. 
00:30:13.611 [2024-07-23 01:51:26.525509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.611 [2024-07-23 01:51:26.525670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.611 [2024-07-23 01:51:26.525696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.611 qpair failed and we were unable to recover it. 00:30:13.611 [2024-07-23 01:51:26.525877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.611 [2024-07-23 01:51:26.526060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.611 [2024-07-23 01:51:26.526087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.611 qpair failed and we were unable to recover it. 00:30:13.611 [2024-07-23 01:51:26.526280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.611 [2024-07-23 01:51:26.526445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.611 [2024-07-23 01:51:26.526470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.611 qpair failed and we were unable to recover it. 00:30:13.611 [2024-07-23 01:51:26.526665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.611 [2024-07-23 01:51:26.526876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.611 [2024-07-23 01:51:26.526903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.611 qpair failed and we were unable to recover it. 
00:30:13.611 [2024-07-23 01:51:26.527081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.611 [2024-07-23 01:51:26.527213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.611 [2024-07-23 01:51:26.527258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.611 qpair failed and we were unable to recover it. 00:30:13.611 [2024-07-23 01:51:26.527409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.611 [2024-07-23 01:51:26.527581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.611 [2024-07-23 01:51:26.527608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.611 qpair failed and we were unable to recover it. 00:30:13.611 [2024-07-23 01:51:26.527823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.611 [2024-07-23 01:51:26.528036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.611 [2024-07-23 01:51:26.528105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.611 qpair failed and we were unable to recover it. 00:30:13.611 [2024-07-23 01:51:26.528321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.611 [2024-07-23 01:51:26.528537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.611 [2024-07-23 01:51:26.528565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.611 qpair failed and we were unable to recover it. 
00:30:13.611 [2024-07-23 01:51:26.528767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.611 [2024-07-23 01:51:26.528961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.611 [2024-07-23 01:51:26.529014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.611 qpair failed and we were unable to recover it. 00:30:13.611 [2024-07-23 01:51:26.529332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.611 [2024-07-23 01:51:26.529580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.611 [2024-07-23 01:51:26.529608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.611 qpair failed and we were unable to recover it. 00:30:13.611 [2024-07-23 01:51:26.529787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.612 [2024-07-23 01:51:26.529956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.612 [2024-07-23 01:51:26.529981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.612 qpair failed and we were unable to recover it. 00:30:13.612 [2024-07-23 01:51:26.530199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.612 [2024-07-23 01:51:26.530379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.612 [2024-07-23 01:51:26.530406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.612 qpair failed and we were unable to recover it. 
00:30:13.612 [2024-07-23 01:51:26.530585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.612 [2024-07-23 01:51:26.530732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.612 [2024-07-23 01:51:26.530757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.612 qpair failed and we were unable to recover it. 00:30:13.612 [2024-07-23 01:51:26.530966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.612 [2024-07-23 01:51:26.531209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.612 [2024-07-23 01:51:26.531234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.612 qpair failed and we were unable to recover it. 00:30:13.612 [2024-07-23 01:51:26.531400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.612 [2024-07-23 01:51:26.531567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.612 [2024-07-23 01:51:26.531591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.612 qpair failed and we were unable to recover it. 00:30:13.612 [2024-07-23 01:51:26.531809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.612 [2024-07-23 01:51:26.531956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.612 [2024-07-23 01:51:26.531983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.612 qpair failed and we were unable to recover it. 
00:30:13.612 [2024-07-23 01:51:26.532190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.612 [2024-07-23 01:51:26.532401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.612 [2024-07-23 01:51:26.532429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.612 qpair failed and we were unable to recover it. 00:30:13.612 [2024-07-23 01:51:26.532611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.612 [2024-07-23 01:51:26.532799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.612 [2024-07-23 01:51:26.532827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.612 qpair failed and we were unable to recover it. 00:30:13.612 [2024-07-23 01:51:26.533008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.612 [2024-07-23 01:51:26.533224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.612 [2024-07-23 01:51:26.533250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.612 qpair failed and we were unable to recover it. 00:30:13.612 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 44: 3907767 Killed "${NVMF_APP[@]}" "$@" 00:30:13.612 [2024-07-23 01:51:26.533442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.612 [2024-07-23 01:51:26.533624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.612 [2024-07-23 01:51:26.533655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.612 qpair failed and we were unable to recover it. 
00:30:13.612 01:51:26 -- host/target_disconnect.sh@56 -- # disconnect_init 10.0.0.2 00:30:13.612 01:51:26 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:30:13.612 [2024-07-23 01:51:26.533846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.612 01:51:26 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:30:13.612 [2024-07-23 01:51:26.534014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.612 [2024-07-23 01:51:26.534039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.612 qpair failed and we were unable to recover it. 00:30:13.612 01:51:26 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:13.612 01:51:26 -- common/autotest_common.sh@10 -- # set +x 00:30:13.612 [2024-07-23 01:51:26.534390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.612 [2024-07-23 01:51:26.535464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.612 [2024-07-23 01:51:26.535513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.612 qpair failed and we were unable to recover it. 00:30:13.612 [2024-07-23 01:51:26.535690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.612 [2024-07-23 01:51:26.535896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.612 [2024-07-23 01:51:26.535930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.612 qpair failed and we were unable to recover it. 
00:30:13.612 [2024-07-23 01:51:26.536124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.612 [2024-07-23 01:51:26.536352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.612 [2024-07-23 01:51:26.536409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.612 qpair failed and we were unable to recover it. 00:30:13.612 [2024-07-23 01:51:26.536623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.612 [2024-07-23 01:51:26.536795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.612 [2024-07-23 01:51:26.536820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.612 qpair failed and we were unable to recover it. 00:30:13.612 [2024-07-23 01:51:26.537001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.612 [2024-07-23 01:51:26.537288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.612 [2024-07-23 01:51:26.537316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.612 qpair failed and we were unable to recover it. 
00:30:13.612 01:51:26 -- nvmf/common.sh@469 -- # nvmfpid=3908465 00:30:13.612 [2024-07-23 01:51:26.537501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.612 01:51:26 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:30:13.612 01:51:26 -- nvmf/common.sh@470 -- # waitforlisten 3908465 00:30:13.612 [2024-07-23 01:51:26.537696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.612 [2024-07-23 01:51:26.537723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.612 qpair failed and we were unable to recover it. 00:30:13.612 01:51:26 -- common/autotest_common.sh@819 -- # '[' -z 3908465 ']' 00:30:13.612 [2024-07-23 01:51:26.537885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.612 01:51:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:13.612 [2024-07-23 01:51:26.538068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.612 01:51:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:13.612 [2024-07-23 01:51:26.538094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.612 qpair failed and we were unable to recover it. 00:30:13.612 01:51:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:13.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:13.612 [2024-07-23 01:51:26.538262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.612 01:51:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:13.612 01:51:26 -- common/autotest_common.sh@10 -- # set +x 00:30:13.612 [2024-07-23 01:51:26.538450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.612 [2024-07-23 01:51:26.538478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.612 qpair failed and we were unable to recover it. 00:30:13.612 [2024-07-23 01:51:26.538689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.612 [2024-07-23 01:51:26.538896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.612 [2024-07-23 01:51:26.538926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.612 qpair failed and we were unable to recover it. 00:30:13.612 [2024-07-23 01:51:26.539110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.612 [2024-07-23 01:51:26.539261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.612 [2024-07-23 01:51:26.539291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.612 qpair failed and we were unable to recover it. 00:30:13.612 [2024-07-23 01:51:26.539502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.612 [2024-07-23 01:51:26.539682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.612 [2024-07-23 01:51:26.539714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.612 qpair failed and we were unable to recover it. 
00:30:13.612 [2024-07-23 01:51:26.539882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.612 [2024-07-23 01:51:26.540093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.612 [2024-07-23 01:51:26.540122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.612 qpair failed and we were unable to recover it. 00:30:13.612 [2024-07-23 01:51:26.540297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.612 [2024-07-23 01:51:26.540478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.612 [2024-07-23 01:51:26.540506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.612 qpair failed and we were unable to recover it. 00:30:13.612 [2024-07-23 01:51:26.540687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.612 [2024-07-23 01:51:26.540855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.612 [2024-07-23 01:51:26.540881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.612 qpair failed and we were unable to recover it. 00:30:13.612 [2024-07-23 01:51:26.541197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.612 [2024-07-23 01:51:26.541446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.612 [2024-07-23 01:51:26.541474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.612 qpair failed and we were unable to recover it. 
00:30:13.612 [2024-07-23 01:51:26.541659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.612 [2024-07-23 01:51:26.541840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.612 [2024-07-23 01:51:26.541869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.612 qpair failed and we were unable to recover it. 00:30:13.612 [2024-07-23 01:51:26.542036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.612 [2024-07-23 01:51:26.542205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.612 [2024-07-23 01:51:26.542248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.612 qpair failed and we were unable to recover it. 00:30:13.612 [2024-07-23 01:51:26.542408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.612 [2024-07-23 01:51:26.542577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.612 [2024-07-23 01:51:26.542602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.612 qpair failed and we were unable to recover it. 00:30:13.612 [2024-07-23 01:51:26.542770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.612 [2024-07-23 01:51:26.542949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.612 [2024-07-23 01:51:26.542976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.612 qpair failed and we were unable to recover it. 
00:30:13.612 [2024-07-23 01:51:26.543228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.612 [2024-07-23 01:51:26.543536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.612 [2024-07-23 01:51:26.543597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.612 qpair failed and we were unable to recover it. 00:30:13.612 [2024-07-23 01:51:26.543780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.612 [2024-07-23 01:51:26.543943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.613 [2024-07-23 01:51:26.543969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.613 qpair failed and we were unable to recover it. 00:30:13.613 [2024-07-23 01:51:26.544176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.613 [2024-07-23 01:51:26.544336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.613 [2024-07-23 01:51:26.544361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.613 qpair failed and we were unable to recover it. 00:30:13.613 [2024-07-23 01:51:26.544494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.613 [2024-07-23 01:51:26.544658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.613 [2024-07-23 01:51:26.544685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.613 qpair failed and we were unable to recover it. 
00:30:13.613 [2024-07-23 01:51:26.544828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.613 [2024-07-23 01:51:26.545022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.613 [2024-07-23 01:51:26.545051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.613 qpair failed and we were unable to recover it. 00:30:13.613 [2024-07-23 01:51:26.545200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.613 [2024-07-23 01:51:26.545382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.613 [2024-07-23 01:51:26.545412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.613 qpair failed and we were unable to recover it. 00:30:13.613 [2024-07-23 01:51:26.545603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.613 [2024-07-23 01:51:26.545798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.613 [2024-07-23 01:51:26.545824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.613 qpair failed and we were unable to recover it. 00:30:13.613 [2024-07-23 01:51:26.546052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.613 [2024-07-23 01:51:26.546378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.613 [2024-07-23 01:51:26.546429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.613 qpair failed and we were unable to recover it. 
00:30:13.613 [2024-07-23 01:51:26.546610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.613 [2024-07-23 01:51:26.546787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.613 [2024-07-23 01:51:26.546814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.613 qpair failed and we were unable to recover it. 00:30:13.613 [2024-07-23 01:51:26.547025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.613 [2024-07-23 01:51:26.547237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.613 [2024-07-23 01:51:26.547262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.613 qpair failed and we were unable to recover it. 00:30:13.613 [2024-07-23 01:51:26.547436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.613 [2024-07-23 01:51:26.547619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.613 [2024-07-23 01:51:26.547648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.613 qpair failed and we were unable to recover it. 00:30:13.613 [2024-07-23 01:51:26.547799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.613 [2024-07-23 01:51:26.547980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.613 [2024-07-23 01:51:26.548008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.613 qpair failed and we were unable to recover it. 
00:30:13.613 [2024-07-23 01:51:26.548230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.613 [2024-07-23 01:51:26.548566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.613 [2024-07-23 01:51:26.548628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.613 qpair failed and we were unable to recover it. 00:30:13.613 [2024-07-23 01:51:26.548827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.613 [2024-07-23 01:51:26.549086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.613 [2024-07-23 01:51:26.549114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.613 qpair failed and we were unable to recover it. 00:30:13.613 [2024-07-23 01:51:26.549309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.613 [2024-07-23 01:51:26.549479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.613 [2024-07-23 01:51:26.549504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.613 qpair failed and we were unable to recover it. 00:30:13.613 [2024-07-23 01:51:26.549647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.613 [2024-07-23 01:51:26.549831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.613 [2024-07-23 01:51:26.549860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.613 qpair failed and we were unable to recover it. 
00:30:13.613 [2024-07-23 01:51:26.550021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.613 [2024-07-23 01:51:26.550177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.613 [2024-07-23 01:51:26.550207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.613 qpair failed and we were unable to recover it. 00:30:13.613 [2024-07-23 01:51:26.550407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.613 [2024-07-23 01:51:26.550565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.613 [2024-07-23 01:51:26.550591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.613 qpair failed and we were unable to recover it. 00:30:13.613 [2024-07-23 01:51:26.550737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.613 [2024-07-23 01:51:26.550901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.613 [2024-07-23 01:51:26.550927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.613 qpair failed and we were unable to recover it. 00:30:13.613 [2024-07-23 01:51:26.551117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.613 [2024-07-23 01:51:26.551444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.613 [2024-07-23 01:51:26.551497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.613 qpair failed and we were unable to recover it. 
00:30:13.613 [2024-07-23 01:51:26.551678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.613 [2024-07-23 01:51:26.551874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.613 [2024-07-23 01:51:26.551899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.613 qpair failed and we were unable to recover it. 00:30:13.613 [2024-07-23 01:51:26.552086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.613 [2024-07-23 01:51:26.552303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.613 [2024-07-23 01:51:26.552328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.613 qpair failed and we were unable to recover it. 00:30:13.613 [2024-07-23 01:51:26.552490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.613 [2024-07-23 01:51:26.552691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.613 [2024-07-23 01:51:26.552731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.613 qpair failed and we were unable to recover it. 00:30:13.613 [2024-07-23 01:51:26.552919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.613 [2024-07-23 01:51:26.553107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.613 [2024-07-23 01:51:26.553135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.613 qpair failed and we were unable to recover it. 
00:30:13.613 [2024-07-23 01:51:26.553292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.613 [2024-07-23 01:51:26.553480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.613 [2024-07-23 01:51:26.553506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.613 qpair failed and we were unable to recover it. 00:30:13.613 [2024-07-23 01:51:26.553670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.613 [2024-07-23 01:51:26.553826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.613 [2024-07-23 01:51:26.553855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.613 qpair failed and we were unable to recover it. 00:30:13.613 [2024-07-23 01:51:26.554054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.613 [2024-07-23 01:51:26.554241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.613 [2024-07-23 01:51:26.554267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.613 qpair failed and we were unable to recover it. 00:30:13.613 [2024-07-23 01:51:26.554456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.613 [2024-07-23 01:51:26.554666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.613 [2024-07-23 01:51:26.554695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.613 qpair failed and we were unable to recover it. 
00:30:13.613 [2024-07-23 01:51:26.554866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.613 [2024-07-23 01:51:26.555034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.613 [2024-07-23 01:51:26.555060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.613 qpair failed and we were unable to recover it. 00:30:13.613 [2024-07-23 01:51:26.555248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.613 [2024-07-23 01:51:26.555415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.613 [2024-07-23 01:51:26.555440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.613 qpair failed and we were unable to recover it. 00:30:13.613 [2024-07-23 01:51:26.555572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.613 [2024-07-23 01:51:26.555725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.613 [2024-07-23 01:51:26.555750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.613 qpair failed and we were unable to recover it. 00:30:13.613 [2024-07-23 01:51:26.555897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.613 [2024-07-23 01:51:26.556117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.613 [2024-07-23 01:51:26.556189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.613 qpair failed and we were unable to recover it. 
00:30:13.613 [2024-07-23 01:51:26.556407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.613 [2024-07-23 01:51:26.556601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.613 [2024-07-23 01:51:26.556633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.613 qpair failed and we were unable to recover it.
00:30:13.613 [2024-07-23 01:51:26.556829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.613 [2024-07-23 01:51:26.557066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.613 [2024-07-23 01:51:26.557119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.613 qpair failed and we were unable to recover it.
00:30:13.613 [2024-07-23 01:51:26.557334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.613 [2024-07-23 01:51:26.557506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.613 [2024-07-23 01:51:26.557531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.613 qpair failed and we were unable to recover it.
00:30:13.613 [2024-07-23 01:51:26.557724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.613 [2024-07-23 01:51:26.557885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.613 [2024-07-23 01:51:26.557913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.613 qpair failed and we were unable to recover it.
00:30:13.613 [2024-07-23 01:51:26.558093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.613 [2024-07-23 01:51:26.558345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.613 [2024-07-23 01:51:26.558409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.613 qpair failed and we were unable to recover it.
00:30:13.613 [2024-07-23 01:51:26.558569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.613 [2024-07-23 01:51:26.558782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.613 [2024-07-23 01:51:26.558811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.613 qpair failed and we were unable to recover it.
00:30:13.613 [2024-07-23 01:51:26.559033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.613 [2024-07-23 01:51:26.559222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.613 [2024-07-23 01:51:26.559247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.613 qpair failed and we were unable to recover it.
00:30:13.613 [2024-07-23 01:51:26.559404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.613 [2024-07-23 01:51:26.559541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.613 [2024-07-23 01:51:26.559566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.613 qpair failed and we were unable to recover it.
00:30:13.613 [2024-07-23 01:51:26.559709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.613 [2024-07-23 01:51:26.559848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.614 [2024-07-23 01:51:26.559874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.614 qpair failed and we were unable to recover it.
00:30:13.614 [2024-07-23 01:51:26.560017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.614 [2024-07-23 01:51:26.560234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.614 [2024-07-23 01:51:26.560262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.614 qpair failed and we were unable to recover it.
00:30:13.614 [2024-07-23 01:51:26.560447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.614 [2024-07-23 01:51:26.560659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.614 [2024-07-23 01:51:26.560687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.614 qpair failed and we were unable to recover it.
00:30:13.614 [2024-07-23 01:51:26.560911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.614 [2024-07-23 01:51:26.561149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.614 [2024-07-23 01:51:26.561177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.614 qpair failed and we were unable to recover it.
00:30:13.614 [2024-07-23 01:51:26.561361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.614 [2024-07-23 01:51:26.561550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.614 [2024-07-23 01:51:26.561576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.614 qpair failed and we were unable to recover it.
00:30:13.614 [2024-07-23 01:51:26.561754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.614 [2024-07-23 01:51:26.561891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.614 [2024-07-23 01:51:26.561933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.614 qpair failed and we were unable to recover it.
00:30:13.614 [2024-07-23 01:51:26.562149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.614 [2024-07-23 01:51:26.562338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.614 [2024-07-23 01:51:26.562366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.614 qpair failed and we were unable to recover it.
00:30:13.614 [2024-07-23 01:51:26.562574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.614 [2024-07-23 01:51:26.562761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.614 [2024-07-23 01:51:26.562790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.614 qpair failed and we were unable to recover it.
00:30:13.614 [2024-07-23 01:51:26.562979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.614 [2024-07-23 01:51:26.563140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.614 [2024-07-23 01:51:26.563182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.614 qpair failed and we were unable to recover it.
00:30:13.614 [2024-07-23 01:51:26.563363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.614 [2024-07-23 01:51:26.563527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.614 [2024-07-23 01:51:26.563553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.614 qpair failed and we were unable to recover it.
00:30:13.614 [2024-07-23 01:51:26.563716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.614 [2024-07-23 01:51:26.563927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.614 [2024-07-23 01:51:26.563956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.614 qpair failed and we were unable to recover it.
00:30:13.614 [2024-07-23 01:51:26.564170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.614 [2024-07-23 01:51:26.564518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.614 [2024-07-23 01:51:26.564565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.614 qpair failed and we were unable to recover it.
00:30:13.614 [2024-07-23 01:51:26.564764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.614 [2024-07-23 01:51:26.564938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.614 [2024-07-23 01:51:26.564966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.614 qpair failed and we were unable to recover it.
00:30:13.614 [2024-07-23 01:51:26.565153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.614 [2024-07-23 01:51:26.565334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.614 [2024-07-23 01:51:26.565362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.614 qpair failed and we were unable to recover it.
00:30:13.614 [2024-07-23 01:51:26.565580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.614 [2024-07-23 01:51:26.565780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.614 [2024-07-23 01:51:26.565810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.614 qpair failed and we were unable to recover it.
00:30:13.614 [2024-07-23 01:51:26.566025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.614 [2024-07-23 01:51:26.566355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.614 [2024-07-23 01:51:26.566413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.614 qpair failed and we were unable to recover it.
00:30:13.614 [2024-07-23 01:51:26.566571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.614 [2024-07-23 01:51:26.566756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.614 [2024-07-23 01:51:26.566786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.614 qpair failed and we were unable to recover it.
00:30:13.614 [2024-07-23 01:51:26.566997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.614 [2024-07-23 01:51:26.567205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.614 [2024-07-23 01:51:26.567234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.614 qpair failed and we were unable to recover it.
00:30:13.614 [2024-07-23 01:51:26.567448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.614 [2024-07-23 01:51:26.567609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.614 [2024-07-23 01:51:26.567642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.614 qpair failed and we were unable to recover it.
00:30:13.614 [2024-07-23 01:51:26.567822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.614 [2024-07-23 01:51:26.568123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.614 [2024-07-23 01:51:26.568188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.614 qpair failed and we were unable to recover it.
00:30:13.614 [2024-07-23 01:51:26.568414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.614 [2024-07-23 01:51:26.568548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.614 [2024-07-23 01:51:26.568573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.614 qpair failed and we were unable to recover it.
00:30:13.614 [2024-07-23 01:51:26.568729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.614 [2024-07-23 01:51:26.568915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.614 [2024-07-23 01:51:26.568944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.614 qpair failed and we were unable to recover it.
00:30:13.614 [2024-07-23 01:51:26.569160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.614 [2024-07-23 01:51:26.569480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.614 [2024-07-23 01:51:26.569536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.614 qpair failed and we were unable to recover it.
00:30:13.614 [2024-07-23 01:51:26.569754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.614 [2024-07-23 01:51:26.569903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.614 [2024-07-23 01:51:26.569930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.614 qpair failed and we were unable to recover it.
00:30:13.614 [2024-07-23 01:51:26.570149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.614 [2024-07-23 01:51:26.570280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.614 [2024-07-23 01:51:26.570305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.614 qpair failed and we were unable to recover it.
00:30:13.614 [2024-07-23 01:51:26.570465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.614 [2024-07-23 01:51:26.570618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.614 [2024-07-23 01:51:26.570647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.614 qpair failed and we were unable to recover it.
00:30:13.614 [2024-07-23 01:51:26.570813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.614 [2024-07-23 01:51:26.570973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.614 [2024-07-23 01:51:26.570998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.614 qpair failed and we were unable to recover it.
00:30:13.614 [2024-07-23 01:51:26.571166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.614 [2024-07-23 01:51:26.571479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.614 [2024-07-23 01:51:26.571541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.614 qpair failed and we were unable to recover it.
00:30:13.614 [2024-07-23 01:51:26.571728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.614 [2024-07-23 01:51:26.571907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.614 [2024-07-23 01:51:26.571936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.614 qpair failed and we were unable to recover it.
00:30:13.614 [2024-07-23 01:51:26.572116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.614 [2024-07-23 01:51:26.572283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.614 [2024-07-23 01:51:26.572308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.614 qpair failed and we were unable to recover it.
00:30:13.614 [2024-07-23 01:51:26.572472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.614 [2024-07-23 01:51:26.572702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.614 [2024-07-23 01:51:26.572732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.614 qpair failed and we were unable to recover it.
00:30:13.614 [2024-07-23 01:51:26.572921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.614 [2024-07-23 01:51:26.573085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.614 [2024-07-23 01:51:26.573110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.614 qpair failed and we were unable to recover it.
00:30:13.614 [2024-07-23 01:51:26.573299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.614 [2024-07-23 01:51:26.573484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.614 [2024-07-23 01:51:26.573512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.614 qpair failed and we were unable to recover it.
00:30:13.614 [2024-07-23 01:51:26.573702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.614 [2024-07-23 01:51:26.573898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.614 [2024-07-23 01:51:26.573927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.614 qpair failed and we were unable to recover it.
00:30:13.614 [2024-07-23 01:51:26.574122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.614 [2024-07-23 01:51:26.574416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.614 [2024-07-23 01:51:26.574480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.614 qpair failed and we were unable to recover it.
00:30:13.614 [2024-07-23 01:51:26.574668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.614 [2024-07-23 01:51:26.574821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.614 [2024-07-23 01:51:26.574849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.614 qpair failed and we were unable to recover it.
00:30:13.614 [2024-07-23 01:51:26.575050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.614 [2024-07-23 01:51:26.575218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.614 [2024-07-23 01:51:26.575258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.614 qpair failed and we were unable to recover it.
00:30:13.614 [2024-07-23 01:51:26.575413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.614 [2024-07-23 01:51:26.575624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.614 [2024-07-23 01:51:26.575649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.614 qpair failed and we were unable to recover it.
00:30:13.614 [2024-07-23 01:51:26.575788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.614 [2024-07-23 01:51:26.575921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.614 [2024-07-23 01:51:26.575963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.614 qpair failed and we were unable to recover it.
00:30:13.614 [2024-07-23 01:51:26.576249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.614 [2024-07-23 01:51:26.576407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.614 [2024-07-23 01:51:26.576435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.614 qpair failed and we were unable to recover it.
00:30:13.615 [2024-07-23 01:51:26.576633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.615 [2024-07-23 01:51:26.576815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.615 [2024-07-23 01:51:26.576845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.615 qpair failed and we were unable to recover it.
00:30:13.615 [2024-07-23 01:51:26.576997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.615 [2024-07-23 01:51:26.577235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.615 [2024-07-23 01:51:26.577288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.615 qpair failed and we were unable to recover it.
00:30:13.615 [2024-07-23 01:51:26.577446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.615 [2024-07-23 01:51:26.577638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.615 [2024-07-23 01:51:26.577683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.615 qpair failed and we were unable to recover it.
00:30:13.615 [2024-07-23 01:51:26.577879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.615 [2024-07-23 01:51:26.578134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.615 [2024-07-23 01:51:26.578161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.615 qpair failed and we were unable to recover it.
00:30:13.615 [2024-07-23 01:51:26.578355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.615 [2024-07-23 01:51:26.578561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.615 [2024-07-23 01:51:26.578589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.615 qpair failed and we were unable to recover it.
00:30:13.615 [2024-07-23 01:51:26.578788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.615 [2024-07-23 01:51:26.579009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.615 [2024-07-23 01:51:26.579038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.615 qpair failed and we were unable to recover it.
00:30:13.615 [2024-07-23 01:51:26.579209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.615 [2024-07-23 01:51:26.579392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.615 [2024-07-23 01:51:26.579417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.615 qpair failed and we were unable to recover it.
00:30:13.615 [2024-07-23 01:51:26.579602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.615 [2024-07-23 01:51:26.579801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.615 [2024-07-23 01:51:26.579829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.615 qpair failed and we were unable to recover it.
00:30:13.615 [2024-07-23 01:51:26.580018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.615 [2024-07-23 01:51:26.580186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.615 [2024-07-23 01:51:26.580226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.615 qpair failed and we were unable to recover it.
00:30:13.615 [2024-07-23 01:51:26.580378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.615 [2024-07-23 01:51:26.580541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.615 [2024-07-23 01:51:26.580566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.615 qpair failed and we were unable to recover it.
00:30:13.615 [2024-07-23 01:51:26.580658] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
00:30:13.615 [2024-07-23 01:51:26.580731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.615 [2024-07-23 01:51:26.580734] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:30:13.615 [2024-07-23 01:51:26.580899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.615 [2024-07-23 01:51:26.580923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.615 qpair failed and we were unable to recover it.
00:30:13.615 [2024-07-23 01:51:26.581091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.615 [2024-07-23 01:51:26.581484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.615 [2024-07-23 01:51:26.581536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.615 qpair failed and we were unable to recover it.
00:30:13.615 [2024-07-23 01:51:26.581699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.615 [2024-07-23 01:51:26.581900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.615 [2024-07-23 01:51:26.581929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.615 qpair failed and we were unable to recover it.
00:30:13.615 [2024-07-23 01:51:26.582188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.615 [2024-07-23 01:51:26.582499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.615 [2024-07-23 01:51:26.582551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.615 qpair failed and we were unable to recover it.
00:30:13.615 [2024-07-23 01:51:26.582744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.615 [2024-07-23 01:51:26.582921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.615 [2024-07-23 01:51:26.582950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.615 qpair failed and we were unable to recover it.
00:30:13.615 [2024-07-23 01:51:26.583217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.615 [2024-07-23 01:51:26.583464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.615 [2024-07-23 01:51:26.583526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.615 qpair failed and we were unable to recover it.
00:30:13.615 [2024-07-23 01:51:26.583717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.615 [2024-07-23 01:51:26.583913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.615 [2024-07-23 01:51:26.583938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.615 qpair failed and we were unable to recover it.
00:30:13.615 [2024-07-23 01:51:26.584132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.615 [2024-07-23 01:51:26.584398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.615 [2024-07-23 01:51:26.584426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.615 qpair failed and we were unable to recover it.
00:30:13.615 [2024-07-23 01:51:26.584633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.615 [2024-07-23 01:51:26.584816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.615 [2024-07-23 01:51:26.584845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.615 qpair failed and we were unable to recover it.
00:30:13.615 [2024-07-23 01:51:26.585204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.615 [2024-07-23 01:51:26.585549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.615 [2024-07-23 01:51:26.585601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.615 qpair failed and we were unable to recover it.
00:30:13.615 [2024-07-23 01:51:26.585787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.615 [2024-07-23 01:51:26.586021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.615 [2024-07-23 01:51:26.586069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.615 qpair failed and we were unable to recover it.
00:30:13.615 [2024-07-23 01:51:26.586281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.615 [2024-07-23 01:51:26.586665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.615 [2024-07-23 01:51:26.586696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.615 qpair failed and we were unable to recover it.
00:30:13.615 [2024-07-23 01:51:26.586889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.615 [2024-07-23 01:51:26.587064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.615 [2024-07-23 01:51:26.587092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.615 qpair failed and we were unable to recover it.
00:30:13.615 [2024-07-23 01:51:26.587350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.615 [2024-07-23 01:51:26.587643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.615 [2024-07-23 01:51:26.587672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.615 qpair failed and we were unable to recover it. 00:30:13.615 [2024-07-23 01:51:26.587928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.615 [2024-07-23 01:51:26.588284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.615 [2024-07-23 01:51:26.588333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.615 qpair failed and we were unable to recover it. 00:30:13.615 [2024-07-23 01:51:26.588551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.615 [2024-07-23 01:51:26.588759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.615 [2024-07-23 01:51:26.588789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.615 qpair failed and we were unable to recover it. 00:30:13.615 [2024-07-23 01:51:26.589004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.615 [2024-07-23 01:51:26.589238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.615 [2024-07-23 01:51:26.589290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.615 qpair failed and we were unable to recover it. 
00:30:13.615 [2024-07-23 01:51:26.589520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.615 [2024-07-23 01:51:26.589690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.615 [2024-07-23 01:51:26.589717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.615 qpair failed and we were unable to recover it. 00:30:13.615 [2024-07-23 01:51:26.589875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.615 [2024-07-23 01:51:26.590088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.615 [2024-07-23 01:51:26.590114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.615 qpair failed and we were unable to recover it. 00:30:13.615 [2024-07-23 01:51:26.590370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.615 [2024-07-23 01:51:26.590597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.615 [2024-07-23 01:51:26.590678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.615 qpair failed and we were unable to recover it. 00:30:13.615 [2024-07-23 01:51:26.590830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.615 [2024-07-23 01:51:26.590976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.615 [2024-07-23 01:51:26.591002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.615 qpair failed and we were unable to recover it. 
00:30:13.615 [2024-07-23 01:51:26.591142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.616 [2024-07-23 01:51:26.591341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.616 [2024-07-23 01:51:26.591368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.616 qpair failed and we were unable to recover it. 00:30:13.616 [2024-07-23 01:51:26.591584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.616 [2024-07-23 01:51:26.591746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.616 [2024-07-23 01:51:26.591776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.616 qpair failed and we were unable to recover it. 00:30:13.616 [2024-07-23 01:51:26.591990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.616 [2024-07-23 01:51:26.592216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.616 [2024-07-23 01:51:26.592281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.616 qpair failed and we were unable to recover it. 00:30:13.616 [2024-07-23 01:51:26.592440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.616 [2024-07-23 01:51:26.592653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.616 [2024-07-23 01:51:26.592682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.616 qpair failed and we were unable to recover it. 
00:30:13.616 [2024-07-23 01:51:26.592887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.616 [2024-07-23 01:51:26.593084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.616 [2024-07-23 01:51:26.593144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.616 qpair failed and we were unable to recover it. 00:30:13.616 [2024-07-23 01:51:26.593299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.616 [2024-07-23 01:51:26.593473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.616 [2024-07-23 01:51:26.593501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.616 qpair failed and we were unable to recover it. 00:30:13.616 [2024-07-23 01:51:26.593700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.616 [2024-07-23 01:51:26.594072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.616 [2024-07-23 01:51:26.594122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.616 qpair failed and we were unable to recover it. 00:30:13.616 [2024-07-23 01:51:26.594341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.616 [2024-07-23 01:51:26.594485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.616 [2024-07-23 01:51:26.594513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.616 qpair failed and we were unable to recover it. 
00:30:13.616 [2024-07-23 01:51:26.594753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.616 [2024-07-23 01:51:26.594897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.616 [2024-07-23 01:51:26.594925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.616 qpair failed and we were unable to recover it. 00:30:13.616 [2024-07-23 01:51:26.595133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.616 [2024-07-23 01:51:26.595311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.616 [2024-07-23 01:51:26.595338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.616 qpair failed and we were unable to recover it. 00:30:13.616 [2024-07-23 01:51:26.595478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.616 [2024-07-23 01:51:26.595658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.616 [2024-07-23 01:51:26.595687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.616 qpair failed and we were unable to recover it. 00:30:13.616 [2024-07-23 01:51:26.595881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.616 [2024-07-23 01:51:26.596055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.616 [2024-07-23 01:51:26.596083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.616 qpair failed and we were unable to recover it. 
00:30:13.616 [2024-07-23 01:51:26.596360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.616 [2024-07-23 01:51:26.596576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.616 [2024-07-23 01:51:26.596601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.616 qpair failed and we were unable to recover it. 00:30:13.616 [2024-07-23 01:51:26.596825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.616 [2024-07-23 01:51:26.597171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.616 [2024-07-23 01:51:26.597227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.616 qpair failed and we were unable to recover it. 00:30:13.616 [2024-07-23 01:51:26.597441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.616 [2024-07-23 01:51:26.597622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.616 [2024-07-23 01:51:26.597651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.616 qpair failed and we were unable to recover it. 00:30:13.616 [2024-07-23 01:51:26.597843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.616 [2024-07-23 01:51:26.598045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.616 [2024-07-23 01:51:26.598073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.616 qpair failed and we were unable to recover it. 
00:30:13.616 [2024-07-23 01:51:26.598393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.616 [2024-07-23 01:51:26.598654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.616 [2024-07-23 01:51:26.598683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.616 qpair failed and we were unable to recover it. 00:30:13.616 [2024-07-23 01:51:26.598871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.616 [2024-07-23 01:51:26.599061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.616 [2024-07-23 01:51:26.599087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.616 qpair failed and we were unable to recover it. 00:30:13.616 [2024-07-23 01:51:26.599283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.616 [2024-07-23 01:51:26.599502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.616 [2024-07-23 01:51:26.599528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.616 qpair failed and we were unable to recover it. 00:30:13.616 [2024-07-23 01:51:26.599700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.616 [2024-07-23 01:51:26.599860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.616 [2024-07-23 01:51:26.599889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.616 qpair failed and we were unable to recover it. 
00:30:13.616 [2024-07-23 01:51:26.600186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.616 [2024-07-23 01:51:26.600431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.616 [2024-07-23 01:51:26.600459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.616 qpair failed and we were unable to recover it. 00:30:13.616 [2024-07-23 01:51:26.600638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.616 [2024-07-23 01:51:26.600850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.616 [2024-07-23 01:51:26.600876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.616 qpair failed and we were unable to recover it. 00:30:13.616 [2024-07-23 01:51:26.601088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.616 [2024-07-23 01:51:26.601402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.616 [2024-07-23 01:51:26.601458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.616 qpair failed and we were unable to recover it. 00:30:13.616 [2024-07-23 01:51:26.601685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.616 [2024-07-23 01:51:26.601838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.616 [2024-07-23 01:51:26.601867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.616 qpair failed and we were unable to recover it. 
00:30:13.616 [2024-07-23 01:51:26.602040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.616 [2024-07-23 01:51:26.602407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.616 [2024-07-23 01:51:26.602457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.616 qpair failed and we were unable to recover it. 00:30:13.616 [2024-07-23 01:51:26.602677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.616 [2024-07-23 01:51:26.602862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.616 [2024-07-23 01:51:26.602887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.616 qpair failed and we were unable to recover it. 00:30:13.616 [2024-07-23 01:51:26.603047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.616 [2024-07-23 01:51:26.603227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.616 [2024-07-23 01:51:26.603255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.616 qpair failed and we were unable to recover it. 00:30:13.616 [2024-07-23 01:51:26.603438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.616 [2024-07-23 01:51:26.603628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.616 [2024-07-23 01:51:26.603671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.616 qpair failed and we were unable to recover it. 
00:30:13.616 [2024-07-23 01:51:26.603853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.616 [2024-07-23 01:51:26.604036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.616 [2024-07-23 01:51:26.604064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.616 qpair failed and we were unable to recover it. 00:30:13.616 [2024-07-23 01:51:26.604211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.616 [2024-07-23 01:51:26.604425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.616 [2024-07-23 01:51:26.604450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.616 qpair failed and we were unable to recover it. 00:30:13.616 [2024-07-23 01:51:26.604639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.616 [2024-07-23 01:51:26.604832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.616 [2024-07-23 01:51:26.604860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.616 qpair failed and we were unable to recover it. 00:30:13.616 [2024-07-23 01:51:26.605046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.616 [2024-07-23 01:51:26.605253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.616 [2024-07-23 01:51:26.605281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.616 qpair failed and we were unable to recover it. 
00:30:13.616 [2024-07-23 01:51:26.605450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.616 [2024-07-23 01:51:26.605600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.616 [2024-07-23 01:51:26.605637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.616 qpair failed and we were unable to recover it. 00:30:13.616 [2024-07-23 01:51:26.605791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.616 [2024-07-23 01:51:26.606029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.616 [2024-07-23 01:51:26.606054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.616 qpair failed and we were unable to recover it. 00:30:13.616 [2024-07-23 01:51:26.606225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.616 [2024-07-23 01:51:26.606415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.616 [2024-07-23 01:51:26.606442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.616 qpair failed and we were unable to recover it. 00:30:13.616 [2024-07-23 01:51:26.606632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.616 [2024-07-23 01:51:26.606778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.616 [2024-07-23 01:51:26.606803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.616 qpair failed and we were unable to recover it. 
00:30:13.616 [2024-07-23 01:51:26.606998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.616 [2024-07-23 01:51:26.607181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.616 [2024-07-23 01:51:26.607208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.616 qpair failed and we were unable to recover it. 00:30:13.616 [2024-07-23 01:51:26.607400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.616 [2024-07-23 01:51:26.607558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.616 [2024-07-23 01:51:26.607583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.616 qpair failed and we were unable to recover it. 00:30:13.616 [2024-07-23 01:51:26.607756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.616 [2024-07-23 01:51:26.607903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.616 [2024-07-23 01:51:26.607930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.616 qpair failed and we were unable to recover it. 00:30:13.616 [2024-07-23 01:51:26.608119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.616 [2024-07-23 01:51:26.608309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.617 [2024-07-23 01:51:26.608335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.617 qpair failed and we were unable to recover it. 
00:30:13.617 [2024-07-23 01:51:26.608491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.617 [2024-07-23 01:51:26.608711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.617 [2024-07-23 01:51:26.608740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.617 qpair failed and we were unable to recover it. 00:30:13.617 [2024-07-23 01:51:26.608936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.617 [2024-07-23 01:51:26.609092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.617 [2024-07-23 01:51:26.609120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.617 qpair failed and we were unable to recover it. 00:30:13.617 [2024-07-23 01:51:26.609269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.617 [2024-07-23 01:51:26.609449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.617 [2024-07-23 01:51:26.609478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.617 qpair failed and we were unable to recover it. 00:30:13.617 [2024-07-23 01:51:26.609662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.617 [2024-07-23 01:51:26.609846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.617 [2024-07-23 01:51:26.609878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.617 qpair failed and we were unable to recover it. 
00:30:13.617 [2024-07-23 01:51:26.610079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.617 [2024-07-23 01:51:26.610248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.617 [2024-07-23 01:51:26.610274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.617 qpair failed and we were unable to recover it. 00:30:13.617 [2024-07-23 01:51:26.610435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.617 [2024-07-23 01:51:26.610569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.617 [2024-07-23 01:51:26.610594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.617 qpair failed and we were unable to recover it. 00:30:13.617 [2024-07-23 01:51:26.610780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.617 [2024-07-23 01:51:26.610987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.617 [2024-07-23 01:51:26.611015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.617 qpair failed and we were unable to recover it. 00:30:13.617 [2024-07-23 01:51:26.611214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.617 [2024-07-23 01:51:26.611385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.617 [2024-07-23 01:51:26.611410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.617 qpair failed and we were unable to recover it. 
00:30:13.617 [2024-07-23 01:51:26.611619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.617 [2024-07-23 01:51:26.611816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.617 [2024-07-23 01:51:26.611842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.617 qpair failed and we were unable to recover it. 00:30:13.617 [2024-07-23 01:51:26.611995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.617 [2024-07-23 01:51:26.612181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.617 [2024-07-23 01:51:26.612210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.617 qpair failed and we were unable to recover it. 00:30:13.617 [2024-07-23 01:51:26.612423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.617 [2024-07-23 01:51:26.612608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.617 [2024-07-23 01:51:26.612653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.617 qpair failed and we were unable to recover it. 00:30:13.617 [2024-07-23 01:51:26.612842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.617 [2024-07-23 01:51:26.613024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.617 [2024-07-23 01:51:26.613052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.617 qpair failed and we were unable to recover it. 
00:30:13.617 [2024-07-23 01:51:26.613234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.617 [2024-07-23 01:51:26.613546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.617 [2024-07-23 01:51:26.613632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.617 qpair failed and we were unable to recover it. 00:30:13.617 [2024-07-23 01:51:26.613817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.617 [2024-07-23 01:51:26.613980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.617 [2024-07-23 01:51:26.614013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.617 qpair failed and we were unable to recover it. 00:30:13.617 [2024-07-23 01:51:26.614209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.617 [2024-07-23 01:51:26.614352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.617 [2024-07-23 01:51:26.614378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.617 qpair failed and we were unable to recover it. 00:30:13.617 [2024-07-23 01:51:26.614548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.617 [2024-07-23 01:51:26.614773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.617 [2024-07-23 01:51:26.614802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.617 qpair failed and we were unable to recover it. 
00:30:13.617 [2024-07-23 01:51:26.615063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.617 [2024-07-23 01:51:26.615432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.617 [2024-07-23 01:51:26.615481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.617 qpair failed and we were unable to recover it. 00:30:13.617 [2024-07-23 01:51:26.615684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.617 [2024-07-23 01:51:26.615850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.617 [2024-07-23 01:51:26.615876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.617 qpair failed and we were unable to recover it. 00:30:13.617 [2024-07-23 01:51:26.616070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.617 [2024-07-23 01:51:26.616335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.617 [2024-07-23 01:51:26.616381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.617 qpair failed and we were unable to recover it. 00:30:13.617 [2024-07-23 01:51:26.616592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.617 [2024-07-23 01:51:26.616795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.617 [2024-07-23 01:51:26.616823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.617 qpair failed and we were unable to recover it. 
00:30:13.617 [2024-07-23 01:51:26.617010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.617 [2024-07-23 01:51:26.617156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.617 [2024-07-23 01:51:26.617181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.617 qpair failed and we were unable to recover it. 00:30:13.617 [2024-07-23 01:51:26.617370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.617 [2024-07-23 01:51:26.617545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.617 [2024-07-23 01:51:26.617572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.617 qpair failed and we were unable to recover it. 00:30:13.617 [2024-07-23 01:51:26.617773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.617 [2024-07-23 01:51:26.617917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.617 [2024-07-23 01:51:26.617943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.617 qpair failed and we were unable to recover it. 00:30:13.617 [2024-07-23 01:51:26.618133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.617 [2024-07-23 01:51:26.618293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.617 [2024-07-23 01:51:26.618321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.617 qpair failed and we were unable to recover it. 
00:30:13.617 [2024-07-23 01:51:26.618507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.617 [2024-07-23 01:51:26.618687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.617 [2024-07-23 01:51:26.618715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.617 qpair failed and we were unable to recover it. 00:30:13.617 [2024-07-23 01:51:26.618870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.617 [2024-07-23 01:51:26.619027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.617 [2024-07-23 01:51:26.619054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.617 qpair failed and we were unable to recover it. 00:30:13.617 [2024-07-23 01:51:26.619235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.617 [2024-07-23 01:51:26.619441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.617 [2024-07-23 01:51:26.619469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.617 qpair failed and we were unable to recover it. 00:30:13.617 [2024-07-23 01:51:26.619648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.617 EAL: No free 2048 kB hugepages reported on node 1 00:30:13.617 [2024-07-23 01:51:26.619784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.617 [2024-07-23 01:51:26.619810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.617 qpair failed and we were unable to recover it. 
00:30:13.617 [2024-07-23 01:51:26.619992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.617 [2024-07-23 01:51:26.620270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.617 [2024-07-23 01:51:26.620293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.617 qpair failed and we were unable to recover it. 00:30:13.617 [2024-07-23 01:51:26.620467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.617 [2024-07-23 01:51:26.620630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.617 [2024-07-23 01:51:26.620657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.617 qpair failed and we were unable to recover it. 00:30:13.617 [2024-07-23 01:51:26.620814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.617 [2024-07-23 01:51:26.621107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.617 [2024-07-23 01:51:26.621167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.617 qpair failed and we were unable to recover it. 00:30:13.617 [2024-07-23 01:51:26.621347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.617 [2024-07-23 01:51:26.621519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.617 [2024-07-23 01:51:26.621545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.617 qpair failed and we were unable to recover it. 
00:30:13.617 [2024-07-23 01:51:26.621720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.617 [2024-07-23 01:51:26.621888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.617 [2024-07-23 01:51:26.621918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.617 qpair failed and we were unable to recover it. 00:30:13.617 [2024-07-23 01:51:26.622134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.617 [2024-07-23 01:51:26.622468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.617 [2024-07-23 01:51:26.622524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.617 qpair failed and we were unable to recover it. 00:30:13.617 [2024-07-23 01:51:26.622752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.617 [2024-07-23 01:51:26.622941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.617 [2024-07-23 01:51:26.622966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.617 qpair failed and we were unable to recover it. 00:30:13.617 [2024-07-23 01:51:26.623108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.617 [2024-07-23 01:51:26.623275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.617 [2024-07-23 01:51:26.623300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.617 qpair failed and we were unable to recover it. 
00:30:13.617 [2024-07-23 01:51:26.623439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.617 [2024-07-23 01:51:26.623684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.617 [2024-07-23 01:51:26.623709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.617 qpair failed and we were unable to recover it. 00:30:13.617 [2024-07-23 01:51:26.623858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.617 [2024-07-23 01:51:26.624053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.617 [2024-07-23 01:51:26.624078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.618 qpair failed and we were unable to recover it. 00:30:13.618 [2024-07-23 01:51:26.624239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.618 [2024-07-23 01:51:26.624407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.618 [2024-07-23 01:51:26.624447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.618 qpair failed and we were unable to recover it. 00:30:13.618 [2024-07-23 01:51:26.624608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.618 [2024-07-23 01:51:26.624778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.618 [2024-07-23 01:51:26.624803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.618 qpair failed and we were unable to recover it. 
00:30:13.618 [2024-07-23 01:51:26.624947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.618 [2024-07-23 01:51:26.625112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.618 [2024-07-23 01:51:26.625136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.618 qpair failed and we were unable to recover it. 00:30:13.618 [2024-07-23 01:51:26.625329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.618 [2024-07-23 01:51:26.625464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.618 [2024-07-23 01:51:26.625491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.618 qpair failed and we were unable to recover it. 00:30:13.618 [2024-07-23 01:51:26.625671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.618 [2024-07-23 01:51:26.625913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.618 [2024-07-23 01:51:26.625954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.618 qpair failed and we were unable to recover it. 00:30:13.618 [2024-07-23 01:51:26.626246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.618 [2024-07-23 01:51:26.626477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.618 [2024-07-23 01:51:26.626502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.618 qpair failed and we were unable to recover it. 
00:30:13.618 [2024-07-23 01:51:26.626692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.618 [2024-07-23 01:51:26.626862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.618 [2024-07-23 01:51:26.626888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.618 qpair failed and we were unable to recover it. 00:30:13.618 [2024-07-23 01:51:26.627078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.618 [2024-07-23 01:51:26.627212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.618 [2024-07-23 01:51:26.627237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.618 qpair failed and we were unable to recover it. 00:30:13.618 [2024-07-23 01:51:26.627491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.618 [2024-07-23 01:51:26.627662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.618 [2024-07-23 01:51:26.627688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.618 qpair failed and we were unable to recover it. 00:30:13.618 [2024-07-23 01:51:26.627844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.618 [2024-07-23 01:51:26.628021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.618 [2024-07-23 01:51:26.628046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.618 qpair failed and we were unable to recover it. 
00:30:13.618 [2024-07-23 01:51:26.628240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.618 [2024-07-23 01:51:26.628430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.618 [2024-07-23 01:51:26.628455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.618 qpair failed and we were unable to recover it. 00:30:13.618 [2024-07-23 01:51:26.628631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.618 [2024-07-23 01:51:26.628796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.618 [2024-07-23 01:51:26.628822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.618 qpair failed and we were unable to recover it. 00:30:13.618 [2024-07-23 01:51:26.628987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.618 [2024-07-23 01:51:26.629151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.618 [2024-07-23 01:51:26.629176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.618 qpair failed and we were unable to recover it. 00:30:13.618 [2024-07-23 01:51:26.629342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.618 [2024-07-23 01:51:26.629514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.618 [2024-07-23 01:51:26.629539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.618 qpair failed and we were unable to recover it. 
00:30:13.618 [2024-07-23 01:51:26.629736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.618 [2024-07-23 01:51:26.629875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.618 [2024-07-23 01:51:26.629912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.618 qpair failed and we were unable to recover it. 00:30:13.618 [2024-07-23 01:51:26.630152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.618 [2024-07-23 01:51:26.630316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.618 [2024-07-23 01:51:26.630341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.618 qpair failed and we were unable to recover it. 00:30:13.618 [2024-07-23 01:51:26.630478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.618 [2024-07-23 01:51:26.630649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.618 [2024-07-23 01:51:26.630675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.618 qpair failed and we were unable to recover it. 00:30:13.618 [2024-07-23 01:51:26.630816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.618 [2024-07-23 01:51:26.631018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.618 [2024-07-23 01:51:26.631043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.618 qpair failed and we were unable to recover it. 
00:30:13.618 [2024-07-23 01:51:26.631211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.618 [2024-07-23 01:51:26.631369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.618 [2024-07-23 01:51:26.631394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.618 qpair failed and we were unable to recover it. 00:30:13.618 [2024-07-23 01:51:26.631587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.618 [2024-07-23 01:51:26.631728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.618 [2024-07-23 01:51:26.631754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.618 qpair failed and we were unable to recover it. 00:30:13.618 [2024-07-23 01:51:26.631892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.618 [2024-07-23 01:51:26.632084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.618 [2024-07-23 01:51:26.632110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.618 qpair failed and we were unable to recover it. 00:30:13.618 [2024-07-23 01:51:26.632281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.618 [2024-07-23 01:51:26.632452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.618 [2024-07-23 01:51:26.632477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.618 qpair failed and we were unable to recover it. 
00:30:13.618 [2024-07-23 01:51:26.632635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.618 [2024-07-23 01:51:26.632837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.618 [2024-07-23 01:51:26.632862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.618 qpair failed and we were unable to recover it. 00:30:13.618 [2024-07-23 01:51:26.632996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.618 [2024-07-23 01:51:26.633167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.618 [2024-07-23 01:51:26.633207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.618 qpair failed and we were unable to recover it. 00:30:13.618 [2024-07-23 01:51:26.633419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.618 [2024-07-23 01:51:26.633558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.618 [2024-07-23 01:51:26.633584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.618 qpair failed and we were unable to recover it. 00:30:13.618 [2024-07-23 01:51:26.633745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.618 [2024-07-23 01:51:26.633886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.618 [2024-07-23 01:51:26.633912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.618 qpair failed and we were unable to recover it. 
00:30:13.618 [2024-07-23 01:51:26.634155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.618 [2024-07-23 01:51:26.634330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.618 [2024-07-23 01:51:26.634359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.618 qpair failed and we were unable to recover it. 00:30:13.618 [2024-07-23 01:51:26.634523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.618 [2024-07-23 01:51:26.634715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.618 [2024-07-23 01:51:26.634741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.618 qpair failed and we were unable to recover it. 00:30:13.618 [2024-07-23 01:51:26.634875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.618 [2024-07-23 01:51:26.635072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.618 [2024-07-23 01:51:26.635098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.618 qpair failed and we were unable to recover it. 00:30:13.618 [2024-07-23 01:51:26.635311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.618 [2024-07-23 01:51:26.635565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.618 [2024-07-23 01:51:26.635596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.618 qpair failed and we were unable to recover it. 
00:30:13.618 [2024-07-23 01:51:26.635790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.618 [2024-07-23 01:51:26.635961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.618 [2024-07-23 01:51:26.635986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.618 qpair failed and we were unable to recover it. 00:30:13.618 [2024-07-23 01:51:26.636174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.618 [2024-07-23 01:51:26.636332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.618 [2024-07-23 01:51:26.636357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.618 qpair failed and we were unable to recover it. 00:30:13.618 [2024-07-23 01:51:26.636523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.618 [2024-07-23 01:51:26.636682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.618 [2024-07-23 01:51:26.636709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.618 qpair failed and we were unable to recover it. 00:30:13.618 [2024-07-23 01:51:26.636877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.618 [2024-07-23 01:51:26.637050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.618 [2024-07-23 01:51:26.637075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.618 qpair failed and we were unable to recover it. 
00:30:13.618 [2024-07-23 01:51:26.637220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.618 [2024-07-23 01:51:26.637378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.618 [2024-07-23 01:51:26.637403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.618 qpair failed and we were unable to recover it. 00:30:13.618 [2024-07-23 01:51:26.637597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.618 [2024-07-23 01:51:26.637746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.618 [2024-07-23 01:51:26.637772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.618 qpair failed and we were unable to recover it. 00:30:13.618 [2024-07-23 01:51:26.637957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.618 [2024-07-23 01:51:26.638208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.618 [2024-07-23 01:51:26.638232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.618 qpair failed and we were unable to recover it. 00:30:13.618 [2024-07-23 01:51:26.638427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.618 [2024-07-23 01:51:26.638596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.618 [2024-07-23 01:51:26.638628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.618 qpair failed and we were unable to recover it. 
00:30:13.618 [2024-07-23 01:51:26.638774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.618 [2024-07-23 01:51:26.638910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.618 [2024-07-23 01:51:26.638935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.618 qpair failed and we were unable to recover it. 00:30:13.618 [2024-07-23 01:51:26.639098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.618 [2024-07-23 01:51:26.639233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.619 [2024-07-23 01:51:26.639261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.619 qpair failed and we were unable to recover it. 00:30:13.619 [2024-07-23 01:51:26.639430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.619 [2024-07-23 01:51:26.639625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.619 [2024-07-23 01:51:26.639652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.619 qpair failed and we were unable to recover it. 00:30:13.619 [2024-07-23 01:51:26.639821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.619 [2024-07-23 01:51:26.639962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.619 [2024-07-23 01:51:26.639987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.619 qpair failed and we were unable to recover it. 
00:30:13.619 [2024-07-23 01:51:26.640126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.619 [2024-07-23 01:51:26.640319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.619 [2024-07-23 01:51:26.640345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.619 qpair failed and we were unable to recover it. 00:30:13.619 [2024-07-23 01:51:26.640506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.619 [2024-07-23 01:51:26.640646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.619 [2024-07-23 01:51:26.640672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.619 qpair failed and we were unable to recover it. 00:30:13.619 [2024-07-23 01:51:26.640845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.619 [2024-07-23 01:51:26.641014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.619 [2024-07-23 01:51:26.641039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.619 qpair failed and we were unable to recover it. 00:30:13.619 [2024-07-23 01:51:26.641229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.619 [2024-07-23 01:51:26.641390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.619 [2024-07-23 01:51:26.641415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.619 qpair failed and we were unable to recover it. 
00:30:13.619 [2024-07-23 01:51:26.641578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.619 [2024-07-23 01:51:26.641837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.619 [2024-07-23 01:51:26.641863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.619 qpair failed and we were unable to recover it. 00:30:13.619 [2024-07-23 01:51:26.642033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.619 [2024-07-23 01:51:26.642197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.619 [2024-07-23 01:51:26.642222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.619 qpair failed and we were unable to recover it. 00:30:13.619 [2024-07-23 01:51:26.642409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.619 [2024-07-23 01:51:26.642606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.619 [2024-07-23 01:51:26.642638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.619 qpair failed and we were unable to recover it. 00:30:13.619 [2024-07-23 01:51:26.642799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.619 [2024-07-23 01:51:26.643038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.619 [2024-07-23 01:51:26.643063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.619 qpair failed and we were unable to recover it. 
00:30:13.619 [2024-07-23 01:51:26.643203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.619 [2024-07-23 01:51:26.643444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.619 [2024-07-23 01:51:26.643469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.619 qpair failed and we were unable to recover it. 00:30:13.619 [2024-07-23 01:51:26.643713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.619 [2024-07-23 01:51:26.643911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.619 [2024-07-23 01:51:26.643936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.619 qpair failed and we were unable to recover it. 00:30:13.619 [2024-07-23 01:51:26.644107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.619 [2024-07-23 01:51:26.644272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.619 [2024-07-23 01:51:26.644297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.619 qpair failed and we were unable to recover it. 00:30:13.619 [2024-07-23 01:51:26.644491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.619 [2024-07-23 01:51:26.644657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.619 [2024-07-23 01:51:26.644683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.619 qpair failed and we were unable to recover it. 
00:30:13.620 [2024-07-23 01:51:26.654452] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
00:30:13.621 [2024-07-23 01:51:26.675548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.621 [2024-07-23 01:51:26.675679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.621 [2024-07-23 01:51:26.675706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.621 qpair failed and we were unable to recover it. 00:30:13.621 [2024-07-23 01:51:26.675852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.621 [2024-07-23 01:51:26.676017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.621 [2024-07-23 01:51:26.676044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.621 qpair failed and we were unable to recover it. 00:30:13.621 [2024-07-23 01:51:26.676212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.621 [2024-07-23 01:51:26.676346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.621 [2024-07-23 01:51:26.676372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.621 qpair failed and we were unable to recover it. 00:30:13.621 [2024-07-23 01:51:26.676516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.621 [2024-07-23 01:51:26.676679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.621 [2024-07-23 01:51:26.676706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.621 qpair failed and we were unable to recover it. 
00:30:13.621 [2024-07-23 01:51:26.676868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.621 [2024-07-23 01:51:26.677010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.621 [2024-07-23 01:51:26.677036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.621 qpair failed and we were unable to recover it. 00:30:13.621 [2024-07-23 01:51:26.677179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.621 [2024-07-23 01:51:26.677338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.621 [2024-07-23 01:51:26.677364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.621 qpair failed and we were unable to recover it. 00:30:13.621 [2024-07-23 01:51:26.677501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.621 [2024-07-23 01:51:26.677664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.621 [2024-07-23 01:51:26.677690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.621 qpair failed and we were unable to recover it. 00:30:13.621 [2024-07-23 01:51:26.677860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.621 [2024-07-23 01:51:26.677999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.621 [2024-07-23 01:51:26.678024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.621 qpair failed and we were unable to recover it. 
00:30:13.621 [2024-07-23 01:51:26.678171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.621 [2024-07-23 01:51:26.678307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.621 [2024-07-23 01:51:26.678333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.621 qpair failed and we were unable to recover it. 00:30:13.621 [2024-07-23 01:51:26.678575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.621 [2024-07-23 01:51:26.678711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.621 [2024-07-23 01:51:26.678737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.621 qpair failed and we were unable to recover it. 00:30:13.621 [2024-07-23 01:51:26.678928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.621 [2024-07-23 01:51:26.679171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.621 [2024-07-23 01:51:26.679197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.621 qpair failed and we were unable to recover it. 00:30:13.621 [2024-07-23 01:51:26.679372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.621 [2024-07-23 01:51:26.679533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.621 [2024-07-23 01:51:26.679559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.621 qpair failed and we were unable to recover it. 
00:30:13.621 [2024-07-23 01:51:26.679705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.621 [2024-07-23 01:51:26.679875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.621 [2024-07-23 01:51:26.679901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.621 qpair failed and we were unable to recover it. 00:30:13.621 [2024-07-23 01:51:26.680045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.621 [2024-07-23 01:51:26.680237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.621 [2024-07-23 01:51:26.680262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.621 qpair failed and we were unable to recover it. 00:30:13.621 [2024-07-23 01:51:26.680451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.621 [2024-07-23 01:51:26.680608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.621 [2024-07-23 01:51:26.680649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.621 qpair failed and we were unable to recover it. 00:30:13.622 [2024-07-23 01:51:26.680789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.622 [2024-07-23 01:51:26.680931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.622 [2024-07-23 01:51:26.680957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.622 qpair failed and we were unable to recover it. 
00:30:13.622 [2024-07-23 01:51:26.681126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.622 [2024-07-23 01:51:26.681265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.896 [2024-07-23 01:51:26.681291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.896 qpair failed and we were unable to recover it. 00:30:13.896 [2024-07-23 01:51:26.681476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.896 [2024-07-23 01:51:26.681647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.896 [2024-07-23 01:51:26.681673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.896 qpair failed and we were unable to recover it. 00:30:13.896 [2024-07-23 01:51:26.681861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.896 [2024-07-23 01:51:26.682031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.896 [2024-07-23 01:51:26.682056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.896 qpair failed and we were unable to recover it. 00:30:13.896 [2024-07-23 01:51:26.682192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.896 [2024-07-23 01:51:26.682436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.896 [2024-07-23 01:51:26.682462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.896 qpair failed and we were unable to recover it. 
00:30:13.896 [2024-07-23 01:51:26.682654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.896 [2024-07-23 01:51:26.682818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.896 [2024-07-23 01:51:26.682845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.896 qpair failed and we were unable to recover it. 00:30:13.896 [2024-07-23 01:51:26.683026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.896 [2024-07-23 01:51:26.683207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.896 [2024-07-23 01:51:26.683232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.896 qpair failed and we were unable to recover it. 00:30:13.896 [2024-07-23 01:51:26.683381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.896 [2024-07-23 01:51:26.683516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.896 [2024-07-23 01:51:26.683542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.896 qpair failed and we were unable to recover it. 00:30:13.896 [2024-07-23 01:51:26.683674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.896 [2024-07-23 01:51:26.683839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.896 [2024-07-23 01:51:26.683864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.896 qpair failed and we were unable to recover it. 
00:30:13.896 [2024-07-23 01:51:26.684058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.896 [2024-07-23 01:51:26.684220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.896 [2024-07-23 01:51:26.684247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.896 qpair failed and we were unable to recover it. 00:30:13.896 [2024-07-23 01:51:26.684384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.896 [2024-07-23 01:51:26.684585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.896 [2024-07-23 01:51:26.684611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.896 qpair failed and we were unable to recover it. 00:30:13.896 [2024-07-23 01:51:26.684789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.896 [2024-07-23 01:51:26.684933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.896 [2024-07-23 01:51:26.684958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.896 qpair failed and we were unable to recover it. 00:30:13.896 [2024-07-23 01:51:26.685128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.896 [2024-07-23 01:51:26.685270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.896 [2024-07-23 01:51:26.685297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.896 qpair failed and we were unable to recover it. 
00:30:13.896 [2024-07-23 01:51:26.685488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.896 [2024-07-23 01:51:26.685621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.896 [2024-07-23 01:51:26.685648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.896 qpair failed and we were unable to recover it. 00:30:13.896 [2024-07-23 01:51:26.685782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.896 [2024-07-23 01:51:26.685943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.896 [2024-07-23 01:51:26.685969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.896 qpair failed and we were unable to recover it. 00:30:13.896 [2024-07-23 01:51:26.686137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.896 [2024-07-23 01:51:26.686273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.896 [2024-07-23 01:51:26.686299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.896 qpair failed and we were unable to recover it. 00:30:13.896 [2024-07-23 01:51:26.686466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.896 [2024-07-23 01:51:26.686608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.897 [2024-07-23 01:51:26.686641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.897 qpair failed and we were unable to recover it. 
00:30:13.897 [2024-07-23 01:51:26.686806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.897 [2024-07-23 01:51:26.686942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.897 [2024-07-23 01:51:26.686968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.897 qpair failed and we were unable to recover it. 00:30:13.897 [2024-07-23 01:51:26.687139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.897 [2024-07-23 01:51:26.687295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.897 [2024-07-23 01:51:26.687335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.897 qpair failed and we were unable to recover it. 00:30:13.897 [2024-07-23 01:51:26.687505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.897 [2024-07-23 01:51:26.687664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.897 [2024-07-23 01:51:26.687691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.897 qpair failed and we were unable to recover it. 00:30:13.897 [2024-07-23 01:51:26.687832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.897 [2024-07-23 01:51:26.687999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.897 [2024-07-23 01:51:26.688025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.897 qpair failed and we were unable to recover it. 
00:30:13.897 [2024-07-23 01:51:26.688188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.897 [2024-07-23 01:51:26.688381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.897 [2024-07-23 01:51:26.688407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.897 qpair failed and we were unable to recover it. 00:30:13.897 [2024-07-23 01:51:26.688576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.897 [2024-07-23 01:51:26.688758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.897 [2024-07-23 01:51:26.688812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.897 qpair failed and we were unable to recover it. 00:30:13.897 [2024-07-23 01:51:26.688985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.897 [2024-07-23 01:51:26.689115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.897 [2024-07-23 01:51:26.689152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.897 qpair failed and we were unable to recover it. 00:30:13.897 [2024-07-23 01:51:26.689318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.897 [2024-07-23 01:51:26.689511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.897 [2024-07-23 01:51:26.689537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.897 qpair failed and we were unable to recover it. 
00:30:13.897 [2024-07-23 01:51:26.689706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.897 [2024-07-23 01:51:26.689900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.897 [2024-07-23 01:51:26.689927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.897 qpair failed and we were unable to recover it. 00:30:13.897 [2024-07-23 01:51:26.690092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.897 [2024-07-23 01:51:26.690258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.897 [2024-07-23 01:51:26.690288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.897 qpair failed and we were unable to recover it. 00:30:13.897 [2024-07-23 01:51:26.690450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.897 [2024-07-23 01:51:26.690623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.897 [2024-07-23 01:51:26.690649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.897 qpair failed and we were unable to recover it. 00:30:13.897 [2024-07-23 01:51:26.690839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.897 [2024-07-23 01:51:26.690973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.897 [2024-07-23 01:51:26.691000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.897 qpair failed and we were unable to recover it. 
00:30:13.897 [2024-07-23 01:51:26.691164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.897 [2024-07-23 01:51:26.691306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.897 [2024-07-23 01:51:26.691332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.897 qpair failed and we were unable to recover it. 00:30:13.897 [2024-07-23 01:51:26.691462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.897 [2024-07-23 01:51:26.691656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.897 [2024-07-23 01:51:26.691682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.897 qpair failed and we were unable to recover it. 00:30:13.897 [2024-07-23 01:51:26.691846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.897 [2024-07-23 01:51:26.692090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.897 [2024-07-23 01:51:26.692116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.897 qpair failed and we were unable to recover it. 00:30:13.897 [2024-07-23 01:51:26.692277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.897 [2024-07-23 01:51:26.692467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.897 [2024-07-23 01:51:26.692493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.897 qpair failed and we were unable to recover it. 
00:30:13.897 [2024-07-23 01:51:26.692655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.897 [2024-07-23 01:51:26.692828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.897 [2024-07-23 01:51:26.692854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.897 qpair failed and we were unable to recover it. 00:30:13.897 [2024-07-23 01:51:26.692990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.897 [2024-07-23 01:51:26.693157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.897 [2024-07-23 01:51:26.693182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.897 qpair failed and we were unable to recover it. 00:30:13.897 [2024-07-23 01:51:26.693351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.897 [2024-07-23 01:51:26.693515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.897 [2024-07-23 01:51:26.693541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.897 qpair failed and we were unable to recover it. 00:30:13.897 [2024-07-23 01:51:26.693678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.897 [2024-07-23 01:51:26.693856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.897 [2024-07-23 01:51:26.693882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.897 qpair failed and we were unable to recover it. 
00:30:13.897 [2024-07-23 01:51:26.694056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.897 [2024-07-23 01:51:26.694221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.897 [2024-07-23 01:51:26.694249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.897 qpair failed and we were unable to recover it. 00:30:13.897 [2024-07-23 01:51:26.694412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.897 [2024-07-23 01:51:26.694573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.897 [2024-07-23 01:51:26.694599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.897 qpair failed and we were unable to recover it. 00:30:13.897 [2024-07-23 01:51:26.694805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.897 [2024-07-23 01:51:26.694972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.897 [2024-07-23 01:51:26.694998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.897 qpair failed and we were unable to recover it. 00:30:13.897 [2024-07-23 01:51:26.695191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.897 [2024-07-23 01:51:26.695322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.897 [2024-07-23 01:51:26.695348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.897 qpair failed and we were unable to recover it. 
00:30:13.897 [2024-07-23 01:51:26.695539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.897 [2024-07-23 01:51:26.695709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.897 [2024-07-23 01:51:26.695736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.897 qpair failed and we were unable to recover it. 00:30:13.897 [2024-07-23 01:51:26.695905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.897 [2024-07-23 01:51:26.696097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.897 [2024-07-23 01:51:26.696123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.897 qpair failed and we were unable to recover it. 00:30:13.897 [2024-07-23 01:51:26.696365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.897 [2024-07-23 01:51:26.696526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.897 [2024-07-23 01:51:26.696552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.897 qpair failed and we were unable to recover it. 00:30:13.898 [2024-07-23 01:51:26.696718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.898 [2024-07-23 01:51:26.696880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.898 [2024-07-23 01:51:26.696906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.898 qpair failed and we were unable to recover it. 
00:30:13.898 [2024-07-23 01:51:26.697066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.898 [2024-07-23 01:51:26.697222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.898 [2024-07-23 01:51:26.697247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.898 qpair failed and we were unable to recover it. 00:30:13.898 [2024-07-23 01:51:26.697414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.898 [2024-07-23 01:51:26.697583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.898 [2024-07-23 01:51:26.697609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.898 qpair failed and we were unable to recover it. 00:30:13.898 [2024-07-23 01:51:26.697790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.898 [2024-07-23 01:51:26.697931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.898 [2024-07-23 01:51:26.697958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.898 qpair failed and we were unable to recover it. 00:30:13.898 [2024-07-23 01:51:26.698120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.898 [2024-07-23 01:51:26.698248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.898 [2024-07-23 01:51:26.698274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.898 qpair failed and we were unable to recover it. 
00:30:13.898 [2024-07-23 01:51:26.698439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.898 [2024-07-23 01:51:26.698600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.898 [2024-07-23 01:51:26.698632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.898 qpair failed and we were unable to recover it. 00:30:13.898 [2024-07-23 01:51:26.698800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.898 [2024-07-23 01:51:26.699041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.898 [2024-07-23 01:51:26.699067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.898 qpair failed and we were unable to recover it. 00:30:13.898 [2024-07-23 01:51:26.699236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.898 [2024-07-23 01:51:26.699406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.898 [2024-07-23 01:51:26.699432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.898 qpair failed and we were unable to recover it. 00:30:13.898 [2024-07-23 01:51:26.699603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.898 [2024-07-23 01:51:26.699771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.898 [2024-07-23 01:51:26.699797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.898 qpair failed and we were unable to recover it. 
00:30:13.898 [2024-07-23 01:51:26.699940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.898 [2024-07-23 01:51:26.700113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.898 [2024-07-23 01:51:26.700138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.898 qpair failed and we were unable to recover it. 00:30:13.898 [2024-07-23 01:51:26.700269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.898 [2024-07-23 01:51:26.700432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.898 [2024-07-23 01:51:26.700458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.898 qpair failed and we were unable to recover it. 00:30:13.898 [2024-07-23 01:51:26.700604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.898 [2024-07-23 01:51:26.700794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.898 [2024-07-23 01:51:26.700821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.898 qpair failed and we were unable to recover it. 00:30:13.898 [2024-07-23 01:51:26.700965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.898 [2024-07-23 01:51:26.701134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.898 [2024-07-23 01:51:26.701160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.898 qpair failed and we were unable to recover it. 
00:30:13.898 [2024-07-23 01:51:26.701290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.898 [2024-07-23 01:51:26.701454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.898 [2024-07-23 01:51:26.701480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.898 qpair failed and we were unable to recover it. 00:30:13.898 [2024-07-23 01:51:26.701645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.898 [2024-07-23 01:51:26.701808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.898 [2024-07-23 01:51:26.701834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.898 qpair failed and we were unable to recover it. 00:30:13.898 [2024-07-23 01:51:26.702003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.898 [2024-07-23 01:51:26.702170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.898 [2024-07-23 01:51:26.702195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.898 qpair failed and we were unable to recover it. 00:30:13.898 [2024-07-23 01:51:26.702383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.898 [2024-07-23 01:51:26.702572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.898 [2024-07-23 01:51:26.702597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.898 qpair failed and we were unable to recover it. 
00:30:13.898 [2024-07-23 01:51:26.702774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.898 [2024-07-23 01:51:26.702916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.898 [2024-07-23 01:51:26.702941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.898 qpair failed and we were unable to recover it. 00:30:13.898 [2024-07-23 01:51:26.703115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.898 [2024-07-23 01:51:26.703249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.898 [2024-07-23 01:51:26.703273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.898 qpair failed and we were unable to recover it. 00:30:13.898 [2024-07-23 01:51:26.703460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.898 [2024-07-23 01:51:26.703622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.898 [2024-07-23 01:51:26.703648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.898 qpair failed and we were unable to recover it. 00:30:13.898 [2024-07-23 01:51:26.703875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.898 [2024-07-23 01:51:26.704117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.898 [2024-07-23 01:51:26.704142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.898 qpair failed and we were unable to recover it. 
00:30:13.898 [2024-07-23 01:51:26.704306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.898 [2024-07-23 01:51:26.704472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.898 [2024-07-23 01:51:26.704497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.898 qpair failed and we were unable to recover it. 00:30:13.898 [2024-07-23 01:51:26.704648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.898 [2024-07-23 01:51:26.704788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.898 [2024-07-23 01:51:26.704814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.898 qpair failed and we were unable to recover it. 00:30:13.898 [2024-07-23 01:51:26.704971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.898 [2024-07-23 01:51:26.705134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.898 [2024-07-23 01:51:26.705164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.898 qpair failed and we were unable to recover it. 00:30:13.898 [2024-07-23 01:51:26.705301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.898 [2024-07-23 01:51:26.705470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.898 [2024-07-23 01:51:26.705498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.898 qpair failed and we were unable to recover it. 
00:30:13.898 [2024-07-23 01:51:26.705661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.898 [2024-07-23 01:51:26.705802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.898 [2024-07-23 01:51:26.705829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.898 qpair failed and we were unable to recover it. 00:30:13.898 [2024-07-23 01:51:26.705961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.898 [2024-07-23 01:51:26.706148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.898 [2024-07-23 01:51:26.706173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.898 qpair failed and we were unable to recover it. 00:30:13.898 [2024-07-23 01:51:26.706308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.899 [2024-07-23 01:51:26.706542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.899 [2024-07-23 01:51:26.706567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.899 qpair failed and we were unable to recover it. 00:30:13.899 [2024-07-23 01:51:26.706737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.899 [2024-07-23 01:51:26.706927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.899 [2024-07-23 01:51:26.706953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.899 qpair failed and we were unable to recover it. 
00:30:13.899 [2024-07-23 01:51:26.707123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.899 [2024-07-23 01:51:26.707289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.899 [2024-07-23 01:51:26.707314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.899 qpair failed and we were unable to recover it. 00:30:13.899 [2024-07-23 01:51:26.707479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.899 [2024-07-23 01:51:26.707668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.899 [2024-07-23 01:51:26.707694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.899 qpair failed and we were unable to recover it. 00:30:13.899 [2024-07-23 01:51:26.707865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.899 [2024-07-23 01:51:26.708033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.899 [2024-07-23 01:51:26.708059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.899 qpair failed and we were unable to recover it. 00:30:13.899 [2024-07-23 01:51:26.708252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.899 [2024-07-23 01:51:26.708421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.899 [2024-07-23 01:51:26.708447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.899 qpair failed and we were unable to recover it. 
00:30:13.899 [2024-07-23 01:51:26.708587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.899 [2024-07-23 01:51:26.708757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.899 [2024-07-23 01:51:26.708783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.899 qpair failed and we were unable to recover it. 00:30:13.899 [2024-07-23 01:51:26.709036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.899 [2024-07-23 01:51:26.709209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.899 [2024-07-23 01:51:26.709235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.899 qpair failed and we were unable to recover it. 00:30:13.899 [2024-07-23 01:51:26.709425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.899 [2024-07-23 01:51:26.709589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.899 [2024-07-23 01:51:26.709620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.899 qpair failed and we were unable to recover it. 00:30:13.899 [2024-07-23 01:51:26.709865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.899 [2024-07-23 01:51:26.710059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.899 [2024-07-23 01:51:26.710085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.899 qpair failed and we were unable to recover it. 
00:30:13.899 [2024-07-23 01:51:26.710257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.899 [2024-07-23 01:51:26.710416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.899 [2024-07-23 01:51:26.710442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.899 qpair failed and we were unable to recover it. 00:30:13.899 [2024-07-23 01:51:26.710610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.899 [2024-07-23 01:51:26.710791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.899 [2024-07-23 01:51:26.710816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.899 qpair failed and we were unable to recover it. 00:30:13.899 [2024-07-23 01:51:26.710979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.899 [2024-07-23 01:51:26.711145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.899 [2024-07-23 01:51:26.711170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.899 qpair failed and we were unable to recover it. 00:30:13.899 [2024-07-23 01:51:26.711361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.899 [2024-07-23 01:51:26.711528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.899 [2024-07-23 01:51:26.711554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.899 qpair failed and we were unable to recover it. 
00:30:13.899 [2024-07-23 01:51:26.711753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.899 [2024-07-23 01:51:26.711995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.899 [2024-07-23 01:51:26.712020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.899 qpair failed and we were unable to recover it. 00:30:13.899 [2024-07-23 01:51:26.712211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.899 [2024-07-23 01:51:26.712400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.899 [2024-07-23 01:51:26.712426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.899 qpair failed and we were unable to recover it. 00:30:13.899 [2024-07-23 01:51:26.712594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.899 [2024-07-23 01:51:26.712760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.899 [2024-07-23 01:51:26.712787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.899 qpair failed and we were unable to recover it. 00:30:13.899 [2024-07-23 01:51:26.712972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.899 [2024-07-23 01:51:26.713132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.899 [2024-07-23 01:51:26.713158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.899 qpair failed and we were unable to recover it. 
00:30:13.899 [2024-07-23 01:51:26.713326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.899 [2024-07-23 01:51:26.713490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.899 [2024-07-23 01:51:26.713516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.899 qpair failed and we were unable to recover it. 00:30:13.899 [2024-07-23 01:51:26.713645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.899 [2024-07-23 01:51:26.713841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.899 [2024-07-23 01:51:26.713867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.899 qpair failed and we were unable to recover it. 00:30:13.899 [2024-07-23 01:51:26.714031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.899 [2024-07-23 01:51:26.714221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.899 [2024-07-23 01:51:26.714247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.899 qpair failed and we were unable to recover it. 00:30:13.899 [2024-07-23 01:51:26.714416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.899 [2024-07-23 01:51:26.714588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.899 [2024-07-23 01:51:26.714621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.899 qpair failed and we were unable to recover it. 
00:30:13.899 [2024-07-23 01:51:26.714766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.899 [2024-07-23 01:51:26.714956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.899 [2024-07-23 01:51:26.714982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.899 qpair failed and we were unable to recover it. 00:30:13.899 [2024-07-23 01:51:26.715168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.899 [2024-07-23 01:51:26.715359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.899 [2024-07-23 01:51:26.715385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.899 qpair failed and we were unable to recover it. 00:30:13.899 [2024-07-23 01:51:26.715521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.899 [2024-07-23 01:51:26.715686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.899 [2024-07-23 01:51:26.715713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.899 qpair failed and we were unable to recover it. 00:30:13.899 [2024-07-23 01:51:26.715959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.899 [2024-07-23 01:51:26.716105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.899 [2024-07-23 01:51:26.716133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.899 qpair failed and we were unable to recover it. 
00:30:13.899 [2024-07-23 01:51:26.716273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.899 [2024-07-23 01:51:26.716474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.899 [2024-07-23 01:51:26.716500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.899 qpair failed and we were unable to recover it. 00:30:13.899 [2024-07-23 01:51:26.716698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.899 [2024-07-23 01:51:26.716856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.900 [2024-07-23 01:51:26.716883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.900 qpair failed and we were unable to recover it. 00:30:13.900 [2024-07-23 01:51:26.717026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.900 [2024-07-23 01:51:26.717161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.900 [2024-07-23 01:51:26.717187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.900 qpair failed and we were unable to recover it. 00:30:13.900 [2024-07-23 01:51:26.717377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.900 [2024-07-23 01:51:26.717567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.900 [2024-07-23 01:51:26.717594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.900 qpair failed and we were unable to recover it. 
00:30:13.900 [2024-07-23 01:51:26.717791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.900 [2024-07-23 01:51:26.717928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.900 [2024-07-23 01:51:26.717955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.900 qpair failed and we were unable to recover it. 00:30:13.900 [2024-07-23 01:51:26.718118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.900 [2024-07-23 01:51:26.718286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.900 [2024-07-23 01:51:26.718312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.900 qpair failed and we were unable to recover it. 00:30:13.900 [2024-07-23 01:51:26.718446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.900 [2024-07-23 01:51:26.718636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.900 [2024-07-23 01:51:26.718664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.900 qpair failed and we were unable to recover it. 00:30:13.900 [2024-07-23 01:51:26.718799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.900 [2024-07-23 01:51:26.718967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.900 [2024-07-23 01:51:26.718993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.900 qpair failed and we were unable to recover it. 
00:30:13.900 [2024-07-23 01:51:26.719154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.900 [2024-07-23 01:51:26.719314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.900 [2024-07-23 01:51:26.719339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.900 qpair failed and we were unable to recover it. 00:30:13.900 [2024-07-23 01:51:26.719500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.900 [2024-07-23 01:51:26.719699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.900 [2024-07-23 01:51:26.719726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.900 qpair failed and we were unable to recover it. 00:30:13.900 [2024-07-23 01:51:26.719886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.900 [2024-07-23 01:51:26.720027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.900 [2024-07-23 01:51:26.720053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.900 qpair failed and we were unable to recover it. 00:30:13.900 [2024-07-23 01:51:26.720220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.900 [2024-07-23 01:51:26.720360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.900 [2024-07-23 01:51:26.720385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.900 qpair failed and we were unable to recover it. 
00:30:13.900 [2024-07-23 01:51:26.720547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.900 [2024-07-23 01:51:26.720718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.900 [2024-07-23 01:51:26.720745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.900 qpair failed and we were unable to recover it. 00:30:13.900 [2024-07-23 01:51:26.720936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.900 [2024-07-23 01:51:26.721104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.900 [2024-07-23 01:51:26.721130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.900 qpair failed and we were unable to recover it. 00:30:13.900 [2024-07-23 01:51:26.721269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.900 [2024-07-23 01:51:26.721459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.900 [2024-07-23 01:51:26.721486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.900 qpair failed and we were unable to recover it. 00:30:13.900 [2024-07-23 01:51:26.721637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.900 [2024-07-23 01:51:26.721808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.900 [2024-07-23 01:51:26.721834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.900 qpair failed and we were unable to recover it. 
00:30:13.900 [2024-07-23 01:51:26.721973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.900 [2024-07-23 01:51:26.722137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.900 [2024-07-23 01:51:26.722163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.900 qpair failed and we were unable to recover it.
00:30:13.900 [2024-07-23 01:51:26.722302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.900 [2024-07-23 01:51:26.722470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.900 [2024-07-23 01:51:26.722496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.900 qpair failed and we were unable to recover it.
00:30:13.900 [2024-07-23 01:51:26.722663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.900 [2024-07-23 01:51:26.722800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.900 [2024-07-23 01:51:26.722825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.900 qpair failed and we were unable to recover it.
00:30:13.900 [2024-07-23 01:51:26.722988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.900 [2024-07-23 01:51:26.723150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.900 [2024-07-23 01:51:26.723176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.900 qpair failed and we were unable to recover it.
00:30:13.900 [2024-07-23 01:51:26.723338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.900 [2024-07-23 01:51:26.723527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.900 [2024-07-23 01:51:26.723552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.900 qpair failed and we were unable to recover it.
00:30:13.900 [2024-07-23 01:51:26.723722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.900 [2024-07-23 01:51:26.723859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.900 [2024-07-23 01:51:26.723888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.900 qpair failed and we were unable to recover it.
00:30:13.900 [2024-07-23 01:51:26.724021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.900 [2024-07-23 01:51:26.724186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.900 [2024-07-23 01:51:26.724211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.900 qpair failed and we were unable to recover it.
00:30:13.900 [2024-07-23 01:51:26.724402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.900 [2024-07-23 01:51:26.724567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.900 [2024-07-23 01:51:26.724594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.900 qpair failed and we were unable to recover it.
00:30:13.900 [2024-07-23 01:51:26.724815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.900 [2024-07-23 01:51:26.724983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.900 [2024-07-23 01:51:26.725011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.900 qpair failed and we were unable to recover it.
00:30:13.900 [2024-07-23 01:51:26.725201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.900 [2024-07-23 01:51:26.725386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.900 [2024-07-23 01:51:26.725412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.900 qpair failed and we were unable to recover it.
00:30:13.900 [2024-07-23 01:51:26.725600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.900 [2024-07-23 01:51:26.725743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.900 [2024-07-23 01:51:26.725770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.900 qpair failed and we were unable to recover it.
00:30:13.900 [2024-07-23 01:51:26.725935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.900 [2024-07-23 01:51:26.726128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.900 [2024-07-23 01:51:26.726153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.900 qpair failed and we were unable to recover it.
00:30:13.900 [2024-07-23 01:51:26.726313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.900 [2024-07-23 01:51:26.726510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.901 [2024-07-23 01:51:26.726536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.901 qpair failed and we were unable to recover it.
00:30:13.901 [2024-07-23 01:51:26.726698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.901 [2024-07-23 01:51:26.726868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.901 [2024-07-23 01:51:26.726893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.901 qpair failed and we were unable to recover it.
00:30:13.901 [2024-07-23 01:51:26.727062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.901 [2024-07-23 01:51:26.727255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.901 [2024-07-23 01:51:26.727280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.901 qpair failed and we were unable to recover it.
00:30:13.901 [2024-07-23 01:51:26.727470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.901 [2024-07-23 01:51:26.727598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.901 [2024-07-23 01:51:26.727629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.901 qpair failed and we were unable to recover it.
00:30:13.901 [2024-07-23 01:51:26.727803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.901 [2024-07-23 01:51:26.727974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.901 [2024-07-23 01:51:26.728000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.901 qpair failed and we were unable to recover it.
00:30:13.901 [2024-07-23 01:51:26.728161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.901 [2024-07-23 01:51:26.728294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.901 [2024-07-23 01:51:26.728320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.901 qpair failed and we were unable to recover it.
00:30:13.901 [2024-07-23 01:51:26.728490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.901 [2024-07-23 01:51:26.728651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.901 [2024-07-23 01:51:26.728679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.901 qpair failed and we were unable to recover it.
00:30:13.901 [2024-07-23 01:51:26.728850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.901 [2024-07-23 01:51:26.729015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.901 [2024-07-23 01:51:26.729041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.901 qpair failed and we were unable to recover it.
00:30:13.901 [2024-07-23 01:51:26.729203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.901 [2024-07-23 01:51:26.729369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.901 [2024-07-23 01:51:26.729394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.901 qpair failed and we were unable to recover it.
00:30:13.901 [2024-07-23 01:51:26.729558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.901 [2024-07-23 01:51:26.729694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.901 [2024-07-23 01:51:26.729722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.901 qpair failed and we were unable to recover it.
00:30:13.901 [2024-07-23 01:51:26.729885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.901 [2024-07-23 01:51:26.730047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.901 [2024-07-23 01:51:26.730073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.901 qpair failed and we were unable to recover it.
00:30:13.901 [2024-07-23 01:51:26.730253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.901 [2024-07-23 01:51:26.730424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.901 [2024-07-23 01:51:26.730449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.901 qpair failed and we were unable to recover it.
00:30:13.901 [2024-07-23 01:51:26.730620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.901 [2024-07-23 01:51:26.730756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.901 [2024-07-23 01:51:26.730783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.901 qpair failed and we were unable to recover it.
00:30:13.901 [2024-07-23 01:51:26.730975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.901 [2024-07-23 01:51:26.731141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.901 [2024-07-23 01:51:26.731167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.901 qpair failed and we were unable to recover it.
00:30:13.901 [2024-07-23 01:51:26.731346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.901 [2024-07-23 01:51:26.731589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.901 [2024-07-23 01:51:26.731621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.901 qpair failed and we were unable to recover it.
00:30:13.901 [2024-07-23 01:51:26.731868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.901 [2024-07-23 01:51:26.732062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.901 [2024-07-23 01:51:26.732089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.901 qpair failed and we were unable to recover it.
00:30:13.901 [2024-07-23 01:51:26.732247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.901 [2024-07-23 01:51:26.732487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.901 [2024-07-23 01:51:26.732513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.901 qpair failed and we were unable to recover it.
00:30:13.901 [2024-07-23 01:51:26.732708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.901 [2024-07-23 01:51:26.732848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.901 [2024-07-23 01:51:26.732874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.901 qpair failed and we were unable to recover it.
00:30:13.901 [2024-07-23 01:51:26.733066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.901 [2024-07-23 01:51:26.733257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.901 [2024-07-23 01:51:26.733283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.901 qpair failed and we were unable to recover it.
00:30:13.901 [2024-07-23 01:51:26.733452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.901 [2024-07-23 01:51:26.733636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.901 [2024-07-23 01:51:26.733662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.901 qpair failed and we were unable to recover it.
00:30:13.901 [2024-07-23 01:51:26.733828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.901 [2024-07-23 01:51:26.733990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.901 [2024-07-23 01:51:26.734016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.901 qpair failed and we were unable to recover it.
00:30:13.901 [2024-07-23 01:51:26.734183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.901 [2024-07-23 01:51:26.734356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.901 [2024-07-23 01:51:26.734383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.901 qpair failed and we were unable to recover it.
00:30:13.901 [2024-07-23 01:51:26.734553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.901 [2024-07-23 01:51:26.734686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.901 [2024-07-23 01:51:26.734713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.902 qpair failed and we were unable to recover it.
00:30:13.902 [2024-07-23 01:51:26.734873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.902 [2024-07-23 01:51:26.735029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.902 [2024-07-23 01:51:26.735056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.902 qpair failed and we were unable to recover it.
00:30:13.902 [2024-07-23 01:51:26.735222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.902 [2024-07-23 01:51:26.735395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.902 [2024-07-23 01:51:26.735421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.902 qpair failed and we were unable to recover it.
00:30:13.902 [2024-07-23 01:51:26.735557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.902 [2024-07-23 01:51:26.735725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.902 [2024-07-23 01:51:26.735753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.902 qpair failed and we were unable to recover it.
00:30:13.902 [2024-07-23 01:51:26.735911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.902 [2024-07-23 01:51:26.736078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.902 [2024-07-23 01:51:26.736104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.902 qpair failed and we were unable to recover it.
00:30:13.902 [2024-07-23 01:51:26.736269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.902 [2024-07-23 01:51:26.736429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.902 [2024-07-23 01:51:26.736455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.902 qpair failed and we were unable to recover it.
00:30:13.902 [2024-07-23 01:51:26.736606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.902 [2024-07-23 01:51:26.736758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.902 [2024-07-23 01:51:26.736784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.902 qpair failed and we were unable to recover it.
00:30:13.902 [2024-07-23 01:51:26.736950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.902 [2024-07-23 01:51:26.737109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.902 [2024-07-23 01:51:26.737135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.902 qpair failed and we were unable to recover it.
00:30:13.902 [2024-07-23 01:51:26.737275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.902 [2024-07-23 01:51:26.737473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.902 [2024-07-23 01:51:26.737499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.902 qpair failed and we were unable to recover it.
00:30:13.902 [2024-07-23 01:51:26.737641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.902 [2024-07-23 01:51:26.737805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.902 [2024-07-23 01:51:26.737831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.902 qpair failed and we were unable to recover it.
00:30:13.902 [2024-07-23 01:51:26.738020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.902 [2024-07-23 01:51:26.738180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.902 [2024-07-23 01:51:26.738206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.902 qpair failed and we were unable to recover it.
00:30:13.902 [2024-07-23 01:51:26.738402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.902 [2024-07-23 01:51:26.738570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.902 [2024-07-23 01:51:26.738597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.902 qpair failed and we were unable to recover it.
00:30:13.902 [2024-07-23 01:51:26.738788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.902 [2024-07-23 01:51:26.738959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.902 [2024-07-23 01:51:26.738985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.902 qpair failed and we were unable to recover it.
00:30:13.902 [2024-07-23 01:51:26.739180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.902 [2024-07-23 01:51:26.739338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.902 [2024-07-23 01:51:26.739364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.902 qpair failed and we were unable to recover it.
00:30:13.902 [2024-07-23 01:51:26.739525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.902 [2024-07-23 01:51:26.739720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.902 [2024-07-23 01:51:26.739747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.902 qpair failed and we were unable to recover it.
00:30:13.902 [2024-07-23 01:51:26.739886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.902 [2024-07-23 01:51:26.740066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.902 [2024-07-23 01:51:26.740092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.902 qpair failed and we were unable to recover it.
00:30:13.902 [2024-07-23 01:51:26.740292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.902 [2024-07-23 01:51:26.740458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.902 [2024-07-23 01:51:26.740484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.902 qpair failed and we were unable to recover it.
00:30:13.902 [2024-07-23 01:51:26.740659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.902 [2024-07-23 01:51:26.740858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.902 [2024-07-23 01:51:26.740884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.902 qpair failed and we were unable to recover it.
00:30:13.902 [2024-07-23 01:51:26.741074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.902 [2024-07-23 01:51:26.741239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.902 [2024-07-23 01:51:26.741265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.902 qpair failed and we were unable to recover it.
00:30:13.902 [2024-07-23 01:51:26.741407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.902 [2024-07-23 01:51:26.741543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.902 [2024-07-23 01:51:26.741569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.902 qpair failed and we were unable to recover it.
00:30:13.902 [2024-07-23 01:51:26.741745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.902 [2024-07-23 01:51:26.741912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.902 [2024-07-23 01:51:26.741938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.902 qpair failed and we were unable to recover it.
00:30:13.902 [2024-07-23 01:51:26.742070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.902 [2024-07-23 01:51:26.742233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.902 [2024-07-23 01:51:26.742259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.902 qpair failed and we were unable to recover it.
00:30:13.902 [2024-07-23 01:51:26.742418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.902 [2024-07-23 01:51:26.742594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.902 [2024-07-23 01:51:26.742629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.902 qpair failed and we were unable to recover it.
00:30:13.902 [2024-07-23 01:51:26.742801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.902 [2024-07-23 01:51:26.742990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.902 [2024-07-23 01:51:26.743015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.902 qpair failed and we were unable to recover it.
00:30:13.902 [2024-07-23 01:51:26.743179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.902 [2024-07-23 01:51:26.743309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.902 [2024-07-23 01:51:26.743334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.902 qpair failed and we were unable to recover it.
00:30:13.902 [2024-07-23 01:51:26.743509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.902 [2024-07-23 01:51:26.743677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.902 [2024-07-23 01:51:26.743705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.902 qpair failed and we were unable to recover it.
00:30:13.902 [2024-07-23 01:51:26.743850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.902 [2024-07-23 01:51:26.744043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.902 [2024-07-23 01:51:26.744068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.902 qpair failed and we were unable to recover it.
00:30:13.902 [2024-07-23 01:51:26.744206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.902 [2024-07-23 01:51:26.744343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.902 [2024-07-23 01:51:26.744369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.902 qpair failed and we were unable to recover it.
00:30:13.903 [2024-07-23 01:51:26.744496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.903 [2024-07-23 01:51:26.744635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.903 [2024-07-23 01:51:26.744661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.903 qpair failed and we were unable to recover it.
00:30:13.903 [2024-07-23 01:51:26.744694] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:30:13.903 [2024-07-23 01:51:26.744824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.903 [2024-07-23 01:51:26.744844] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:30:13.903 [2024-07-23 01:51:26.744868] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:30:13.903 [2024-07-23 01:51:26.744881] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:30:13.903 [2024-07-23 01:51:26.744951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.903 [2024-07-23 01:51:26.744977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.903 qpair failed and we were unable to recover it.
00:30:13.903 [2024-07-23 01:51:26.744938] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5
00:30:13.903 [2024-07-23 01:51:26.744995] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7
00:30:13.903 [2024-07-23 01:51:26.744998] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:30:13.903 [2024-07-23 01:51:26.745116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.903 [2024-07-23 01:51:26.744968] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6
00:30:13.903 [2024-07-23 01:51:26.745283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.903 [2024-07-23 01:51:26.745312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.903 qpair failed and we were unable to recover it.
00:30:13.903 [2024-07-23 01:51:26.745491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.903 [2024-07-23 01:51:26.745627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.903 [2024-07-23 01:51:26.745654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.903 qpair failed and we were unable to recover it.
00:30:13.903 [2024-07-23 01:51:26.745808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.903 [2024-07-23 01:51:26.745985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.903 [2024-07-23 01:51:26.746011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.903 qpair failed and we were unable to recover it.
00:30:13.903 [2024-07-23 01:51:26.746173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.903 [2024-07-23 01:51:26.746362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.903 [2024-07-23 01:51:26.746387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.903 qpair failed and we were unable to recover it.
00:30:13.903 [2024-07-23 01:51:26.746529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.903 [2024-07-23 01:51:26.746749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.903 [2024-07-23 01:51:26.746775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.903 qpair failed and we were unable to recover it.
00:30:13.903 [2024-07-23 01:51:26.746967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.903 [2024-07-23 01:51:26.747100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.903 [2024-07-23 01:51:26.747126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.903 qpair failed and we were unable to recover it.
00:30:13.903 [2024-07-23 01:51:26.747286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.903 [2024-07-23 01:51:26.747414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.903 [2024-07-23 01:51:26.747439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.903 qpair failed and we were unable to recover it.
00:30:13.903 [2024-07-23 01:51:26.747635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.903 [2024-07-23 01:51:26.747825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.903 [2024-07-23 01:51:26.747850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.903 qpair failed and we were unable to recover it.
00:30:13.903 [2024-07-23 01:51:26.748074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.903 [2024-07-23 01:51:26.748209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.903 [2024-07-23 01:51:26.748235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.903 qpair failed and we were unable to recover it.
00:30:13.903 [2024-07-23 01:51:26.748379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.903 [2024-07-23 01:51:26.748536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.903 [2024-07-23 01:51:26.748561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.903 qpair failed and we were unable to recover it.
00:30:13.903 [2024-07-23 01:51:26.748694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.903 [2024-07-23 01:51:26.748859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.903 [2024-07-23 01:51:26.748885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.903 qpair failed and we were unable to recover it.
00:30:13.903 [2024-07-23 01:51:26.749031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.903 [2024-07-23 01:51:26.749195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.903 [2024-07-23 01:51:26.749221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.903 qpair failed and we were unable to recover it.
00:30:13.903 [2024-07-23 01:51:26.749414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.903 [2024-07-23 01:51:26.749558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.903 [2024-07-23 01:51:26.749583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.903 qpair failed and we were unable to recover it.
00:30:13.903 [2024-07-23 01:51:26.749762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.903 [2024-07-23 01:51:26.749901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.903 [2024-07-23 01:51:26.749929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.903 qpair failed and we were unable to recover it.
00:30:13.903 [2024-07-23 01:51:26.750075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.903 [2024-07-23 01:51:26.750205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.903 [2024-07-23 01:51:26.750230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.903 qpair failed and we were unable to recover it. 00:30:13.903 [2024-07-23 01:51:26.750393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.903 [2024-07-23 01:51:26.750542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.903 [2024-07-23 01:51:26.750568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.903 qpair failed and we were unable to recover it. 00:30:13.903 [2024-07-23 01:51:26.750745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.903 [2024-07-23 01:51:26.750879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.903 [2024-07-23 01:51:26.750904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.903 qpair failed and we were unable to recover it. 00:30:13.903 [2024-07-23 01:51:26.751099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.903 [2024-07-23 01:51:26.751269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.903 [2024-07-23 01:51:26.751295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.903 qpair failed and we were unable to recover it. 
00:30:13.903 [2024-07-23 01:51:26.751522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.903 [2024-07-23 01:51:26.751674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.903 [2024-07-23 01:51:26.751700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.903 qpair failed and we were unable to recover it. 00:30:13.903 [2024-07-23 01:51:26.751833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.903 [2024-07-23 01:51:26.751986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.903 [2024-07-23 01:51:26.752011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.903 qpair failed and we were unable to recover it. 00:30:13.903 [2024-07-23 01:51:26.752140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.903 [2024-07-23 01:51:26.752275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.903 [2024-07-23 01:51:26.752309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.903 qpair failed and we were unable to recover it. 00:30:13.903 [2024-07-23 01:51:26.752507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.903 [2024-07-23 01:51:26.752643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.903 [2024-07-23 01:51:26.752670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.903 qpair failed and we were unable to recover it. 
00:30:13.903 [2024-07-23 01:51:26.752859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.903 [2024-07-23 01:51:26.752987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.903 [2024-07-23 01:51:26.753013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.903 qpair failed and we were unable to recover it. 00:30:13.903 [2024-07-23 01:51:26.753183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.904 [2024-07-23 01:51:26.753315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.904 [2024-07-23 01:51:26.753340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.904 qpair failed and we were unable to recover it. 00:30:13.904 [2024-07-23 01:51:26.753481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.904 [2024-07-23 01:51:26.753645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.904 [2024-07-23 01:51:26.753672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.904 qpair failed and we were unable to recover it. 00:30:13.904 [2024-07-23 01:51:26.753828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.904 [2024-07-23 01:51:26.753984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.904 [2024-07-23 01:51:26.754010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.904 qpair failed and we were unable to recover it. 
00:30:13.904 [2024-07-23 01:51:26.754167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.904 [2024-07-23 01:51:26.754317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.904 [2024-07-23 01:51:26.754343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.904 qpair failed and we were unable to recover it. 00:30:13.904 [2024-07-23 01:51:26.754476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.904 [2024-07-23 01:51:26.754634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.904 [2024-07-23 01:51:26.754659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.904 qpair failed and we were unable to recover it. 00:30:13.904 [2024-07-23 01:51:26.754836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.904 [2024-07-23 01:51:26.755063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.904 [2024-07-23 01:51:26.755090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.904 qpair failed and we were unable to recover it. 00:30:13.904 [2024-07-23 01:51:26.755281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.904 [2024-07-23 01:51:26.755444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.904 [2024-07-23 01:51:26.755480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.904 qpair failed and we were unable to recover it. 
00:30:13.904 [2024-07-23 01:51:26.755669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.904 [2024-07-23 01:51:26.755802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.904 [2024-07-23 01:51:26.755828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.904 qpair failed and we were unable to recover it. 00:30:13.904 [2024-07-23 01:51:26.756005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.904 [2024-07-23 01:51:26.756170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.904 [2024-07-23 01:51:26.756196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.904 qpair failed and we were unable to recover it. 00:30:13.904 [2024-07-23 01:51:26.756359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.904 [2024-07-23 01:51:26.756502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.904 [2024-07-23 01:51:26.756529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.904 qpair failed and we were unable to recover it. 00:30:13.904 [2024-07-23 01:51:26.756677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.904 [2024-07-23 01:51:26.756822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.904 [2024-07-23 01:51:26.756848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.904 qpair failed and we were unable to recover it. 
00:30:13.904 [2024-07-23 01:51:26.756977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.904 [2024-07-23 01:51:26.757138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.904 [2024-07-23 01:51:26.757163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.904 qpair failed and we were unable to recover it. 00:30:13.904 [2024-07-23 01:51:26.757347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.904 [2024-07-23 01:51:26.757509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.904 [2024-07-23 01:51:26.757535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.904 qpair failed and we were unable to recover it. 00:30:13.904 [2024-07-23 01:51:26.757674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.904 [2024-07-23 01:51:26.757827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.904 [2024-07-23 01:51:26.757854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.904 qpair failed and we were unable to recover it. 00:30:13.904 [2024-07-23 01:51:26.757990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.904 [2024-07-23 01:51:26.758119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.904 [2024-07-23 01:51:26.758145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.904 qpair failed and we were unable to recover it. 
00:30:13.904 [2024-07-23 01:51:26.758336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.904 [2024-07-23 01:51:26.758498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.904 [2024-07-23 01:51:26.758523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.904 qpair failed and we were unable to recover it. 00:30:13.904 [2024-07-23 01:51:26.758686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.904 [2024-07-23 01:51:26.758867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.904 [2024-07-23 01:51:26.758893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.904 qpair failed and we were unable to recover it. 00:30:13.904 [2024-07-23 01:51:26.759059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.904 [2024-07-23 01:51:26.759199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.904 [2024-07-23 01:51:26.759225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.904 qpair failed and we were unable to recover it. 00:30:13.904 [2024-07-23 01:51:26.759389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.904 [2024-07-23 01:51:26.759525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.904 [2024-07-23 01:51:26.759555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.904 qpair failed and we were unable to recover it. 
00:30:13.904 [2024-07-23 01:51:26.759700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.904 [2024-07-23 01:51:26.759881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.904 [2024-07-23 01:51:26.759906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.904 qpair failed and we were unable to recover it. 00:30:13.904 [2024-07-23 01:51:26.760099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.904 [2024-07-23 01:51:26.760231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.904 [2024-07-23 01:51:26.760256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.904 qpair failed and we were unable to recover it. 00:30:13.904 [2024-07-23 01:51:26.760495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.904 [2024-07-23 01:51:26.760688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.904 [2024-07-23 01:51:26.760714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.904 qpair failed and we were unable to recover it. 00:30:13.904 [2024-07-23 01:51:26.760901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.904 [2024-07-23 01:51:26.761063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.904 [2024-07-23 01:51:26.761088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.904 qpair failed and we were unable to recover it. 
00:30:13.904 [2024-07-23 01:51:26.761219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.904 [2024-07-23 01:51:26.761379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.904 [2024-07-23 01:51:26.761404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.904 qpair failed and we were unable to recover it. 00:30:13.904 [2024-07-23 01:51:26.761585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.904 [2024-07-23 01:51:26.761756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.904 [2024-07-23 01:51:26.761782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.904 qpair failed and we were unable to recover it. 00:30:13.904 [2024-07-23 01:51:26.761958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.904 [2024-07-23 01:51:26.762115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.904 [2024-07-23 01:51:26.762140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.904 qpair failed and we were unable to recover it. 00:30:13.904 [2024-07-23 01:51:26.762307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.904 [2024-07-23 01:51:26.762468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.904 [2024-07-23 01:51:26.762493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.904 qpair failed and we were unable to recover it. 
00:30:13.904 [2024-07-23 01:51:26.762645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.904 [2024-07-23 01:51:26.762824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.904 [2024-07-23 01:51:26.762849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.904 qpair failed and we were unable to recover it. 00:30:13.905 [2024-07-23 01:51:26.763010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.905 [2024-07-23 01:51:26.763176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.905 [2024-07-23 01:51:26.763201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.905 qpair failed and we were unable to recover it. 00:30:13.905 [2024-07-23 01:51:26.763354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.905 [2024-07-23 01:51:26.763513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.905 [2024-07-23 01:51:26.763538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.905 qpair failed and we were unable to recover it. 00:30:13.905 [2024-07-23 01:51:26.763686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.905 [2024-07-23 01:51:26.763844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.905 [2024-07-23 01:51:26.763869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.905 qpair failed and we were unable to recover it. 
00:30:13.905 [2024-07-23 01:51:26.764023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.905 [2024-07-23 01:51:26.764214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.905 [2024-07-23 01:51:26.764240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.905 qpair failed and we were unable to recover it. 00:30:13.905 [2024-07-23 01:51:26.764469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.905 [2024-07-23 01:51:26.764610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.905 [2024-07-23 01:51:26.764676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.905 qpair failed and we were unable to recover it. 00:30:13.905 [2024-07-23 01:51:26.764830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.905 [2024-07-23 01:51:26.764970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.905 [2024-07-23 01:51:26.764995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.905 qpair failed and we were unable to recover it. 00:30:13.905 [2024-07-23 01:51:26.765145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.905 [2024-07-23 01:51:26.765290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.905 [2024-07-23 01:51:26.765316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.905 qpair failed and we were unable to recover it. 
00:30:13.905 [2024-07-23 01:51:26.765506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.905 [2024-07-23 01:51:26.765637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.905 [2024-07-23 01:51:26.765663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.905 qpair failed and we were unable to recover it. 00:30:13.905 [2024-07-23 01:51:26.765829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.905 [2024-07-23 01:51:26.765993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.905 [2024-07-23 01:51:26.766018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.905 qpair failed and we were unable to recover it. 00:30:13.905 [2024-07-23 01:51:26.766178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.905 [2024-07-23 01:51:26.766307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.905 [2024-07-23 01:51:26.766332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.905 qpair failed and we were unable to recover it. 00:30:13.905 [2024-07-23 01:51:26.766506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.905 [2024-07-23 01:51:26.766704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.905 [2024-07-23 01:51:26.766730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.905 qpair failed and we were unable to recover it. 
00:30:13.905 [2024-07-23 01:51:26.766903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.905 [2024-07-23 01:51:26.767043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.905 [2024-07-23 01:51:26.767069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.905 qpair failed and we were unable to recover it. 00:30:13.905 [2024-07-23 01:51:26.767206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.905 [2024-07-23 01:51:26.767349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.905 [2024-07-23 01:51:26.767375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.905 qpair failed and we were unable to recover it. 00:30:13.905 [2024-07-23 01:51:26.767521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.905 [2024-07-23 01:51:26.767699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.905 [2024-07-23 01:51:26.767725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.905 qpair failed and we were unable to recover it. 00:30:13.905 [2024-07-23 01:51:26.767859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.905 [2024-07-23 01:51:26.768049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.905 [2024-07-23 01:51:26.768074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.905 qpair failed and we were unable to recover it. 
00:30:13.905 [2024-07-23 01:51:26.768217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.905 [2024-07-23 01:51:26.768352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.905 [2024-07-23 01:51:26.768377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.905 qpair failed and we were unable to recover it. 00:30:13.905 [2024-07-23 01:51:26.768539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.905 [2024-07-23 01:51:26.768701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.905 [2024-07-23 01:51:26.768727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.905 qpair failed and we were unable to recover it. 00:30:13.905 [2024-07-23 01:51:26.768859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.905 [2024-07-23 01:51:26.769037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.905 [2024-07-23 01:51:26.769062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.905 qpair failed and we were unable to recover it. 00:30:13.905 [2024-07-23 01:51:26.769193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.905 [2024-07-23 01:51:26.769335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.905 [2024-07-23 01:51:26.769360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.905 qpair failed and we were unable to recover it. 
00:30:13.905 [2024-07-23 01:51:26.769526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.905 [2024-07-23 01:51:26.769747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.905 [2024-07-23 01:51:26.769773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.905 qpair failed and we were unable to recover it. 00:30:13.905 [2024-07-23 01:51:26.769942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.905 [2024-07-23 01:51:26.770084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.905 [2024-07-23 01:51:26.770109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.905 qpair failed and we were unable to recover it. 00:30:13.905 [2024-07-23 01:51:26.770246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.905 [2024-07-23 01:51:26.770389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.905 [2024-07-23 01:51:26.770414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.905 qpair failed and we were unable to recover it. 00:30:13.905 [2024-07-23 01:51:26.770580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.905 [2024-07-23 01:51:26.770754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.905 [2024-07-23 01:51:26.770781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.905 qpair failed and we were unable to recover it. 
00:30:13.905 [2024-07-23 01:51:26.770915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.905 [2024-07-23 01:51:26.771050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.905 [2024-07-23 01:51:26.771075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.905 qpair failed and we were unable to recover it. 00:30:13.905 [2024-07-23 01:51:26.771210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.905 [2024-07-23 01:51:26.771371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.905 [2024-07-23 01:51:26.771396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.905 qpair failed and we were unable to recover it. 00:30:13.905 [2024-07-23 01:51:26.771534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.905 [2024-07-23 01:51:26.771678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.905 [2024-07-23 01:51:26.771705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.905 qpair failed and we were unable to recover it. 00:30:13.905 [2024-07-23 01:51:26.771895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.905 [2024-07-23 01:51:26.772053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.905 [2024-07-23 01:51:26.772078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.905 qpair failed and we were unable to recover it. 
00:30:13.905 [2024-07-23 01:51:26.772244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.905 [2024-07-23 01:51:26.772379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.905 [2024-07-23 01:51:26.772404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.906 qpair failed and we were unable to recover it. 00:30:13.906 [2024-07-23 01:51:26.772572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.906 [2024-07-23 01:51:26.772746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.906 [2024-07-23 01:51:26.772773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.906 qpair failed and we were unable to recover it. 00:30:13.906 [2024-07-23 01:51:26.772921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.906 [2024-07-23 01:51:26.773080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.906 [2024-07-23 01:51:26.773106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.906 qpair failed and we were unable to recover it. 00:30:13.906 [2024-07-23 01:51:26.773249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.906 [2024-07-23 01:51:26.773389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.906 [2024-07-23 01:51:26.773416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.906 qpair failed and we were unable to recover it. 
00:30:13.909 [2024-07-23 01:51:26.802323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.909 [2024-07-23 01:51:26.802487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.909 [2024-07-23 01:51:26.802512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.909 qpair failed and we were unable to recover it. 00:30:13.909 [2024-07-23 01:51:26.802663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.909 [2024-07-23 01:51:26.802807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.909 [2024-07-23 01:51:26.802833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.909 qpair failed and we were unable to recover it. 00:30:13.909 [2024-07-23 01:51:26.802998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.909 [2024-07-23 01:51:26.803142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.909 [2024-07-23 01:51:26.803169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.909 qpair failed and we were unable to recover it. 00:30:13.909 [2024-07-23 01:51:26.803314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.909 [2024-07-23 01:51:26.803471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.909 [2024-07-23 01:51:26.803496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.909 qpair failed and we were unable to recover it. 
00:30:13.909 [2024-07-23 01:51:26.803650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.909 [2024-07-23 01:51:26.803777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.909 [2024-07-23 01:51:26.803802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.909 qpair failed and we were unable to recover it. 00:30:13.909 [2024-07-23 01:51:26.803962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.909 [2024-07-23 01:51:26.804123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.909 [2024-07-23 01:51:26.804148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.909 qpair failed and we were unable to recover it. 00:30:13.909 [2024-07-23 01:51:26.804287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.909 [2024-07-23 01:51:26.804555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.909 [2024-07-23 01:51:26.804580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.909 qpair failed and we were unable to recover it. 00:30:13.909 [2024-07-23 01:51:26.804750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.909 [2024-07-23 01:51:26.804919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.909 [2024-07-23 01:51:26.804945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.909 qpair failed and we were unable to recover it. 
00:30:13.909 [2024-07-23 01:51:26.805122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.909 [2024-07-23 01:51:26.805260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.909 [2024-07-23 01:51:26.805286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.909 qpair failed and we were unable to recover it. 00:30:13.909 [2024-07-23 01:51:26.805453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.909 [2024-07-23 01:51:26.805626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.909 [2024-07-23 01:51:26.805652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.909 qpair failed and we were unable to recover it. 00:30:13.909 [2024-07-23 01:51:26.805814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.909 [2024-07-23 01:51:26.805961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.909 [2024-07-23 01:51:26.805987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.909 qpair failed and we were unable to recover it. 00:30:13.909 [2024-07-23 01:51:26.806145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.909 [2024-07-23 01:51:26.806381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.909 [2024-07-23 01:51:26.806406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.909 qpair failed and we were unable to recover it. 
00:30:13.909 [2024-07-23 01:51:26.806575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.909 [2024-07-23 01:51:26.806743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.909 [2024-07-23 01:51:26.806768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.909 qpair failed and we were unable to recover it. 00:30:13.909 [2024-07-23 01:51:26.806930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.909 [2024-07-23 01:51:26.807082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.909 [2024-07-23 01:51:26.807107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.909 qpair failed and we were unable to recover it. 00:30:13.909 [2024-07-23 01:51:26.807237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.909 [2024-07-23 01:51:26.807493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.909 [2024-07-23 01:51:26.807518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.909 qpair failed and we were unable to recover it. 00:30:13.909 [2024-07-23 01:51:26.807673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.909 [2024-07-23 01:51:26.807843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.909 [2024-07-23 01:51:26.807868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.909 qpair failed and we were unable to recover it. 
00:30:13.909 [2024-07-23 01:51:26.808044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.909 [2024-07-23 01:51:26.808220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.909 [2024-07-23 01:51:26.808245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.909 qpair failed and we were unable to recover it. 00:30:13.909 [2024-07-23 01:51:26.808406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.909 [2024-07-23 01:51:26.808576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.909 [2024-07-23 01:51:26.808619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.909 qpair failed and we were unable to recover it. 00:30:13.909 [2024-07-23 01:51:26.808766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.909 [2024-07-23 01:51:26.808916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.909 [2024-07-23 01:51:26.808945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.909 qpair failed and we were unable to recover it. 00:30:13.909 [2024-07-23 01:51:26.809117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.909 [2024-07-23 01:51:26.809279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.909 [2024-07-23 01:51:26.809304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.910 qpair failed and we were unable to recover it. 
00:30:13.910 [2024-07-23 01:51:26.809439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.910 [2024-07-23 01:51:26.809571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.910 [2024-07-23 01:51:26.809596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.910 qpair failed and we were unable to recover it. 00:30:13.910 [2024-07-23 01:51:26.809816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.910 [2024-07-23 01:51:26.810091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.910 [2024-07-23 01:51:26.810117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.910 qpair failed and we were unable to recover it. 00:30:13.910 [2024-07-23 01:51:26.810266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.910 [2024-07-23 01:51:26.810400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.910 [2024-07-23 01:51:26.810425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.910 qpair failed and we were unable to recover it. 00:30:13.910 [2024-07-23 01:51:26.810573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.910 [2024-07-23 01:51:26.810727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.910 [2024-07-23 01:51:26.810754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.910 qpair failed and we were unable to recover it. 
00:30:13.910 [2024-07-23 01:51:26.810926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.910 [2024-07-23 01:51:26.811057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.910 [2024-07-23 01:51:26.811082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.910 qpair failed and we were unable to recover it. 00:30:13.910 [2024-07-23 01:51:26.811227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.910 [2024-07-23 01:51:26.811364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.910 [2024-07-23 01:51:26.811390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.910 qpair failed and we were unable to recover it. 00:30:13.910 [2024-07-23 01:51:26.811560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.910 [2024-07-23 01:51:26.811708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.910 [2024-07-23 01:51:26.811734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.910 qpair failed and we were unable to recover it. 00:30:13.910 [2024-07-23 01:51:26.811942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.910 [2024-07-23 01:51:26.812089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.910 [2024-07-23 01:51:26.812114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.910 qpair failed and we were unable to recover it. 
00:30:13.910 [2024-07-23 01:51:26.812260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.910 [2024-07-23 01:51:26.812410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.910 [2024-07-23 01:51:26.812436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.910 qpair failed and we were unable to recover it. 00:30:13.910 [2024-07-23 01:51:26.812586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.910 [2024-07-23 01:51:26.812766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.910 [2024-07-23 01:51:26.812792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.910 qpair failed and we were unable to recover it. 00:30:13.910 [2024-07-23 01:51:26.812930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.910 [2024-07-23 01:51:26.813063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.910 [2024-07-23 01:51:26.813089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.910 qpair failed and we were unable to recover it. 00:30:13.910 [2024-07-23 01:51:26.813246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.910 [2024-07-23 01:51:26.813412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.910 [2024-07-23 01:51:26.813438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.910 qpair failed and we were unable to recover it. 
00:30:13.910 [2024-07-23 01:51:26.813576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.910 [2024-07-23 01:51:26.813746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.910 [2024-07-23 01:51:26.813772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.910 qpair failed and we were unable to recover it. 00:30:13.910 [2024-07-23 01:51:26.813914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.910 [2024-07-23 01:51:26.814045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.910 [2024-07-23 01:51:26.814070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.910 qpair failed and we were unable to recover it. 00:30:13.910 [2024-07-23 01:51:26.814205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.910 [2024-07-23 01:51:26.814334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.910 [2024-07-23 01:51:26.814359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.910 qpair failed and we were unable to recover it. 00:30:13.910 [2024-07-23 01:51:26.814500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.910 [2024-07-23 01:51:26.814664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.910 [2024-07-23 01:51:26.814690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.910 qpair failed and we were unable to recover it. 
00:30:13.910 [2024-07-23 01:51:26.814830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.910 [2024-07-23 01:51:26.814977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.910 [2024-07-23 01:51:26.815002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.910 qpair failed and we were unable to recover it. 00:30:13.910 [2024-07-23 01:51:26.815194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.910 [2024-07-23 01:51:26.815325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.910 [2024-07-23 01:51:26.815350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.910 qpair failed and we were unable to recover it. 00:30:13.910 [2024-07-23 01:51:26.815481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.910 [2024-07-23 01:51:26.815639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.910 [2024-07-23 01:51:26.815665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.910 qpair failed and we were unable to recover it. 00:30:13.910 [2024-07-23 01:51:26.815819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.910 [2024-07-23 01:51:26.815955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.910 [2024-07-23 01:51:26.815980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.910 qpair failed and we were unable to recover it. 
00:30:13.910 [2024-07-23 01:51:26.816135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.910 [2024-07-23 01:51:26.816279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.910 [2024-07-23 01:51:26.816304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.910 qpair failed and we were unable to recover it. 00:30:13.910 [2024-07-23 01:51:26.816469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.910 [2024-07-23 01:51:26.816653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.910 [2024-07-23 01:51:26.816679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.910 qpair failed and we were unable to recover it. 00:30:13.910 [2024-07-23 01:51:26.816844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.910 [2024-07-23 01:51:26.816976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.910 [2024-07-23 01:51:26.817001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.911 qpair failed and we were unable to recover it. 00:30:13.911 [2024-07-23 01:51:26.817149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.911 [2024-07-23 01:51:26.817284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.911 [2024-07-23 01:51:26.817309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.911 qpair failed and we were unable to recover it. 
00:30:13.911 [2024-07-23 01:51:26.817500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.911 [2024-07-23 01:51:26.817647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.911 [2024-07-23 01:51:26.817674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.911 qpair failed and we were unable to recover it. 00:30:13.911 [2024-07-23 01:51:26.817810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.911 [2024-07-23 01:51:26.818047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.911 [2024-07-23 01:51:26.818073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.911 qpair failed and we were unable to recover it. 00:30:13.911 [2024-07-23 01:51:26.818262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.911 [2024-07-23 01:51:26.818432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.911 [2024-07-23 01:51:26.818457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.911 qpair failed and we were unable to recover it. 00:30:13.911 [2024-07-23 01:51:26.818587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.911 [2024-07-23 01:51:26.818728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.911 [2024-07-23 01:51:26.818754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.911 qpair failed and we were unable to recover it. 
00:30:13.911 [2024-07-23 01:51:26.818904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.911 [2024-07-23 01:51:26.819147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.911 [2024-07-23 01:51:26.819172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.911 qpair failed and we were unable to recover it. 00:30:13.911 [2024-07-23 01:51:26.819323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.911 [2024-07-23 01:51:26.819458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.911 [2024-07-23 01:51:26.819486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.911 qpair failed and we were unable to recover it. 00:30:13.911 [2024-07-23 01:51:26.819622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.911 [2024-07-23 01:51:26.819810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.911 [2024-07-23 01:51:26.819835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.911 qpair failed and we were unable to recover it. 00:30:13.911 [2024-07-23 01:51:26.820003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.911 [2024-07-23 01:51:26.820178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.911 [2024-07-23 01:51:26.820204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.911 qpair failed and we were unable to recover it. 
00:30:13.911 [2024-07-23 01:51:26.820369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.911 [2024-07-23 01:51:26.820507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.911 [2024-07-23 01:51:26.820533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.911 qpair failed and we were unable to recover it. 00:30:13.911 [2024-07-23 01:51:26.820688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.911 [2024-07-23 01:51:26.820824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.911 [2024-07-23 01:51:26.820851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.911 qpair failed and we were unable to recover it. 00:30:13.911 [2024-07-23 01:51:26.821022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.911 [2024-07-23 01:51:26.821193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.911 [2024-07-23 01:51:26.821229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.911 qpair failed and we were unable to recover it. 00:30:13.911 [2024-07-23 01:51:26.821374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.911 [2024-07-23 01:51:26.821526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.911 [2024-07-23 01:51:26.821563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.911 qpair failed and we were unable to recover it. 
00:30:13.911 [2024-07-23 01:51:26.821733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.911 [2024-07-23 01:51:26.821874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.911 [2024-07-23 01:51:26.821900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.911 qpair failed and we were unable to recover it. 00:30:13.911 [2024-07-23 01:51:26.822102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.911 [2024-07-23 01:51:26.822243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.911 [2024-07-23 01:51:26.822268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.911 qpair failed and we were unable to recover it. 00:30:13.911 [2024-07-23 01:51:26.822445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.911 [2024-07-23 01:51:26.822645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.911 [2024-07-23 01:51:26.822670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.911 qpair failed and we were unable to recover it. 00:30:13.911 [2024-07-23 01:51:26.822807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.911 [2024-07-23 01:51:26.822944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.911 [2024-07-23 01:51:26.822970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.911 qpair failed and we were unable to recover it. 
00:30:13.914 [2024-07-23 01:51:26.851629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.914 [2024-07-23 01:51:26.851771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.914 [2024-07-23 01:51:26.851797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.914 qpair failed and we were unable to recover it. 00:30:13.914 [2024-07-23 01:51:26.851956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.914 [2024-07-23 01:51:26.852093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.914 [2024-07-23 01:51:26.852119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.914 qpair failed and we were unable to recover it. 00:30:13.914 [2024-07-23 01:51:26.852250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.914 [2024-07-23 01:51:26.852432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.914 [2024-07-23 01:51:26.852459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.914 qpair failed and we were unable to recover it. 00:30:13.914 [2024-07-23 01:51:26.852640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.914 [2024-07-23 01:51:26.852797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.914 [2024-07-23 01:51:26.852822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.914 qpair failed and we were unable to recover it. 
00:30:13.914 [2024-07-23 01:51:26.852964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.914 [2024-07-23 01:51:26.853132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.914 [2024-07-23 01:51:26.853157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.914 qpair failed and we were unable to recover it. 00:30:13.914 [2024-07-23 01:51:26.853317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.914 [2024-07-23 01:51:26.853493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.914 [2024-07-23 01:51:26.853519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.914 qpair failed and we were unable to recover it. 00:30:13.914 [2024-07-23 01:51:26.853701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.914 [2024-07-23 01:51:26.853838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.914 [2024-07-23 01:51:26.853863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.914 qpair failed and we were unable to recover it. 00:30:13.914 [2024-07-23 01:51:26.854002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.914 [2024-07-23 01:51:26.854163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.914 [2024-07-23 01:51:26.854199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.914 qpair failed and we were unable to recover it. 
00:30:13.914 [2024-07-23 01:51:26.854363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.914 [2024-07-23 01:51:26.854529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.914 [2024-07-23 01:51:26.854557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.914 qpair failed and we were unable to recover it. 00:30:13.914 [2024-07-23 01:51:26.854710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.914 [2024-07-23 01:51:26.854851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.914 [2024-07-23 01:51:26.854876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.915 qpair failed and we were unable to recover it. 00:30:13.915 [2024-07-23 01:51:26.855080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.915 [2024-07-23 01:51:26.855261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.915 [2024-07-23 01:51:26.855286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.915 qpair failed and we were unable to recover it. 00:30:13.915 [2024-07-23 01:51:26.855411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.915 [2024-07-23 01:51:26.855573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.915 [2024-07-23 01:51:26.855609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.915 qpair failed and we were unable to recover it. 
00:30:13.915 [2024-07-23 01:51:26.855756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.915 [2024-07-23 01:51:26.855930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.915 [2024-07-23 01:51:26.855957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.915 qpair failed and we were unable to recover it. 00:30:13.915 [2024-07-23 01:51:26.856124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.915 [2024-07-23 01:51:26.856271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.915 [2024-07-23 01:51:26.856297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.915 qpair failed and we were unable to recover it. 00:30:13.915 [2024-07-23 01:51:26.856498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.915 [2024-07-23 01:51:26.856666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.915 [2024-07-23 01:51:26.856692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.915 qpair failed and we were unable to recover it. 00:30:13.915 [2024-07-23 01:51:26.856841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.915 [2024-07-23 01:51:26.856983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.915 [2024-07-23 01:51:26.857009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.915 qpair failed and we were unable to recover it. 
00:30:13.915 [2024-07-23 01:51:26.857199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.915 [2024-07-23 01:51:26.857333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.915 [2024-07-23 01:51:26.857358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.915 qpair failed and we were unable to recover it. 00:30:13.915 [2024-07-23 01:51:26.857513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.915 [2024-07-23 01:51:26.857644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.915 [2024-07-23 01:51:26.857674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.915 qpair failed and we were unable to recover it. 00:30:13.915 [2024-07-23 01:51:26.857823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.915 [2024-07-23 01:51:26.857974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.915 [2024-07-23 01:51:26.858000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.915 qpair failed and we were unable to recover it. 00:30:13.915 [2024-07-23 01:51:26.858177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.915 [2024-07-23 01:51:26.858311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.915 [2024-07-23 01:51:26.858337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.915 qpair failed and we were unable to recover it. 
00:30:13.915 [2024-07-23 01:51:26.858467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.915 [2024-07-23 01:51:26.858625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.915 [2024-07-23 01:51:26.858651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.915 qpair failed and we were unable to recover it. 00:30:13.915 [2024-07-23 01:51:26.858788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.915 [2024-07-23 01:51:26.858925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.915 [2024-07-23 01:51:26.858951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.915 qpair failed and we were unable to recover it. 00:30:13.915 [2024-07-23 01:51:26.859083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.915 [2024-07-23 01:51:26.859223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.915 [2024-07-23 01:51:26.859250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.915 qpair failed and we were unable to recover it. 00:30:13.915 [2024-07-23 01:51:26.859377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.915 [2024-07-23 01:51:26.859535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.915 [2024-07-23 01:51:26.859560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.915 qpair failed and we were unable to recover it. 
00:30:13.915 [2024-07-23 01:51:26.859717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.915 [2024-07-23 01:51:26.859884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.915 [2024-07-23 01:51:26.859909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.915 qpair failed and we were unable to recover it. 00:30:13.915 [2024-07-23 01:51:26.860072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.915 [2024-07-23 01:51:26.860208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.915 [2024-07-23 01:51:26.860235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.915 qpair failed and we were unable to recover it. 00:30:13.915 [2024-07-23 01:51:26.860415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.915 [2024-07-23 01:51:26.860592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.915 [2024-07-23 01:51:26.860623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.915 qpair failed and we were unable to recover it. 00:30:13.915 [2024-07-23 01:51:26.860773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.915 [2024-07-23 01:51:26.860939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.915 [2024-07-23 01:51:26.860964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.915 qpair failed and we were unable to recover it. 
00:30:13.915 [2024-07-23 01:51:26.861095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.915 [2024-07-23 01:51:26.861260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.915 [2024-07-23 01:51:26.861285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.915 qpair failed and we were unable to recover it. 00:30:13.915 [2024-07-23 01:51:26.861447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.915 [2024-07-23 01:51:26.861644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.915 [2024-07-23 01:51:26.861669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.915 qpair failed and we were unable to recover it. 00:30:13.915 [2024-07-23 01:51:26.861837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.915 [2024-07-23 01:51:26.861976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.915 [2024-07-23 01:51:26.862001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.915 qpair failed and we were unable to recover it. 00:30:13.915 [2024-07-23 01:51:26.862130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.915 [2024-07-23 01:51:26.862288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.915 [2024-07-23 01:51:26.862313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.915 qpair failed and we were unable to recover it. 
00:30:13.915 [2024-07-23 01:51:26.862456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.915 [2024-07-23 01:51:26.862599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.915 [2024-07-23 01:51:26.862636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.915 qpair failed and we were unable to recover it. 00:30:13.915 [2024-07-23 01:51:26.862764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.915 [2024-07-23 01:51:26.862922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.916 [2024-07-23 01:51:26.862947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.916 qpair failed and we were unable to recover it. 00:30:13.916 [2024-07-23 01:51:26.863073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.916 [2024-07-23 01:51:26.863260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.916 [2024-07-23 01:51:26.863285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.916 qpair failed and we were unable to recover it. 00:30:13.916 [2024-07-23 01:51:26.863414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.916 [2024-07-23 01:51:26.863540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.916 [2024-07-23 01:51:26.863566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.916 qpair failed and we were unable to recover it. 
00:30:13.916 [2024-07-23 01:51:26.863747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.916 [2024-07-23 01:51:26.863877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.916 [2024-07-23 01:51:26.863912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.916 qpair failed and we were unable to recover it. 00:30:13.916 [2024-07-23 01:51:26.864042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.916 [2024-07-23 01:51:26.864182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.916 [2024-07-23 01:51:26.864207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.916 qpair failed and we were unable to recover it. 00:30:13.916 [2024-07-23 01:51:26.864341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.916 [2024-07-23 01:51:26.864470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.916 [2024-07-23 01:51:26.864495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.916 qpair failed and we were unable to recover it. 00:30:13.916 [2024-07-23 01:51:26.864675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.916 [2024-07-23 01:51:26.864811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.916 [2024-07-23 01:51:26.864836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.916 qpair failed and we were unable to recover it. 
00:30:13.916 [2024-07-23 01:51:26.864985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.916 [2024-07-23 01:51:26.865126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.916 [2024-07-23 01:51:26.865151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.916 qpair failed and we were unable to recover it. 00:30:13.916 [2024-07-23 01:51:26.865311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.916 [2024-07-23 01:51:26.865441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.916 [2024-07-23 01:51:26.865466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.916 qpair failed and we were unable to recover it. 00:30:13.916 [2024-07-23 01:51:26.865626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.916 [2024-07-23 01:51:26.865762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.916 [2024-07-23 01:51:26.865787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.916 qpair failed and we were unable to recover it. 00:30:13.916 [2024-07-23 01:51:26.865928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.916 [2024-07-23 01:51:26.866089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.916 [2024-07-23 01:51:26.866114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.916 qpair failed and we were unable to recover it. 
00:30:13.916 [2024-07-23 01:51:26.866241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.916 [2024-07-23 01:51:26.866375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.916 [2024-07-23 01:51:26.866400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.916 qpair failed and we were unable to recover it. 00:30:13.916 [2024-07-23 01:51:26.866577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.916 [2024-07-23 01:51:26.866729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.916 [2024-07-23 01:51:26.866754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.916 qpair failed and we were unable to recover it. 00:30:13.916 [2024-07-23 01:51:26.866890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.916 [2024-07-23 01:51:26.867057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.916 [2024-07-23 01:51:26.867083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.916 qpair failed and we were unable to recover it. 00:30:13.916 [2024-07-23 01:51:26.867258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.916 [2024-07-23 01:51:26.867415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.916 [2024-07-23 01:51:26.867441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.916 qpair failed and we were unable to recover it. 
00:30:13.916 [2024-07-23 01:51:26.867571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.916 [2024-07-23 01:51:26.867768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.916 [2024-07-23 01:51:26.867796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.916 qpair failed and we were unable to recover it. 00:30:13.916 [2024-07-23 01:51:26.867950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.916 [2024-07-23 01:51:26.868127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.916 [2024-07-23 01:51:26.868153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.916 qpair failed and we were unable to recover it. 00:30:13.916 [2024-07-23 01:51:26.868314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.916 [2024-07-23 01:51:26.868446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.916 [2024-07-23 01:51:26.868471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.916 qpair failed and we were unable to recover it. 00:30:13.916 [2024-07-23 01:51:26.868632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.916 [2024-07-23 01:51:26.868809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.916 [2024-07-23 01:51:26.868834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.916 qpair failed and we were unable to recover it. 
00:30:13.916 [2024-07-23 01:51:26.868981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.916 [2024-07-23 01:51:26.869160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.916 [2024-07-23 01:51:26.869185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.916 qpair failed and we were unable to recover it. 00:30:13.916 [2024-07-23 01:51:26.869320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.916 [2024-07-23 01:51:26.869508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.916 [2024-07-23 01:51:26.869534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.916 qpair failed and we were unable to recover it. 00:30:13.916 [2024-07-23 01:51:26.869711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.916 [2024-07-23 01:51:26.869850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.916 [2024-07-23 01:51:26.869876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.916 qpair failed and we were unable to recover it. 00:30:13.916 [2024-07-23 01:51:26.870021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.916 [2024-07-23 01:51:26.870177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.916 [2024-07-23 01:51:26.870203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.916 qpair failed and we were unable to recover it. 
00:30:13.916 [2024-07-23 01:51:26.870334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.916 [2024-07-23 01:51:26.870493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.916 [2024-07-23 01:51:26.870518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.916 qpair failed and we were unable to recover it. 00:30:13.916 [2024-07-23 01:51:26.870660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.916 [2024-07-23 01:51:26.870800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.916 [2024-07-23 01:51:26.870826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.916 qpair failed and we were unable to recover it. 00:30:13.916 [2024-07-23 01:51:26.871011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.916 [2024-07-23 01:51:26.871152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.916 [2024-07-23 01:51:26.871177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.916 qpair failed and we were unable to recover it. 00:30:13.916 [2024-07-23 01:51:26.871303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.916 [2024-07-23 01:51:26.871489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.916 [2024-07-23 01:51:26.871515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.916 qpair failed and we were unable to recover it. 
00:30:13.916 [2024-07-23 01:51:26.871646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.916 [2024-07-23 01:51:26.871779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.916 [2024-07-23 01:51:26.871806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.916 qpair failed and we were unable to recover it. 00:30:13.916 [2024-07-23 01:51:26.871971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.917 [2024-07-23 01:51:26.872131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.917 [2024-07-23 01:51:26.872156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.917 qpair failed and we were unable to recover it. 00:30:13.917 [2024-07-23 01:51:26.872322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.917 [2024-07-23 01:51:26.872455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.917 [2024-07-23 01:51:26.872481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.917 qpair failed and we were unable to recover it. 00:30:13.917 [2024-07-23 01:51:26.872650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.917 [2024-07-23 01:51:26.872800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.917 [2024-07-23 01:51:26.872827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.917 qpair failed and we were unable to recover it. 
00:30:13.917 [2024-07-23 01:51:26.873023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.917 [2024-07-23 01:51:26.873186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.917 [2024-07-23 01:51:26.873211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.917 qpair failed and we were unable to recover it.
00:30:13.917 [2024-07-23 01:51:26.873372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.917 [2024-07-23 01:51:26.873511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.917 [2024-07-23 01:51:26.873536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.917 qpair failed and we were unable to recover it.
00:30:13.917 [2024-07-23 01:51:26.873701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.917 [2024-07-23 01:51:26.873840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.917 [2024-07-23 01:51:26.873864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.917 qpair failed and we were unable to recover it.
00:30:13.917 [2024-07-23 01:51:26.874005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.917 [2024-07-23 01:51:26.874163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.917 [2024-07-23 01:51:26.874188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.917 qpair failed and we were unable to recover it.
00:30:13.917 [2024-07-23 01:51:26.874327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.917 [2024-07-23 01:51:26.874464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.917 [2024-07-23 01:51:26.874494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.917 qpair failed and we were unable to recover it.
00:30:13.917 [2024-07-23 01:51:26.874634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.917 [2024-07-23 01:51:26.874775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.917 [2024-07-23 01:51:26.874800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.917 qpair failed and we were unable to recover it.
00:30:13.917 [2024-07-23 01:51:26.874970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.917 [2024-07-23 01:51:26.875116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.917 [2024-07-23 01:51:26.875141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.917 qpair failed and we were unable to recover it.
00:30:13.917 [2024-07-23 01:51:26.875305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.917 [2024-07-23 01:51:26.875441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.917 [2024-07-23 01:51:26.875466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.917 qpair failed and we were unable to recover it.
00:30:13.917 [2024-07-23 01:51:26.875611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.917 [2024-07-23 01:51:26.875759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.917 [2024-07-23 01:51:26.875784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.917 qpair failed and we were unable to recover it.
00:30:13.917 [2024-07-23 01:51:26.875951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.917 [2024-07-23 01:51:26.876094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.917 [2024-07-23 01:51:26.876119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.917 qpair failed and we were unable to recover it.
00:30:13.917 [2024-07-23 01:51:26.876246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.917 [2024-07-23 01:51:26.876406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.917 [2024-07-23 01:51:26.876431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.917 qpair failed and we were unable to recover it.
00:30:13.917 [2024-07-23 01:51:26.876594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.917 [2024-07-23 01:51:26.876746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.917 [2024-07-23 01:51:26.876771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.917 qpair failed and we were unable to recover it.
00:30:13.917 [2024-07-23 01:51:26.876924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.917 [2024-07-23 01:51:26.877085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.917 [2024-07-23 01:51:26.877109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.917 qpair failed and we were unable to recover it.
00:30:13.917 [2024-07-23 01:51:26.877285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.917 [2024-07-23 01:51:26.877428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.917 [2024-07-23 01:51:26.877454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.917 qpair failed and we were unable to recover it.
00:30:13.917 [2024-07-23 01:51:26.877631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.917 [2024-07-23 01:51:26.877770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.917 [2024-07-23 01:51:26.877795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.917 qpair failed and we were unable to recover it.
00:30:13.917 [2024-07-23 01:51:26.877936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.917 [2024-07-23 01:51:26.878068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.917 [2024-07-23 01:51:26.878093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.917 qpair failed and we were unable to recover it.
00:30:13.917 [2024-07-23 01:51:26.878256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.917 [2024-07-23 01:51:26.878403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.917 [2024-07-23 01:51:26.878429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.917 qpair failed and we were unable to recover it.
00:30:13.917 [2024-07-23 01:51:26.878585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.917 [2024-07-23 01:51:26.878764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.917 [2024-07-23 01:51:26.878789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.917 qpair failed and we were unable to recover it.
00:30:13.917 [2024-07-23 01:51:26.878926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.917 [2024-07-23 01:51:26.879060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.917 [2024-07-23 01:51:26.879085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.917 qpair failed and we were unable to recover it.
00:30:13.917 [2024-07-23 01:51:26.879244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.917 [2024-07-23 01:51:26.879379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.917 [2024-07-23 01:51:26.879404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.917 qpair failed and we were unable to recover it.
00:30:13.917 [2024-07-23 01:51:26.879560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.917 [2024-07-23 01:51:26.879707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.917 [2024-07-23 01:51:26.879733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.917 qpair failed and we were unable to recover it.
00:30:13.917 [2024-07-23 01:51:26.879873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.917 [2024-07-23 01:51:26.880014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.917 [2024-07-23 01:51:26.880039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.917 qpair failed and we were unable to recover it.
00:30:13.917 [2024-07-23 01:51:26.880166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.917 [2024-07-23 01:51:26.880311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.917 [2024-07-23 01:51:26.880337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.917 qpair failed and we were unable to recover it.
00:30:13.917 [2024-07-23 01:51:26.880523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.917 [2024-07-23 01:51:26.880669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.917 [2024-07-23 01:51:26.880694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.917 qpair failed and we were unable to recover it.
00:30:13.917 [2024-07-23 01:51:26.880863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.917 [2024-07-23 01:51:26.880988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.917 [2024-07-23 01:51:26.881013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.917 qpair failed and we were unable to recover it.
00:30:13.918 [2024-07-23 01:51:26.881150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.918 [2024-07-23 01:51:26.881281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.918 [2024-07-23 01:51:26.881306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.918 qpair failed and we were unable to recover it.
00:30:13.918 [2024-07-23 01:51:26.881476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.918 [2024-07-23 01:51:26.881655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.918 [2024-07-23 01:51:26.881680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.918 qpair failed and we were unable to recover it.
00:30:13.918 [2024-07-23 01:51:26.881821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.918 [2024-07-23 01:51:26.881987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.918 [2024-07-23 01:51:26.882011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.918 qpair failed and we were unable to recover it.
00:30:13.918 [2024-07-23 01:51:26.882140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.918 [2024-07-23 01:51:26.882325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.918 [2024-07-23 01:51:26.882350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.918 qpair failed and we were unable to recover it.
00:30:13.918 [2024-07-23 01:51:26.882495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.918 [2024-07-23 01:51:26.882658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.918 [2024-07-23 01:51:26.882684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.918 qpair failed and we were unable to recover it.
00:30:13.918 [2024-07-23 01:51:26.882817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.918 [2024-07-23 01:51:26.882966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.918 [2024-07-23 01:51:26.882991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.918 qpair failed and we were unable to recover it.
00:30:13.918 [2024-07-23 01:51:26.883152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.918 [2024-07-23 01:51:26.883327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.918 [2024-07-23 01:51:26.883352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.918 qpair failed and we were unable to recover it.
00:30:13.918 [2024-07-23 01:51:26.883487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.918 [2024-07-23 01:51:26.883626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.918 [2024-07-23 01:51:26.883652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.918 qpair failed and we were unable to recover it.
00:30:13.918 [2024-07-23 01:51:26.883840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.918 [2024-07-23 01:51:26.884013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.918 [2024-07-23 01:51:26.884038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.918 qpair failed and we were unable to recover it.
00:30:13.918 [2024-07-23 01:51:26.884184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.918 [2024-07-23 01:51:26.884342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.918 [2024-07-23 01:51:26.884367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.918 qpair failed and we were unable to recover it.
00:30:13.918 [2024-07-23 01:51:26.884528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.918 [2024-07-23 01:51:26.884679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.918 [2024-07-23 01:51:26.884705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.918 qpair failed and we were unable to recover it.
00:30:13.918 [2024-07-23 01:51:26.884885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.918 [2024-07-23 01:51:26.885127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.918 [2024-07-23 01:51:26.885152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.918 qpair failed and we were unable to recover it.
00:30:13.918 [2024-07-23 01:51:26.885277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.918 [2024-07-23 01:51:26.885410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.918 [2024-07-23 01:51:26.885435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.918 qpair failed and we were unable to recover it.
00:30:13.918 [2024-07-23 01:51:26.885576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.918 [2024-07-23 01:51:26.885726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.918 [2024-07-23 01:51:26.885753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.918 qpair failed and we were unable to recover it.
00:30:13.918 [2024-07-23 01:51:26.885880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.918 [2024-07-23 01:51:26.886046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.918 [2024-07-23 01:51:26.886070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.918 qpair failed and we were unable to recover it.
00:30:13.918 [2024-07-23 01:51:26.886315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.918 [2024-07-23 01:51:26.886504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.918 [2024-07-23 01:51:26.886529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.918 qpair failed and we were unable to recover it.
00:30:13.918 [2024-07-23 01:51:26.886663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.918 [2024-07-23 01:51:26.886822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.918 [2024-07-23 01:51:26.886847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.918 qpair failed and we were unable to recover it.
00:30:13.918 [2024-07-23 01:51:26.886995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.918 [2024-07-23 01:51:26.887153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.918 [2024-07-23 01:51:26.887178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.918 qpair failed and we were unable to recover it.
00:30:13.918 [2024-07-23 01:51:26.887309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.918 [2024-07-23 01:51:26.887464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.918 [2024-07-23 01:51:26.887489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.918 qpair failed and we were unable to recover it.
00:30:13.918 [2024-07-23 01:51:26.887649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.918 [2024-07-23 01:51:26.887775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.918 [2024-07-23 01:51:26.887800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.918 qpair failed and we were unable to recover it.
00:30:13.918 [2024-07-23 01:51:26.887969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.918 [2024-07-23 01:51:26.888146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.918 [2024-07-23 01:51:26.888177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.918 qpair failed and we were unable to recover it.
00:30:13.918 [2024-07-23 01:51:26.888340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.918 [2024-07-23 01:51:26.888469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.918 [2024-07-23 01:51:26.888495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.918 qpair failed and we were unable to recover it.
00:30:13.918 [2024-07-23 01:51:26.888623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.918 [2024-07-23 01:51:26.888787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.918 [2024-07-23 01:51:26.888812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.918 qpair failed and we were unable to recover it.
00:30:13.918 [2024-07-23 01:51:26.888965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.918 [2024-07-23 01:51:26.889102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.918 [2024-07-23 01:51:26.889127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.918 qpair failed and we were unable to recover it.
00:30:13.918 [2024-07-23 01:51:26.889273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.918 [2024-07-23 01:51:26.889425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.918 [2024-07-23 01:51:26.889449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.918 qpair failed and we were unable to recover it.
00:30:13.918 [2024-07-23 01:51:26.889606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.918 [2024-07-23 01:51:26.889769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.918 [2024-07-23 01:51:26.889794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.918 qpair failed and we were unable to recover it.
00:30:13.918 [2024-07-23 01:51:26.889954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.918 [2024-07-23 01:51:26.890098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.918 [2024-07-23 01:51:26.890124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.918 qpair failed and we were unable to recover it.
00:30:13.918 [2024-07-23 01:51:26.890299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.918 [2024-07-23 01:51:26.890460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.918 [2024-07-23 01:51:26.890485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.918 qpair failed and we were unable to recover it.
00:30:13.919 [2024-07-23 01:51:26.890663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.919 [2024-07-23 01:51:26.890857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.919 [2024-07-23 01:51:26.890882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.919 qpair failed and we were unable to recover it.
00:30:13.919 [2024-07-23 01:51:26.891027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.919 [2024-07-23 01:51:26.891154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.919 [2024-07-23 01:51:26.891179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.919 qpair failed and we were unable to recover it.
00:30:13.919 [2024-07-23 01:51:26.891342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.919 [2024-07-23 01:51:26.891471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.919 [2024-07-23 01:51:26.891500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.919 qpair failed and we were unable to recover it.
00:30:13.919 [2024-07-23 01:51:26.891651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.919 [2024-07-23 01:51:26.891799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.919 [2024-07-23 01:51:26.891825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.919 qpair failed and we were unable to recover it.
00:30:13.919 [2024-07-23 01:51:26.891990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.919 [2024-07-23 01:51:26.892123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.919 [2024-07-23 01:51:26.892150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.919 qpair failed and we were unable to recover it.
00:30:13.919 [2024-07-23 01:51:26.892301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.919 [2024-07-23 01:51:26.892458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.919 [2024-07-23 01:51:26.892483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.919 qpair failed and we were unable to recover it.
00:30:13.919 [2024-07-23 01:51:26.892619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.919 [2024-07-23 01:51:26.892754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.919 [2024-07-23 01:51:26.892779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.919 qpair failed and we were unable to recover it.
00:30:13.919 [2024-07-23 01:51:26.892970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.919 [2024-07-23 01:51:26.893108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.919 [2024-07-23 01:51:26.893135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.919 qpair failed and we were unable to recover it.
00:30:13.919 [2024-07-23 01:51:26.893273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.919 [2024-07-23 01:51:26.893406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.919 [2024-07-23 01:51:26.893431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.919 qpair failed and we were unable to recover it.
00:30:13.919 [2024-07-23 01:51:26.893577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.919 [2024-07-23 01:51:26.893727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.919 [2024-07-23 01:51:26.893753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.919 qpair failed and we were unable to recover it.
00:30:13.919 [2024-07-23 01:51:26.893910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.919 [2024-07-23 01:51:26.894070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.919 [2024-07-23 01:51:26.894094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.919 qpair failed and we were unable to recover it.
00:30:13.919 [2024-07-23 01:51:26.894256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.919 [2024-07-23 01:51:26.894418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.919 [2024-07-23 01:51:26.894443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.919 qpair failed and we were unable to recover it.
00:30:13.919 [2024-07-23 01:51:26.894609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.919 [2024-07-23 01:51:26.894758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.919 [2024-07-23 01:51:26.894784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.919 qpair failed and we were unable to recover it.
00:30:13.919 [2024-07-23 01:51:26.894950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.919 [2024-07-23 01:51:26.895113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.919 [2024-07-23 01:51:26.895138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.919 qpair failed and we were unable to recover it.
00:30:13.919 [2024-07-23 01:51:26.895271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.919 [2024-07-23 01:51:26.895429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.919 [2024-07-23 01:51:26.895455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.919 qpair failed and we were unable to recover it.
00:30:13.919 [2024-07-23 01:51:26.895601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.919 [2024-07-23 01:51:26.895771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.919 [2024-07-23 01:51:26.895797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.919 qpair failed and we were unable to recover it.
00:30:13.919 [2024-07-23 01:51:26.895958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.919 [2024-07-23 01:51:26.896115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.919 [2024-07-23 01:51:26.896140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.919 qpair failed and we were unable to recover it.
00:30:13.919 [2024-07-23 01:51:26.896300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.919 [2024-07-23 01:51:26.896438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.919 [2024-07-23 01:51:26.896464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.919 qpair failed and we were unable to recover it.
00:30:13.919 [2024-07-23 01:51:26.896643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.919 [2024-07-23 01:51:26.896812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.919 [2024-07-23 01:51:26.896838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.919 qpair failed and we were unable to recover it.
00:30:13.919 [2024-07-23 01:51:26.896986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.919 [2024-07-23 01:51:26.897143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.919 [2024-07-23 01:51:26.897168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.919 qpair failed and we were unable to recover it. 00:30:13.919 [2024-07-23 01:51:26.897332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.919 [2024-07-23 01:51:26.897467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.919 [2024-07-23 01:51:26.897492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.919 qpair failed and we were unable to recover it. 00:30:13.919 [2024-07-23 01:51:26.897627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.919 [2024-07-23 01:51:26.897777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.919 [2024-07-23 01:51:26.897802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.919 qpair failed and we were unable to recover it. 00:30:13.919 [2024-07-23 01:51:26.897937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.919 [2024-07-23 01:51:26.898063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.919 [2024-07-23 01:51:26.898088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.919 qpair failed and we were unable to recover it. 
00:30:13.919 [2024-07-23 01:51:26.898253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.919 [2024-07-23 01:51:26.898376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.919 [2024-07-23 01:51:26.898402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.919 qpair failed and we were unable to recover it. 00:30:13.919 [2024-07-23 01:51:26.898531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.919 [2024-07-23 01:51:26.898696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.919 [2024-07-23 01:51:26.898723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.919 qpair failed and we were unable to recover it. 00:30:13.919 [2024-07-23 01:51:26.898902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.919 [2024-07-23 01:51:26.899077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.919 [2024-07-23 01:51:26.899102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.919 qpair failed and we were unable to recover it. 00:30:13.919 [2024-07-23 01:51:26.899289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.919 [2024-07-23 01:51:26.899446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.919 [2024-07-23 01:51:26.899471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.919 qpair failed and we were unable to recover it. 
00:30:13.919 [2024-07-23 01:51:26.899637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.919 [2024-07-23 01:51:26.899767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.920 [2024-07-23 01:51:26.899792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.920 qpair failed and we were unable to recover it. 00:30:13.920 [2024-07-23 01:51:26.899925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.920 [2024-07-23 01:51:26.900082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.920 [2024-07-23 01:51:26.900108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.920 qpair failed and we were unable to recover it. 00:30:13.920 [2024-07-23 01:51:26.900236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.920 [2024-07-23 01:51:26.900389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.920 [2024-07-23 01:51:26.900414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.920 qpair failed and we were unable to recover it. 00:30:13.920 [2024-07-23 01:51:26.900594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.920 [2024-07-23 01:51:26.900732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.920 [2024-07-23 01:51:26.900759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.920 qpair failed and we were unable to recover it. 
00:30:13.920 [2024-07-23 01:51:26.900893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.920 [2024-07-23 01:51:26.901092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.920 [2024-07-23 01:51:26.901117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.920 qpair failed and we were unable to recover it. 00:30:13.920 [2024-07-23 01:51:26.901295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.920 [2024-07-23 01:51:26.901430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.920 [2024-07-23 01:51:26.901455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.920 qpair failed and we were unable to recover it. 00:30:13.920 [2024-07-23 01:51:26.901602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.920 [2024-07-23 01:51:26.901745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.920 [2024-07-23 01:51:26.901771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.920 qpair failed and we were unable to recover it. 00:30:13.920 [2024-07-23 01:51:26.901927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.920 [2024-07-23 01:51:26.902062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.920 [2024-07-23 01:51:26.902087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.920 qpair failed and we were unable to recover it. 
00:30:13.920 [2024-07-23 01:51:26.902244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.920 [2024-07-23 01:51:26.902420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.920 [2024-07-23 01:51:26.902445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.920 qpair failed and we were unable to recover it. 00:30:13.920 [2024-07-23 01:51:26.902636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.920 [2024-07-23 01:51:26.902791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.920 [2024-07-23 01:51:26.902817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.920 qpair failed and we were unable to recover it. 00:30:13.920 [2024-07-23 01:51:26.902957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.920 [2024-07-23 01:51:26.903112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.920 [2024-07-23 01:51:26.903136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.920 qpair failed and we were unable to recover it. 00:30:13.920 [2024-07-23 01:51:26.903274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.920 [2024-07-23 01:51:26.903418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.920 [2024-07-23 01:51:26.903443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.920 qpair failed and we were unable to recover it. 
00:30:13.920 [2024-07-23 01:51:26.903607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.920 [2024-07-23 01:51:26.903786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.920 [2024-07-23 01:51:26.903812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.920 qpair failed and we were unable to recover it. 00:30:13.920 [2024-07-23 01:51:26.903970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.920 [2024-07-23 01:51:26.904121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.920 [2024-07-23 01:51:26.904146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.920 qpair failed and we were unable to recover it. 00:30:13.920 [2024-07-23 01:51:26.904281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.920 [2024-07-23 01:51:26.904418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.920 [2024-07-23 01:51:26.904442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.920 qpair failed and we were unable to recover it. 00:30:13.920 [2024-07-23 01:51:26.904571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.920 [2024-07-23 01:51:26.904736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.920 [2024-07-23 01:51:26.904762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.920 qpair failed and we were unable to recover it. 
00:30:13.920 [2024-07-23 01:51:26.904931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.920 [2024-07-23 01:51:26.905068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.920 [2024-07-23 01:51:26.905099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.920 qpair failed and we were unable to recover it. 00:30:13.920 [2024-07-23 01:51:26.905279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.920 [2024-07-23 01:51:26.905412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.920 [2024-07-23 01:51:26.905437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.920 qpair failed and we were unable to recover it. 00:30:13.920 [2024-07-23 01:51:26.905580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.920 [2024-07-23 01:51:26.905756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.920 [2024-07-23 01:51:26.905782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.920 qpair failed and we were unable to recover it. 00:30:13.920 [2024-07-23 01:51:26.905935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.920 [2024-07-23 01:51:26.906066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.920 [2024-07-23 01:51:26.906092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.920 qpair failed and we were unable to recover it. 
00:30:13.920 [2024-07-23 01:51:26.906252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.920 [2024-07-23 01:51:26.906439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.920 [2024-07-23 01:51:26.906465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.920 qpair failed and we were unable to recover it. 00:30:13.920 [2024-07-23 01:51:26.906599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.920 [2024-07-23 01:51:26.906745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.920 [2024-07-23 01:51:26.906771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.920 qpair failed and we were unable to recover it. 00:30:13.920 [2024-07-23 01:51:26.906938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.920 [2024-07-23 01:51:26.907115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.920 [2024-07-23 01:51:26.907140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.920 qpair failed and we were unable to recover it. 00:30:13.920 [2024-07-23 01:51:26.907301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.920 [2024-07-23 01:51:26.907464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.920 [2024-07-23 01:51:26.907491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.920 qpair failed and we were unable to recover it. 
00:30:13.920 [2024-07-23 01:51:26.907627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.920 [2024-07-23 01:51:26.907785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.921 [2024-07-23 01:51:26.907810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.921 qpair failed and we were unable to recover it. 00:30:13.921 [2024-07-23 01:51:26.907989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.921 [2024-07-23 01:51:26.908126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.921 [2024-07-23 01:51:26.908151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.921 qpair failed and we were unable to recover it. 00:30:13.921 [2024-07-23 01:51:26.908280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.921 [2024-07-23 01:51:26.908409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.921 [2024-07-23 01:51:26.908434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.921 qpair failed and we were unable to recover it. 00:30:13.921 [2024-07-23 01:51:26.908609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.921 [2024-07-23 01:51:26.908758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.921 [2024-07-23 01:51:26.908783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.921 qpair failed and we were unable to recover it. 
00:30:13.921 [2024-07-23 01:51:26.908953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.921 [2024-07-23 01:51:26.909147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.921 [2024-07-23 01:51:26.909172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.921 qpair failed and we were unable to recover it. 00:30:13.921 [2024-07-23 01:51:26.909318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.921 [2024-07-23 01:51:26.909494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.921 [2024-07-23 01:51:26.909519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.921 qpair failed and we were unable to recover it. 00:30:13.921 [2024-07-23 01:51:26.909650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.921 [2024-07-23 01:51:26.909812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.921 [2024-07-23 01:51:26.909837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.921 qpair failed and we were unable to recover it. 00:30:13.921 [2024-07-23 01:51:26.910019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.921 [2024-07-23 01:51:26.910177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.921 [2024-07-23 01:51:26.910202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.921 qpair failed and we were unable to recover it. 
00:30:13.921 [2024-07-23 01:51:26.910342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.921 [2024-07-23 01:51:26.910519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.921 [2024-07-23 01:51:26.910544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.921 qpair failed and we were unable to recover it. 00:30:13.921 [2024-07-23 01:51:26.910691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.921 [2024-07-23 01:51:26.910831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.921 [2024-07-23 01:51:26.910858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.921 qpair failed and we were unable to recover it. 00:30:13.921 [2024-07-23 01:51:26.911016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.921 [2024-07-23 01:51:26.911175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.921 [2024-07-23 01:51:26.911200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.921 qpair failed and we were unable to recover it. 00:30:13.921 [2024-07-23 01:51:26.911349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.921 [2024-07-23 01:51:26.911506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.921 [2024-07-23 01:51:26.911532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.921 qpair failed and we were unable to recover it. 
00:30:13.921 [2024-07-23 01:51:26.911668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.921 [2024-07-23 01:51:26.911828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.921 [2024-07-23 01:51:26.911854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.921 qpair failed and we were unable to recover it. 00:30:13.921 [2024-07-23 01:51:26.911992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.921 [2024-07-23 01:51:26.912153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.921 [2024-07-23 01:51:26.912178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.921 qpair failed and we were unable to recover it. 00:30:13.921 [2024-07-23 01:51:26.912311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.921 [2024-07-23 01:51:26.912485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.921 [2024-07-23 01:51:26.912510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.921 qpair failed and we were unable to recover it. 00:30:13.921 [2024-07-23 01:51:26.912684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.921 [2024-07-23 01:51:26.912829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.921 [2024-07-23 01:51:26.912854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.921 qpair failed and we were unable to recover it. 
00:30:13.921 [2024-07-23 01:51:26.913033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.921 [2024-07-23 01:51:26.913164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.921 [2024-07-23 01:51:26.913189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.921 qpair failed and we were unable to recover it. 00:30:13.921 [2024-07-23 01:51:26.913360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.921 [2024-07-23 01:51:26.913500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.921 [2024-07-23 01:51:26.913527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.921 qpair failed and we were unable to recover it. 00:30:13.921 [2024-07-23 01:51:26.913688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.921 [2024-07-23 01:51:26.913866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.921 [2024-07-23 01:51:26.913892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.921 qpair failed and we were unable to recover it. 00:30:13.921 [2024-07-23 01:51:26.914018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.921 [2024-07-23 01:51:26.914179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.921 [2024-07-23 01:51:26.914204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.921 qpair failed and we were unable to recover it. 
00:30:13.921 [2024-07-23 01:51:26.914333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.921 [2024-07-23 01:51:26.914508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.921 [2024-07-23 01:51:26.914533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.921 qpair failed and we were unable to recover it. 00:30:13.921 [2024-07-23 01:51:26.914725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.921 [2024-07-23 01:51:26.914854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.921 [2024-07-23 01:51:26.914879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.921 qpair failed and we were unable to recover it. 00:30:13.921 [2024-07-23 01:51:26.915042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.921 [2024-07-23 01:51:26.915173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.921 [2024-07-23 01:51:26.915200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.921 qpair failed and we were unable to recover it. 00:30:13.921 [2024-07-23 01:51:26.915374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.921 [2024-07-23 01:51:26.915519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.921 [2024-07-23 01:51:26.915544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.921 qpair failed and we were unable to recover it. 
00:30:13.921 [2024-07-23 01:51:26.915685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.921 [2024-07-23 01:51:26.915859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.921 [2024-07-23 01:51:26.915884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.921 qpair failed and we were unable to recover it. 00:30:13.921 [2024-07-23 01:51:26.916044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.921 [2024-07-23 01:51:26.916203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.921 [2024-07-23 01:51:26.916228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.921 qpair failed and we were unable to recover it. 00:30:13.921 [2024-07-23 01:51:26.916394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.921 [2024-07-23 01:51:26.916525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.921 [2024-07-23 01:51:26.916549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.921 qpair failed and we were unable to recover it. 00:30:13.921 [2024-07-23 01:51:26.916716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.921 [2024-07-23 01:51:26.916860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.921 [2024-07-23 01:51:26.916886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.921 qpair failed and we were unable to recover it. 
00:30:13.921 [2024-07-23 01:51:26.917054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.922 [2024-07-23 01:51:26.917198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.922 [2024-07-23 01:51:26.917224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.922 qpair failed and we were unable to recover it. 00:30:13.922 [2024-07-23 01:51:26.917369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.922 [2024-07-23 01:51:26.917526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.922 [2024-07-23 01:51:26.917551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.922 qpair failed and we were unable to recover it. 00:30:13.922 [2024-07-23 01:51:26.917721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.922 [2024-07-23 01:51:26.917912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.922 [2024-07-23 01:51:26.917937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.922 qpair failed and we were unable to recover it. 00:30:13.922 [2024-07-23 01:51:26.918105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.922 [2024-07-23 01:51:26.918269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.922 [2024-07-23 01:51:26.918294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.922 qpair failed and we were unable to recover it. 
00:30:13.922 [2024-07-23 01:51:26.918453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.922 [2024-07-23 01:51:26.918599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.922 [2024-07-23 01:51:26.918630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.922 qpair failed and we were unable to recover it.
00:30:13.922 [2024-07-23 01:51:26.918811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.922 [2024-07-23 01:51:26.918940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.922 [2024-07-23 01:51:26.918965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.922 qpair failed and we were unable to recover it.
00:30:13.922 [2024-07-23 01:51:26.919157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.922 [2024-07-23 01:51:26.919322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.922 [2024-07-23 01:51:26.919347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.922 qpair failed and we were unable to recover it.
00:30:13.922 [2024-07-23 01:51:26.919489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.922 [2024-07-23 01:51:26.919630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.922 [2024-07-23 01:51:26.919655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.922 qpair failed and we were unable to recover it.
00:30:13.922 [2024-07-23 01:51:26.919789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.922 [2024-07-23 01:51:26.919923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.922 [2024-07-23 01:51:26.919948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.922 qpair failed and we were unable to recover it.
00:30:13.922 [2024-07-23 01:51:26.920081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.922 [2024-07-23 01:51:26.920228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.922 [2024-07-23 01:51:26.920253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.922 qpair failed and we were unable to recover it.
00:30:13.922 [2024-07-23 01:51:26.920418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.922 [2024-07-23 01:51:26.920544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.922 [2024-07-23 01:51:26.920570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.922 qpair failed and we were unable to recover it.
00:30:13.922 [2024-07-23 01:51:26.920746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.922 [2024-07-23 01:51:26.920886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.922 [2024-07-23 01:51:26.920912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.922 qpair failed and we were unable to recover it.
00:30:13.922 [2024-07-23 01:51:26.921080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.922 [2024-07-23 01:51:26.921215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.922 [2024-07-23 01:51:26.921242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.922 qpair failed and we were unable to recover it.
00:30:13.922 [2024-07-23 01:51:26.921376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.922 [2024-07-23 01:51:26.921548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.922 [2024-07-23 01:51:26.921573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.922 qpair failed and we were unable to recover it.
00:30:13.922 [2024-07-23 01:51:26.921736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.922 [2024-07-23 01:51:26.921864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.922 [2024-07-23 01:51:26.921889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.922 qpair failed and we were unable to recover it.
00:30:13.922 [2024-07-23 01:51:26.922048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.922 [2024-07-23 01:51:26.922208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.922 [2024-07-23 01:51:26.922238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.922 qpair failed and we were unable to recover it.
00:30:13.922 [2024-07-23 01:51:26.922401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.922 [2024-07-23 01:51:26.922539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.922 [2024-07-23 01:51:26.922564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.922 qpair failed and we were unable to recover it.
00:30:13.922 [2024-07-23 01:51:26.922713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.922 [2024-07-23 01:51:26.922884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.922 [2024-07-23 01:51:26.922921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.922 qpair failed and we were unable to recover it.
00:30:13.922 [2024-07-23 01:51:26.923085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.922 [2024-07-23 01:51:26.923228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.922 [2024-07-23 01:51:26.923253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.922 qpair failed and we were unable to recover it.
00:30:13.922 [2024-07-23 01:51:26.923426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.922 [2024-07-23 01:51:26.923559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.922 [2024-07-23 01:51:26.923584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.922 qpair failed and we were unable to recover it.
00:30:13.922 [2024-07-23 01:51:26.923752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.922 [2024-07-23 01:51:26.923891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.922 [2024-07-23 01:51:26.923916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.922 qpair failed and we were unable to recover it.
00:30:13.922 [2024-07-23 01:51:26.924083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.922 [2024-07-23 01:51:26.924237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.922 [2024-07-23 01:51:26.924263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.922 qpair failed and we were unable to recover it.
00:30:13.922 [2024-07-23 01:51:26.924422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.922 [2024-07-23 01:51:26.924559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.922 [2024-07-23 01:51:26.924584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.922 qpair failed and we were unable to recover it.
00:30:13.922 [2024-07-23 01:51:26.924725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.922 [2024-07-23 01:51:26.924870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.922 [2024-07-23 01:51:26.924895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.922 qpair failed and we were unable to recover it.
00:30:13.922 [2024-07-23 01:51:26.925046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.922 [2024-07-23 01:51:26.925182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.922 [2024-07-23 01:51:26.925209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.922 qpair failed and we were unable to recover it.
00:30:13.922 [2024-07-23 01:51:26.925372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.922 [2024-07-23 01:51:26.925529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.922 [2024-07-23 01:51:26.925554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.922 qpair failed and we were unable to recover it.
00:30:13.922 [2024-07-23 01:51:26.925723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.922 [2024-07-23 01:51:26.925855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.922 [2024-07-23 01:51:26.925880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.922 qpair failed and we were unable to recover it.
00:30:13.922 [2024-07-23 01:51:26.926022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.922 [2024-07-23 01:51:26.926148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.922 [2024-07-23 01:51:26.926174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.922 qpair failed and we were unable to recover it.
00:30:13.923 [2024-07-23 01:51:26.926338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.923 [2024-07-23 01:51:26.926514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.923 [2024-07-23 01:51:26.926539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.923 qpair failed and we were unable to recover it.
00:30:13.923 [2024-07-23 01:51:26.926706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.923 [2024-07-23 01:51:26.926843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.923 [2024-07-23 01:51:26.926868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.923 qpair failed and we were unable to recover it.
00:30:13.923 [2024-07-23 01:51:26.927042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.923 [2024-07-23 01:51:26.927236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.923 [2024-07-23 01:51:26.927261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.923 qpair failed and we were unable to recover it.
00:30:13.923 [2024-07-23 01:51:26.927402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.923 [2024-07-23 01:51:26.927528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.923 [2024-07-23 01:51:26.927553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.923 qpair failed and we were unable to recover it.
00:30:13.923 [2024-07-23 01:51:26.927690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.923 [2024-07-23 01:51:26.927856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.923 [2024-07-23 01:51:26.927882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.923 qpair failed and we were unable to recover it.
00:30:13.923 [2024-07-23 01:51:26.928040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.923 [2024-07-23 01:51:26.928200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.923 [2024-07-23 01:51:26.928225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.923 qpair failed and we were unable to recover it.
00:30:13.923 [2024-07-23 01:51:26.928389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.923 [2024-07-23 01:51:26.928566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.923 [2024-07-23 01:51:26.928592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.923 qpair failed and we were unable to recover it.
00:30:13.923 [2024-07-23 01:51:26.928738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.923 [2024-07-23 01:51:26.928884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.923 [2024-07-23 01:51:26.928909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.923 qpair failed and we were unable to recover it.
00:30:13.923 [2024-07-23 01:51:26.929042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.923 [2024-07-23 01:51:26.929204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.923 [2024-07-23 01:51:26.929230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.923 qpair failed and we were unable to recover it.
00:30:13.923 [2024-07-23 01:51:26.929392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.923 [2024-07-23 01:51:26.929532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.923 [2024-07-23 01:51:26.929559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.923 qpair failed and we were unable to recover it.
00:30:13.923 [2024-07-23 01:51:26.929727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.923 [2024-07-23 01:51:26.929867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.923 [2024-07-23 01:51:26.929894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.923 qpair failed and we were unable to recover it.
00:30:13.923 [2024-07-23 01:51:26.930064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.923 [2024-07-23 01:51:26.930226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.923 [2024-07-23 01:51:26.930251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.923 qpair failed and we were unable to recover it.
00:30:13.923 [2024-07-23 01:51:26.930401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.923 [2024-07-23 01:51:26.930568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.923 [2024-07-23 01:51:26.930593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.923 qpair failed and we were unable to recover it.
00:30:13.923 [2024-07-23 01:51:26.930733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.923 [2024-07-23 01:51:26.930891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.923 [2024-07-23 01:51:26.930916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.923 qpair failed and we were unable to recover it.
00:30:13.923 [2024-07-23 01:51:26.931077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.923 [2024-07-23 01:51:26.931266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.923 [2024-07-23 01:51:26.931291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.923 qpair failed and we were unable to recover it.
00:30:13.923 [2024-07-23 01:51:26.931447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.923 [2024-07-23 01:51:26.931593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.923 [2024-07-23 01:51:26.931624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.923 qpair failed and we were unable to recover it.
00:30:13.923 [2024-07-23 01:51:26.931802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.923 [2024-07-23 01:51:26.931947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.923 [2024-07-23 01:51:26.931973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.923 qpair failed and we were unable to recover it.
00:30:13.923 [2024-07-23 01:51:26.932134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.923 [2024-07-23 01:51:26.932300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.923 [2024-07-23 01:51:26.932325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.923 qpair failed and we were unable to recover it.
00:30:13.923 [2024-07-23 01:51:26.932482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.923 [2024-07-23 01:51:26.932638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.923 [2024-07-23 01:51:26.932664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.923 qpair failed and we were unable to recover it.
00:30:13.923 [2024-07-23 01:51:26.932795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.923 [2024-07-23 01:51:26.932934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.923 [2024-07-23 01:51:26.932959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.923 qpair failed and we were unable to recover it.
00:30:13.923 [2024-07-23 01:51:26.933119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.923 [2024-07-23 01:51:26.933294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.923 [2024-07-23 01:51:26.933319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.923 qpair failed and we were unable to recover it.
00:30:13.923 [2024-07-23 01:51:26.933504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.923 [2024-07-23 01:51:26.933640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.923 [2024-07-23 01:51:26.933666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.923 qpair failed and we were unable to recover it.
00:30:13.923 [2024-07-23 01:51:26.933836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.923 [2024-07-23 01:51:26.933997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.923 [2024-07-23 01:51:26.934023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.923 qpair failed and we were unable to recover it.
00:30:13.923 [2024-07-23 01:51:26.934157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.923 [2024-07-23 01:51:26.934309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.923 [2024-07-23 01:51:26.934334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.923 qpair failed and we were unable to recover it.
00:30:13.923 [2024-07-23 01:51:26.934474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.923 [2024-07-23 01:51:26.934664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.923 [2024-07-23 01:51:26.934690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.923 qpair failed and we were unable to recover it.
00:30:13.923 [2024-07-23 01:51:26.934853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.923 [2024-07-23 01:51:26.934983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.923 [2024-07-23 01:51:26.935008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.923 qpair failed and we were unable to recover it.
00:30:13.923 [2024-07-23 01:51:26.935158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.923 [2024-07-23 01:51:26.935287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.923 [2024-07-23 01:51:26.935313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.923 qpair failed and we were unable to recover it.
00:30:13.923 [2024-07-23 01:51:26.935462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.923 [2024-07-23 01:51:26.935624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.923 [2024-07-23 01:51:26.935649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.924 qpair failed and we were unable to recover it.
00:30:13.924 [2024-07-23 01:51:26.935812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.924 [2024-07-23 01:51:26.935950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.924 [2024-07-23 01:51:26.935979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.924 qpair failed and we were unable to recover it.
00:30:13.924 [2024-07-23 01:51:26.936137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.924 [2024-07-23 01:51:26.936265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.924 [2024-07-23 01:51:26.936290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.924 qpair failed and we were unable to recover it.
00:30:13.924 [2024-07-23 01:51:26.936447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.924 [2024-07-23 01:51:26.936620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.924 [2024-07-23 01:51:26.936646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.924 qpair failed and we were unable to recover it.
00:30:13.924 [2024-07-23 01:51:26.936822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.924 [2024-07-23 01:51:26.936960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.924 [2024-07-23 01:51:26.936987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.924 qpair failed and we were unable to recover it.
00:30:13.924 [2024-07-23 01:51:26.937155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.924 [2024-07-23 01:51:26.937315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.924 [2024-07-23 01:51:26.937340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.924 qpair failed and we were unable to recover it.
00:30:13.924 [2024-07-23 01:51:26.937533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.924 [2024-07-23 01:51:26.937671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.924 [2024-07-23 01:51:26.937697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.924 qpair failed and we were unable to recover it.
00:30:13.924 [2024-07-23 01:51:26.937865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.924 [2024-07-23 01:51:26.937993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.924 [2024-07-23 01:51:26.938018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.924 qpair failed and we were unable to recover it.
00:30:13.924 [2024-07-23 01:51:26.938187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.924 [2024-07-23 01:51:26.938330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.924 [2024-07-23 01:51:26.938355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.924 qpair failed and we were unable to recover it.
00:30:13.924 [2024-07-23 01:51:26.938506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.924 [2024-07-23 01:51:26.938663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.924 [2024-07-23 01:51:26.938689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.924 qpair failed and we were unable to recover it.
00:30:13.924 [2024-07-23 01:51:26.938861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.924 [2024-07-23 01:51:26.938997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.924 [2024-07-23 01:51:26.939022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.924 qpair failed and we were unable to recover it.
00:30:13.924 [2024-07-23 01:51:26.939153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.924 [2024-07-23 01:51:26.939312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.924 [2024-07-23 01:51:26.939343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.924 qpair failed and we were unable to recover it.
00:30:13.924 [2024-07-23 01:51:26.939490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.924 [2024-07-23 01:51:26.939662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.924 [2024-07-23 01:51:26.939688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.924 qpair failed and we were unable to recover it.
00:30:13.924 [2024-07-23 01:51:26.939824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.924 [2024-07-23 01:51:26.939966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.924 [2024-07-23 01:51:26.939991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.924 qpair failed and we were unable to recover it.
00:30:13.924 [2024-07-23 01:51:26.940155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.924 [2024-07-23 01:51:26.940284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.924 [2024-07-23 01:51:26.940310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.924 qpair failed and we were unable to recover it.
00:30:13.924 [2024-07-23 01:51:26.940439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.924 [2024-07-23 01:51:26.940564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.924 [2024-07-23 01:51:26.940589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.924 qpair failed and we were unable to recover it.
00:30:13.924 [2024-07-23 01:51:26.940725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.924 [2024-07-23 01:51:26.940865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.924 [2024-07-23 01:51:26.940891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.924 qpair failed and we were unable to recover it.
00:30:13.924 [2024-07-23 01:51:26.941053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.924 [2024-07-23 01:51:26.941196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.924 [2024-07-23 01:51:26.941221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.924 qpair failed and we were unable to recover it.
00:30:13.924 [2024-07-23 01:51:26.941360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.924 [2024-07-23 01:51:26.941516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.924 [2024-07-23 01:51:26.941541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.924 qpair failed and we were unable to recover it.
00:30:13.924 [2024-07-23 01:51:26.941705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.924 [2024-07-23 01:51:26.941876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.924 [2024-07-23 01:51:26.941902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.924 qpair failed and we were unable to recover it.
00:30:13.924 [2024-07-23 01:51:26.942040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.924 [2024-07-23 01:51:26.942218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.924 [2024-07-23 01:51:26.942242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.924 qpair failed and we were unable to recover it.
00:30:13.924 [2024-07-23 01:51:26.942387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.924 [2024-07-23 01:51:26.942551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.924 [2024-07-23 01:51:26.942576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.924 qpair failed and we were unable to recover it.
00:30:13.924 [2024-07-23 01:51:26.942721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.924 [2024-07-23 01:51:26.942849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.924 [2024-07-23 01:51:26.942873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.924 qpair failed and we were unable to recover it.
00:30:13.924 [2024-07-23 01:51:26.943032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.924 [2024-07-23 01:51:26.943172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.924 [2024-07-23 01:51:26.943196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.924 qpair failed and we were unable to recover it.
00:30:13.924 [2024-07-23 01:51:26.943352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.924 [2024-07-23 01:51:26.943480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.924 [2024-07-23 01:51:26.943505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.924 qpair failed and we were unable to recover it.
00:30:13.924 [2024-07-23 01:51:26.943683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.924 [2024-07-23 01:51:26.943815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.924 [2024-07-23 01:51:26.943841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.924 qpair failed and we were unable to recover it.
00:30:13.924 [2024-07-23 01:51:26.944012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.924 [2024-07-23 01:51:26.944193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.924 [2024-07-23 01:51:26.944218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.924 qpair failed and we were unable to recover it.
00:30:13.924 [2024-07-23 01:51:26.944384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.924 [2024-07-23 01:51:26.944513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.924 [2024-07-23 01:51:26.944538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.924 qpair failed and we were unable to recover it.
00:30:13.924 [2024-07-23 01:51:26.944681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.924 [2024-07-23 01:51:26.944819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.925 [2024-07-23 01:51:26.944843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.925 qpair failed and we were unable to recover it.
00:30:13.925 [2024-07-23 01:51:26.945002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.925 [2024-07-23 01:51:26.945191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.925 [2024-07-23 01:51:26.945215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.925 qpair failed and we were unable to recover it.
00:30:13.925 [2024-07-23 01:51:26.945345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.925 [2024-07-23 01:51:26.945468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.925 [2024-07-23 01:51:26.945493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.925 qpair failed and we were unable to recover it.
00:30:13.925 [2024-07-23 01:51:26.945658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.925 [2024-07-23 01:51:26.945793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.925 [2024-07-23 01:51:26.945819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.925 qpair failed and we were unable to recover it.
00:30:13.925 [2024-07-23 01:51:26.945995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.925 [2024-07-23 01:51:26.946153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.925 [2024-07-23 01:51:26.946178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.925 qpair failed and we were unable to recover it.
00:30:13.925 [2024-07-23 01:51:26.946339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.925 [2024-07-23 01:51:26.946480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.925 [2024-07-23 01:51:26.946505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.925 qpair failed and we were unable to recover it.
00:30:13.925 [2024-07-23 01:51:26.946649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.925 [2024-07-23 01:51:26.946774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.925 [2024-07-23 01:51:26.946799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.925 qpair failed and we were unable to recover it.
00:30:13.925 [2024-07-23 01:51:26.946927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.925 [2024-07-23 01:51:26.947087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.925 [2024-07-23 01:51:26.947112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.925 qpair failed and we were unable to recover it.
00:30:13.925 [2024-07-23 01:51:26.947292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.925 [2024-07-23 01:51:26.947440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:13.925 [2024-07-23 01:51:26.947465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420
00:30:13.925 qpair failed and we were unable to recover it.
00:30:13.925 [2024-07-23 01:51:26.947660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.925 [2024-07-23 01:51:26.947788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.925 [2024-07-23 01:51:26.947812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.925 qpair failed and we were unable to recover it. 00:30:13.925 [2024-07-23 01:51:26.947952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.925 [2024-07-23 01:51:26.948082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.925 [2024-07-23 01:51:26.948106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.925 qpair failed and we were unable to recover it. 00:30:13.925 [2024-07-23 01:51:26.948272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.925 [2024-07-23 01:51:26.948399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.925 [2024-07-23 01:51:26.948423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.925 qpair failed and we were unable to recover it. 00:30:13.925 [2024-07-23 01:51:26.948553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.925 [2024-07-23 01:51:26.948727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.925 [2024-07-23 01:51:26.948752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.925 qpair failed and we were unable to recover it. 
00:30:13.925 [2024-07-23 01:51:26.948916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.925 [2024-07-23 01:51:26.949078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.925 [2024-07-23 01:51:26.949103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.925 qpair failed and we were unable to recover it. 00:30:13.925 [2024-07-23 01:51:26.949269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.925 [2024-07-23 01:51:26.949439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.925 [2024-07-23 01:51:26.949464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.925 qpair failed and we were unable to recover it. 00:30:13.925 [2024-07-23 01:51:26.949610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.925 [2024-07-23 01:51:26.949751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.925 [2024-07-23 01:51:26.949776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.925 qpair failed and we were unable to recover it. 00:30:13.925 [2024-07-23 01:51:26.949916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.925 [2024-07-23 01:51:26.950081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.925 [2024-07-23 01:51:26.950106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.925 qpair failed and we were unable to recover it. 
00:30:13.925 [2024-07-23 01:51:26.950287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.925 [2024-07-23 01:51:26.950445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.925 [2024-07-23 01:51:26.950471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.925 qpair failed and we were unable to recover it. 00:30:13.925 [2024-07-23 01:51:26.950619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.925 [2024-07-23 01:51:26.950761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.925 [2024-07-23 01:51:26.950786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.925 qpair failed and we were unable to recover it. 00:30:13.925 [2024-07-23 01:51:26.950945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.925 [2024-07-23 01:51:26.951111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.925 [2024-07-23 01:51:26.951135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.925 qpair failed and we were unable to recover it. 00:30:13.925 [2024-07-23 01:51:26.951265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.925 [2024-07-23 01:51:26.951426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.925 [2024-07-23 01:51:26.951450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.925 qpair failed and we were unable to recover it. 
00:30:13.925 [2024-07-23 01:51:26.951628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.925 [2024-07-23 01:51:26.951785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.925 [2024-07-23 01:51:26.951810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.925 qpair failed and we were unable to recover it. 00:30:13.925 [2024-07-23 01:51:26.951956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.925 [2024-07-23 01:51:26.952085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.925 [2024-07-23 01:51:26.952110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.925 qpair failed and we were unable to recover it. 00:30:13.925 [2024-07-23 01:51:26.952250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.925 [2024-07-23 01:51:26.952380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.925 [2024-07-23 01:51:26.952404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.925 qpair failed and we were unable to recover it. 00:30:13.925 [2024-07-23 01:51:26.952576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.926 [2024-07-23 01:51:26.952731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.926 [2024-07-23 01:51:26.952760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.926 qpair failed and we were unable to recover it. 
00:30:13.926 [2024-07-23 01:51:26.952924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.926 [2024-07-23 01:51:26.953081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.926 [2024-07-23 01:51:26.953105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.926 qpair failed and we were unable to recover it. 00:30:13.926 [2024-07-23 01:51:26.953245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.926 [2024-07-23 01:51:26.953422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.926 [2024-07-23 01:51:26.953447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.926 qpair failed and we were unable to recover it. 00:30:13.926 [2024-07-23 01:51:26.953630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.926 [2024-07-23 01:51:26.953820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.926 [2024-07-23 01:51:26.953845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.926 qpair failed and we were unable to recover it. 00:30:13.926 [2024-07-23 01:51:26.953993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.926 [2024-07-23 01:51:26.954166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.926 [2024-07-23 01:51:26.954191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.926 qpair failed and we were unable to recover it. 
00:30:13.926 [2024-07-23 01:51:26.954329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.926 [2024-07-23 01:51:26.954459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.926 [2024-07-23 01:51:26.954484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.926 qpair failed and we were unable to recover it. 00:30:13.926 [2024-07-23 01:51:26.954626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.926 [2024-07-23 01:51:26.954765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.926 [2024-07-23 01:51:26.954789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.926 qpair failed and we were unable to recover it. 00:30:13.926 [2024-07-23 01:51:26.954921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.926 [2024-07-23 01:51:26.955094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.926 [2024-07-23 01:51:26.955119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.926 qpair failed and we were unable to recover it. 00:30:13.926 [2024-07-23 01:51:26.955250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.926 [2024-07-23 01:51:26.955389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.926 [2024-07-23 01:51:26.955413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.926 qpair failed and we were unable to recover it. 
00:30:13.926 [2024-07-23 01:51:26.955576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.926 [2024-07-23 01:51:26.955747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.926 [2024-07-23 01:51:26.955771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.926 qpair failed and we were unable to recover it. 00:30:13.926 [2024-07-23 01:51:26.955958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.926 [2024-07-23 01:51:26.956124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.926 [2024-07-23 01:51:26.956149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.926 qpair failed and we were unable to recover it. 00:30:13.926 [2024-07-23 01:51:26.956334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.926 [2024-07-23 01:51:26.956464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.926 [2024-07-23 01:51:26.956489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.926 qpair failed and we were unable to recover it. 00:30:13.926 [2024-07-23 01:51:26.956641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.926 [2024-07-23 01:51:26.956808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.926 [2024-07-23 01:51:26.956833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.926 qpair failed and we were unable to recover it. 
00:30:13.926 [2024-07-23 01:51:26.956996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.926 [2024-07-23 01:51:26.957152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.926 [2024-07-23 01:51:26.957176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.926 qpair failed and we were unable to recover it. 00:30:13.926 [2024-07-23 01:51:26.957319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.926 [2024-07-23 01:51:26.957468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.926 [2024-07-23 01:51:26.957494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.926 qpair failed and we were unable to recover it. 00:30:13.926 [2024-07-23 01:51:26.957642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.926 [2024-07-23 01:51:26.957774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.926 [2024-07-23 01:51:26.957799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.926 qpair failed and we were unable to recover it. 00:30:13.926 [2024-07-23 01:51:26.957937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.926 [2024-07-23 01:51:26.958074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.926 [2024-07-23 01:51:26.958098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.926 qpair failed and we were unable to recover it. 
00:30:13.926 [2024-07-23 01:51:26.958261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.926 [2024-07-23 01:51:26.958437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.926 [2024-07-23 01:51:26.958461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.926 qpair failed and we were unable to recover it. 00:30:13.926 [2024-07-23 01:51:26.958650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.926 [2024-07-23 01:51:26.958782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.926 [2024-07-23 01:51:26.958807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.926 qpair failed and we were unable to recover it. 00:30:13.926 [2024-07-23 01:51:26.958942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.926 [2024-07-23 01:51:26.959076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.926 [2024-07-23 01:51:26.959100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.926 qpair failed and we were unable to recover it. 00:30:13.926 [2024-07-23 01:51:26.959249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.926 [2024-07-23 01:51:26.959396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.926 [2024-07-23 01:51:26.959422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.926 qpair failed and we were unable to recover it. 
00:30:13.926 [2024-07-23 01:51:26.959590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.926 [2024-07-23 01:51:26.959742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.926 [2024-07-23 01:51:26.959767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.926 qpair failed and we were unable to recover it. 00:30:13.926 [2024-07-23 01:51:26.959930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.926 [2024-07-23 01:51:26.960061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.926 [2024-07-23 01:51:26.960085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.926 qpair failed and we were unable to recover it. 00:30:13.926 [2024-07-23 01:51:26.960218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.926 [2024-07-23 01:51:26.960396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.926 [2024-07-23 01:51:26.960422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.926 qpair failed and we were unable to recover it. 00:30:13.926 [2024-07-23 01:51:26.960551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.926 [2024-07-23 01:51:26.960736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.926 [2024-07-23 01:51:26.960762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.926 qpair failed and we were unable to recover it. 
00:30:13.926 [2024-07-23 01:51:26.960895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.926 [2024-07-23 01:51:26.961030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.926 [2024-07-23 01:51:26.961055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.926 qpair failed and we were unable to recover it. 00:30:13.926 [2024-07-23 01:51:26.961200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.926 [2024-07-23 01:51:26.961369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.926 [2024-07-23 01:51:26.961393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.926 qpair failed and we were unable to recover it. 00:30:13.926 [2024-07-23 01:51:26.961541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.926 [2024-07-23 01:51:26.961682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.926 [2024-07-23 01:51:26.961709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.926 qpair failed and we were unable to recover it. 00:30:13.927 [2024-07-23 01:51:26.961843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.927 [2024-07-23 01:51:26.962007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.927 [2024-07-23 01:51:26.962031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.927 qpair failed and we were unable to recover it. 
00:30:13.927 [2024-07-23 01:51:26.962190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.927 [2024-07-23 01:51:26.962316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.927 [2024-07-23 01:51:26.962341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.927 qpair failed and we were unable to recover it. 00:30:13.927 [2024-07-23 01:51:26.962504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.927 [2024-07-23 01:51:26.962661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.927 [2024-07-23 01:51:26.962686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.927 qpair failed and we were unable to recover it. 00:30:13.927 [2024-07-23 01:51:26.962838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.927 [2024-07-23 01:51:26.962997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.927 [2024-07-23 01:51:26.963023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.927 qpair failed and we were unable to recover it. 00:30:13.927 [2024-07-23 01:51:26.963153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.927 [2024-07-23 01:51:26.963310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.927 [2024-07-23 01:51:26.963334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.927 qpair failed and we were unable to recover it. 
00:30:13.927 [2024-07-23 01:51:26.963533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.927 [2024-07-23 01:51:26.963692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.927 [2024-07-23 01:51:26.963717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.927 qpair failed and we were unable to recover it. 00:30:13.927 [2024-07-23 01:51:26.963880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.927 [2024-07-23 01:51:26.964038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.927 [2024-07-23 01:51:26.964063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.927 qpair failed and we were unable to recover it. 00:30:13.927 [2024-07-23 01:51:26.964241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.927 [2024-07-23 01:51:26.964370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.927 [2024-07-23 01:51:26.964395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.927 qpair failed and we were unable to recover it. 00:30:13.927 [2024-07-23 01:51:26.964522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.927 [2024-07-23 01:51:26.964698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.927 [2024-07-23 01:51:26.964723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.927 qpair failed and we were unable to recover it. 
00:30:13.927 [2024-07-23 01:51:26.964856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.927 [2024-07-23 01:51:26.965035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.927 [2024-07-23 01:51:26.965060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.927 qpair failed and we were unable to recover it. 00:30:13.927 [2024-07-23 01:51:26.965190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.927 [2024-07-23 01:51:26.965349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.927 [2024-07-23 01:51:26.965374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.927 qpair failed and we were unable to recover it. 00:30:13.927 [2024-07-23 01:51:26.965504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.927 [2024-07-23 01:51:26.965646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.927 [2024-07-23 01:51:26.965672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.927 qpair failed and we were unable to recover it. 00:30:13.927 [2024-07-23 01:51:26.965808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.927 [2024-07-23 01:51:26.965937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.927 [2024-07-23 01:51:26.965962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.927 qpair failed and we were unable to recover it. 
00:30:13.927 [2024-07-23 01:51:26.966152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.927 [2024-07-23 01:51:26.966289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.927 [2024-07-23 01:51:26.966313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.927 qpair failed and we were unable to recover it. 00:30:13.927 [2024-07-23 01:51:26.966444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.927 [2024-07-23 01:51:26.966603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.927 [2024-07-23 01:51:26.966634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.927 qpair failed and we were unable to recover it. 00:30:13.927 [2024-07-23 01:51:26.966774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.927 [2024-07-23 01:51:26.966908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.927 [2024-07-23 01:51:26.966932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.927 qpair failed and we were unable to recover it. 00:30:13.927 [2024-07-23 01:51:26.967072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.927 [2024-07-23 01:51:26.967207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:13.927 [2024-07-23 01:51:26.967232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:13.927 qpair failed and we were unable to recover it. 
00:30:13.927-00:30:14.205 [2024-07-23 01:51:26.967398 through 01:51:26.994690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
[the same cycle repeats 84 more times: two posix_sock_create connect() failures with errno = 111, then nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it."]
00:30:14.205 [2024-07-23 01:51:26.994840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.205 [2024-07-23 01:51:26.994997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.205 [2024-07-23 01:51:26.995021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.205 qpair failed and we were unable to recover it. 00:30:14.205 [2024-07-23 01:51:26.995212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.205 [2024-07-23 01:51:26.995377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.205 [2024-07-23 01:51:26.995401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.205 qpair failed and we were unable to recover it. 00:30:14.205 [2024-07-23 01:51:26.995558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.205 [2024-07-23 01:51:26.995689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.205 [2024-07-23 01:51:26.995714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.205 qpair failed and we were unable to recover it. 00:30:14.205 [2024-07-23 01:51:26.995891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.205 [2024-07-23 01:51:26.996035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.205 [2024-07-23 01:51:26.996060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.205 qpair failed and we were unable to recover it. 
00:30:14.205 [2024-07-23 01:51:26.996240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.205 [2024-07-23 01:51:26.996390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.205 [2024-07-23 01:51:26.996414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.205 qpair failed and we were unable to recover it. 00:30:14.205 [2024-07-23 01:51:26.996578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.205 [2024-07-23 01:51:26.996756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.205 [2024-07-23 01:51:26.996781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.205 qpair failed and we were unable to recover it. 00:30:14.205 [2024-07-23 01:51:26.997023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.205 [2024-07-23 01:51:26.997181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.205 [2024-07-23 01:51:26.997205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.205 qpair failed and we were unable to recover it. 00:30:14.206 [2024-07-23 01:51:26.997398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.206 [2024-07-23 01:51:26.997571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.206 [2024-07-23 01:51:26.997595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.206 qpair failed and we were unable to recover it. 
00:30:14.206 [2024-07-23 01:51:26.997735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.206 [2024-07-23 01:51:26.997882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.206 [2024-07-23 01:51:26.997907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.206 qpair failed and we were unable to recover it. 00:30:14.206 [2024-07-23 01:51:26.998070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.206 [2024-07-23 01:51:26.998205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.206 [2024-07-23 01:51:26.998230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.206 qpair failed and we were unable to recover it. 00:30:14.206 [2024-07-23 01:51:26.998358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.206 [2024-07-23 01:51:26.998598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.206 [2024-07-23 01:51:26.998628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.206 qpair failed and we were unable to recover it. 00:30:14.206 [2024-07-23 01:51:26.998763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.206 [2024-07-23 01:51:26.998933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.206 [2024-07-23 01:51:26.998957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.206 qpair failed and we were unable to recover it. 
00:30:14.206 [2024-07-23 01:51:26.999149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.206 [2024-07-23 01:51:26.999307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.206 [2024-07-23 01:51:26.999331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.206 qpair failed and we were unable to recover it. 00:30:14.206 [2024-07-23 01:51:26.999495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.206 [2024-07-23 01:51:26.999639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.206 [2024-07-23 01:51:26.999664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.206 qpair failed and we were unable to recover it. 00:30:14.206 [2024-07-23 01:51:26.999821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.206 [2024-07-23 01:51:26.999960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.206 [2024-07-23 01:51:26.999990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.206 qpair failed and we were unable to recover it. 00:30:14.206 [2024-07-23 01:51:27.000158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.206 [2024-07-23 01:51:27.000312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.206 [2024-07-23 01:51:27.000336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.206 qpair failed and we were unable to recover it. 
00:30:14.206 [2024-07-23 01:51:27.000519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.206 [2024-07-23 01:51:27.000670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.206 [2024-07-23 01:51:27.000695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.206 qpair failed and we were unable to recover it. 00:30:14.206 [2024-07-23 01:51:27.000938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.206 [2024-07-23 01:51:27.001112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.206 [2024-07-23 01:51:27.001137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.206 qpair failed and we were unable to recover it. 00:30:14.206 [2024-07-23 01:51:27.001299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.206 [2024-07-23 01:51:27.001450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.206 [2024-07-23 01:51:27.001474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.206 qpair failed and we were unable to recover it. 00:30:14.206 [2024-07-23 01:51:27.001636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.206 [2024-07-23 01:51:27.001780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.206 [2024-07-23 01:51:27.001805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.206 qpair failed and we were unable to recover it. 
00:30:14.206 [2024-07-23 01:51:27.001940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.206 [2024-07-23 01:51:27.002101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.206 [2024-07-23 01:51:27.002125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.206 qpair failed and we were unable to recover it. 00:30:14.206 [2024-07-23 01:51:27.002370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.206 [2024-07-23 01:51:27.002512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.206 [2024-07-23 01:51:27.002536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.206 qpair failed and we were unable to recover it. 00:30:14.206 [2024-07-23 01:51:27.002672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.206 [2024-07-23 01:51:27.002833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.206 [2024-07-23 01:51:27.002857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.206 qpair failed and we were unable to recover it. 00:30:14.206 [2024-07-23 01:51:27.002988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.206 [2024-07-23 01:51:27.003120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.206 [2024-07-23 01:51:27.003144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.206 qpair failed and we were unable to recover it. 
00:30:14.206 [2024-07-23 01:51:27.003281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.206 [2024-07-23 01:51:27.003444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.206 [2024-07-23 01:51:27.003469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.206 qpair failed and we were unable to recover it. 00:30:14.206 [2024-07-23 01:51:27.003632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.206 [2024-07-23 01:51:27.003776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.206 [2024-07-23 01:51:27.003801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.206 qpair failed and we were unable to recover it. 00:30:14.206 [2024-07-23 01:51:27.003934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.206 [2024-07-23 01:51:27.004080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.206 [2024-07-23 01:51:27.004104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.206 qpair failed and we were unable to recover it. 00:30:14.206 [2024-07-23 01:51:27.004266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.206 [2024-07-23 01:51:27.004454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.206 [2024-07-23 01:51:27.004484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.206 qpair failed and we were unable to recover it. 
00:30:14.206 [2024-07-23 01:51:27.004622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.206 [2024-07-23 01:51:27.004812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.206 [2024-07-23 01:51:27.004837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.206 qpair failed and we were unable to recover it. 00:30:14.206 [2024-07-23 01:51:27.004986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.206 [2024-07-23 01:51:27.005114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.206 [2024-07-23 01:51:27.005138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.206 qpair failed and we were unable to recover it. 00:30:14.206 [2024-07-23 01:51:27.005266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.206 [2024-07-23 01:51:27.005421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.206 [2024-07-23 01:51:27.005445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.206 qpair failed and we were unable to recover it. 00:30:14.206 [2024-07-23 01:51:27.005608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.206 [2024-07-23 01:51:27.005751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.206 [2024-07-23 01:51:27.005777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.206 qpair failed and we were unable to recover it. 
00:30:14.206 [2024-07-23 01:51:27.005923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.206 [2024-07-23 01:51:27.006054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.206 [2024-07-23 01:51:27.006078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.206 qpair failed and we were unable to recover it. 00:30:14.206 [2024-07-23 01:51:27.006256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.206 [2024-07-23 01:51:27.006385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.206 [2024-07-23 01:51:27.006408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.206 qpair failed and we were unable to recover it. 00:30:14.206 [2024-07-23 01:51:27.006570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.206 [2024-07-23 01:51:27.006813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.207 [2024-07-23 01:51:27.006840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.207 qpair failed and we were unable to recover it. 00:30:14.207 [2024-07-23 01:51:27.006989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.207 [2024-07-23 01:51:27.007146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.207 [2024-07-23 01:51:27.007171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.207 qpair failed and we were unable to recover it. 
00:30:14.207 [2024-07-23 01:51:27.007364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.207 [2024-07-23 01:51:27.007495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.207 [2024-07-23 01:51:27.007520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.207 qpair failed and we were unable to recover it. 00:30:14.207 [2024-07-23 01:51:27.007647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.207 [2024-07-23 01:51:27.007793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.207 [2024-07-23 01:51:27.007818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.207 qpair failed and we were unable to recover it. 00:30:14.207 [2024-07-23 01:51:27.007953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.207 [2024-07-23 01:51:27.008120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.207 [2024-07-23 01:51:27.008145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.207 qpair failed and we were unable to recover it. 00:30:14.207 [2024-07-23 01:51:27.008283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.207 [2024-07-23 01:51:27.008450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.207 [2024-07-23 01:51:27.008475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.207 qpair failed and we were unable to recover it. 
00:30:14.207 [2024-07-23 01:51:27.008657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.207 [2024-07-23 01:51:27.008787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.207 [2024-07-23 01:51:27.008813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.207 qpair failed and we were unable to recover it. 00:30:14.207 [2024-07-23 01:51:27.008969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.207 [2024-07-23 01:51:27.009104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.207 [2024-07-23 01:51:27.009129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.207 qpair failed and we were unable to recover it. 00:30:14.207 [2024-07-23 01:51:27.009260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.207 [2024-07-23 01:51:27.009449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.207 [2024-07-23 01:51:27.009473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.207 qpair failed and we were unable to recover it. 00:30:14.207 [2024-07-23 01:51:27.009620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.207 [2024-07-23 01:51:27.009807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.207 [2024-07-23 01:51:27.009831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.207 qpair failed and we were unable to recover it. 
00:30:14.207 [2024-07-23 01:51:27.009980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.207 [2024-07-23 01:51:27.010136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.207 [2024-07-23 01:51:27.010161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.207 qpair failed and we were unable to recover it. 00:30:14.207 [2024-07-23 01:51:27.010306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.207 [2024-07-23 01:51:27.010464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.207 [2024-07-23 01:51:27.010489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.207 qpair failed and we were unable to recover it. 00:30:14.207 [2024-07-23 01:51:27.010632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.207 [2024-07-23 01:51:27.010794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.207 [2024-07-23 01:51:27.010819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.207 qpair failed and we were unable to recover it. 00:30:14.207 [2024-07-23 01:51:27.010968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.207 [2024-07-23 01:51:27.011108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.207 [2024-07-23 01:51:27.011132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.207 qpair failed and we were unable to recover it. 
00:30:14.207 [2024-07-23 01:51:27.011310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.207 [2024-07-23 01:51:27.011465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.207 [2024-07-23 01:51:27.011490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.207 qpair failed and we were unable to recover it. 00:30:14.207 [2024-07-23 01:51:27.011637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.207 [2024-07-23 01:51:27.011769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.207 [2024-07-23 01:51:27.011795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.207 qpair failed and we were unable to recover it. 00:30:14.207 [2024-07-23 01:51:27.011956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.207 [2024-07-23 01:51:27.012101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.207 [2024-07-23 01:51:27.012125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.207 qpair failed and we were unable to recover it. 00:30:14.207 [2024-07-23 01:51:27.012288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.207 [2024-07-23 01:51:27.012427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.207 [2024-07-23 01:51:27.012451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.207 qpair failed and we were unable to recover it. 
00:30:14.207 [2024-07-23 01:51:27.012598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.207 [2024-07-23 01:51:27.012761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.207 [2024-07-23 01:51:27.012786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.207 qpair failed and we were unable to recover it. 00:30:14.207 [2024-07-23 01:51:27.012978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.207 [2024-07-23 01:51:27.013111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.207 [2024-07-23 01:51:27.013135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.207 qpair failed and we were unable to recover it. 00:30:14.207 [2024-07-23 01:51:27.013264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.207 [2024-07-23 01:51:27.013438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.207 [2024-07-23 01:51:27.013463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.207 qpair failed and we were unable to recover it. 00:30:14.207 [2024-07-23 01:51:27.013629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.207 [2024-07-23 01:51:27.013802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.207 [2024-07-23 01:51:27.013827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.207 qpair failed and we were unable to recover it. 
00:30:14.207 [2024-07-23 01:51:27.013988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.207 [2024-07-23 01:51:27.014145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.207 [2024-07-23 01:51:27.014170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.207 qpair failed and we were unable to recover it. 00:30:14.207 [2024-07-23 01:51:27.014306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.207 [2024-07-23 01:51:27.014437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.207 [2024-07-23 01:51:27.014462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.207 qpair failed and we were unable to recover it. 00:30:14.207 [2024-07-23 01:51:27.014611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.207 [2024-07-23 01:51:27.014752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.207 [2024-07-23 01:51:27.014777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.207 qpair failed and we were unable to recover it. 00:30:14.207 [2024-07-23 01:51:27.014925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.207 [2024-07-23 01:51:27.015086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.207 [2024-07-23 01:51:27.015111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.207 qpair failed and we were unable to recover it. 
00:30:14.211 [2024-07-23 01:51:27.042970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.211 [2024-07-23 01:51:27.043134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.211 [2024-07-23 01:51:27.043159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.211 qpair failed and we were unable to recover it. 00:30:14.211 [2024-07-23 01:51:27.043296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.211 [2024-07-23 01:51:27.043435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.211 [2024-07-23 01:51:27.043459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.211 qpair failed and we were unable to recover it. 00:30:14.211 [2024-07-23 01:51:27.043637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.211 [2024-07-23 01:51:27.043787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.211 [2024-07-23 01:51:27.043813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.211 qpair failed and we were unable to recover it. 00:30:14.211 [2024-07-23 01:51:27.043945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.211 [2024-07-23 01:51:27.044116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.211 [2024-07-23 01:51:27.044141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.211 qpair failed and we were unable to recover it. 
00:30:14.211 [2024-07-23 01:51:27.044278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.211 [2024-07-23 01:51:27.044410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.211 [2024-07-23 01:51:27.044435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.211 qpair failed and we were unable to recover it. 00:30:14.211 [2024-07-23 01:51:27.044596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.211 [2024-07-23 01:51:27.044732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.211 [2024-07-23 01:51:27.044758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.211 qpair failed and we were unable to recover it. 00:30:14.211 [2024-07-23 01:51:27.044890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.211 [2024-07-23 01:51:27.045018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.211 [2024-07-23 01:51:27.045042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.211 qpair failed and we were unable to recover it. 00:30:14.211 [2024-07-23 01:51:27.045202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.211 [2024-07-23 01:51:27.045330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.211 [2024-07-23 01:51:27.045355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.211 qpair failed and we were unable to recover it. 
00:30:14.211 [2024-07-23 01:51:27.045503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.211 [2024-07-23 01:51:27.045668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.211 [2024-07-23 01:51:27.045694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.211 qpair failed and we were unable to recover it. 00:30:14.211 [2024-07-23 01:51:27.045875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.211 [2024-07-23 01:51:27.046003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.211 [2024-07-23 01:51:27.046027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.211 qpair failed and we were unable to recover it. 00:30:14.211 [2024-07-23 01:51:27.046164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.211 [2024-07-23 01:51:27.046295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.211 [2024-07-23 01:51:27.046319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.211 qpair failed and we were unable to recover it. 00:30:14.211 [2024-07-23 01:51:27.046504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.211 [2024-07-23 01:51:27.046670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.211 [2024-07-23 01:51:27.046696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.211 qpair failed and we were unable to recover it. 
00:30:14.211 [2024-07-23 01:51:27.046861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.211 [2024-07-23 01:51:27.046995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.211 [2024-07-23 01:51:27.047020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.211 qpair failed and we were unable to recover it. 00:30:14.211 [2024-07-23 01:51:27.047183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.211 [2024-07-23 01:51:27.047311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.211 [2024-07-23 01:51:27.047339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.211 qpair failed and we were unable to recover it. 00:30:14.211 [2024-07-23 01:51:27.047488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.211 [2024-07-23 01:51:27.047650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.211 [2024-07-23 01:51:27.047675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.211 qpair failed and we were unable to recover it. 00:30:14.211 [2024-07-23 01:51:27.047821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.211 [2024-07-23 01:51:27.047973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.211 [2024-07-23 01:51:27.047998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.211 qpair failed and we were unable to recover it. 
00:30:14.211 [2024-07-23 01:51:27.048160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.211 [2024-07-23 01:51:27.048287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.211 [2024-07-23 01:51:27.048311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.211 qpair failed and we were unable to recover it. 00:30:14.211 [2024-07-23 01:51:27.048455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.211 [2024-07-23 01:51:27.048622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.211 [2024-07-23 01:51:27.048647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.211 qpair failed and we were unable to recover it. 00:30:14.211 [2024-07-23 01:51:27.048810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.211 [2024-07-23 01:51:27.048959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.211 [2024-07-23 01:51:27.048984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.211 qpair failed and we were unable to recover it. 00:30:14.211 [2024-07-23 01:51:27.049124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.211 [2024-07-23 01:51:27.049284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.211 [2024-07-23 01:51:27.049308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.211 qpair failed and we were unable to recover it. 
00:30:14.211 [2024-07-23 01:51:27.049439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.211 [2024-07-23 01:51:27.049598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.211 [2024-07-23 01:51:27.049630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.211 qpair failed and we were unable to recover it. 00:30:14.211 [2024-07-23 01:51:27.049762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.211 [2024-07-23 01:51:27.049911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.211 [2024-07-23 01:51:27.049935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.211 qpair failed and we were unable to recover it. 00:30:14.211 [2024-07-23 01:51:27.050065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.211 [2024-07-23 01:51:27.050190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.212 [2024-07-23 01:51:27.050215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.212 qpair failed and we were unable to recover it. 00:30:14.212 [2024-07-23 01:51:27.050346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.212 [2024-07-23 01:51:27.050482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.212 [2024-07-23 01:51:27.050507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.212 qpair failed and we were unable to recover it. 
00:30:14.212 [2024-07-23 01:51:27.050652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.212 [2024-07-23 01:51:27.050801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.212 [2024-07-23 01:51:27.050827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.212 qpair failed and we were unable to recover it. 00:30:14.212 [2024-07-23 01:51:27.050978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.212 [2024-07-23 01:51:27.051105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.212 [2024-07-23 01:51:27.051130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.212 qpair failed and we were unable to recover it. 00:30:14.212 [2024-07-23 01:51:27.051294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.212 [2024-07-23 01:51:27.051462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.212 [2024-07-23 01:51:27.051486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.212 qpair failed and we were unable to recover it. 00:30:14.212 [2024-07-23 01:51:27.051631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.212 [2024-07-23 01:51:27.051771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.212 [2024-07-23 01:51:27.051795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.212 qpair failed and we were unable to recover it. 
00:30:14.212 [2024-07-23 01:51:27.051972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.212 [2024-07-23 01:51:27.052117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.212 [2024-07-23 01:51:27.052140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.212 qpair failed and we were unable to recover it. 00:30:14.212 [2024-07-23 01:51:27.052270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.212 [2024-07-23 01:51:27.052431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.212 [2024-07-23 01:51:27.052456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.212 qpair failed and we were unable to recover it. 00:30:14.212 [2024-07-23 01:51:27.052642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.212 [2024-07-23 01:51:27.052778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.212 [2024-07-23 01:51:27.052804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.212 qpair failed and we were unable to recover it. 00:30:14.212 [2024-07-23 01:51:27.052937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.212 [2024-07-23 01:51:27.053097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.212 [2024-07-23 01:51:27.053122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.212 qpair failed and we were unable to recover it. 
00:30:14.212 [2024-07-23 01:51:27.053288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.212 [2024-07-23 01:51:27.053454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.212 [2024-07-23 01:51:27.053480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.212 qpair failed and we were unable to recover it. 00:30:14.212 [2024-07-23 01:51:27.053661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.212 [2024-07-23 01:51:27.053812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.212 [2024-07-23 01:51:27.053836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.212 qpair failed and we were unable to recover it. 00:30:14.212 [2024-07-23 01:51:27.054012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.212 [2024-07-23 01:51:27.054188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.212 [2024-07-23 01:51:27.054212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.212 qpair failed and we were unable to recover it. 00:30:14.212 [2024-07-23 01:51:27.054370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.212 [2024-07-23 01:51:27.054528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.212 [2024-07-23 01:51:27.054552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.212 qpair failed and we were unable to recover it. 
00:30:14.212 [2024-07-23 01:51:27.054731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.212 [2024-07-23 01:51:27.054868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.212 [2024-07-23 01:51:27.054893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.212 qpair failed and we were unable to recover it. 00:30:14.212 [2024-07-23 01:51:27.055036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.212 [2024-07-23 01:51:27.055193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.212 [2024-07-23 01:51:27.055217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.212 qpair failed and we were unable to recover it. 00:30:14.212 [2024-07-23 01:51:27.055345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.212 [2024-07-23 01:51:27.055514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.212 [2024-07-23 01:51:27.055539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.212 qpair failed and we were unable to recover it. 00:30:14.212 [2024-07-23 01:51:27.055668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.212 [2024-07-23 01:51:27.055806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.212 [2024-07-23 01:51:27.055831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.212 qpair failed and we were unable to recover it. 
00:30:14.212 [2024-07-23 01:51:27.055999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.212 [2024-07-23 01:51:27.056130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.212 [2024-07-23 01:51:27.056155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.212 qpair failed and we were unable to recover it. 00:30:14.212 [2024-07-23 01:51:27.056284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.212 [2024-07-23 01:51:27.056415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.212 [2024-07-23 01:51:27.056439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.212 qpair failed and we were unable to recover it. 00:30:14.212 [2024-07-23 01:51:27.056625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.212 [2024-07-23 01:51:27.056784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.212 [2024-07-23 01:51:27.056809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.212 qpair failed and we were unable to recover it. 00:30:14.212 [2024-07-23 01:51:27.056986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.212 [2024-07-23 01:51:27.057143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.212 [2024-07-23 01:51:27.057167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.212 qpair failed and we were unable to recover it. 
00:30:14.212 [2024-07-23 01:51:27.057329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.212 [2024-07-23 01:51:27.057472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.212 [2024-07-23 01:51:27.057497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.212 qpair failed and we were unable to recover it. 00:30:14.212 [2024-07-23 01:51:27.057640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.212 [2024-07-23 01:51:27.057813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.212 [2024-07-23 01:51:27.057838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.212 qpair failed and we were unable to recover it. 00:30:14.212 [2024-07-23 01:51:27.057976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.212 [2024-07-23 01:51:27.058111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.212 [2024-07-23 01:51:27.058135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.212 qpair failed and we were unable to recover it. 00:30:14.212 [2024-07-23 01:51:27.058291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.212 [2024-07-23 01:51:27.058474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.212 [2024-07-23 01:51:27.058498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.212 qpair failed and we were unable to recover it. 
00:30:14.212 [2024-07-23 01:51:27.058645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.212 [2024-07-23 01:51:27.058808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.212 [2024-07-23 01:51:27.058832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.212 qpair failed and we were unable to recover it. 00:30:14.212 [2024-07-23 01:51:27.059000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.212 [2024-07-23 01:51:27.059133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.212 [2024-07-23 01:51:27.059157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.212 qpair failed and we were unable to recover it. 00:30:14.212 [2024-07-23 01:51:27.059349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.213 [2024-07-23 01:51:27.059503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.213 [2024-07-23 01:51:27.059528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.213 qpair failed and we were unable to recover it. 00:30:14.213 [2024-07-23 01:51:27.059678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.213 [2024-07-23 01:51:27.059817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.213 [2024-07-23 01:51:27.059842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.213 qpair failed and we were unable to recover it. 
00:30:14.213 [2024-07-23 01:51:27.059986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.213 [2024-07-23 01:51:27.060115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.213 [2024-07-23 01:51:27.060139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.213 qpair failed and we were unable to recover it. 00:30:14.213 [2024-07-23 01:51:27.060327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.213 [2024-07-23 01:51:27.060469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.213 [2024-07-23 01:51:27.060493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.213 qpair failed and we were unable to recover it. 00:30:14.213 [2024-07-23 01:51:27.060639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.213 [2024-07-23 01:51:27.060790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.213 [2024-07-23 01:51:27.060816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.213 qpair failed and we were unable to recover it. 00:30:14.213 [2024-07-23 01:51:27.060969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.213 [2024-07-23 01:51:27.061117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.213 [2024-07-23 01:51:27.061142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.213 qpair failed and we were unable to recover it. 
00:30:14.213 [2024-07-23 01:51:27.061321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.213 [2024-07-23 01:51:27.061452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.213 [2024-07-23 01:51:27.061476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.213 qpair failed and we were unable to recover it. 00:30:14.213 [2024-07-23 01:51:27.061609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.213 [2024-07-23 01:51:27.061753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.213 [2024-07-23 01:51:27.061778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.213 qpair failed and we were unable to recover it. 00:30:14.213 [2024-07-23 01:51:27.061905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.213 [2024-07-23 01:51:27.062038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.213 [2024-07-23 01:51:27.062062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.213 qpair failed and we were unable to recover it. 00:30:14.213 [2024-07-23 01:51:27.062230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.213 [2024-07-23 01:51:27.062375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.213 [2024-07-23 01:51:27.062400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.213 qpair failed and we were unable to recover it. 
00:30:14.216 [2024-07-23 01:51:27.089786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.216 [2024-07-23 01:51:27.089951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.216 [2024-07-23 01:51:27.089976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.216 qpair failed and we were unable to recover it. 00:30:14.216 [2024-07-23 01:51:27.090138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.216 [2024-07-23 01:51:27.090318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.216 [2024-07-23 01:51:27.090342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.216 qpair failed and we were unable to recover it. 00:30:14.216 [2024-07-23 01:51:27.090491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.216 [2024-07-23 01:51:27.090636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.216 [2024-07-23 01:51:27.090667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.216 qpair failed and we were unable to recover it. 00:30:14.216 [2024-07-23 01:51:27.090799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.216 [2024-07-23 01:51:27.090963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.216 [2024-07-23 01:51:27.090988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.216 qpair failed and we were unable to recover it. 
00:30:14.216 [2024-07-23 01:51:27.091118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.216 [2024-07-23 01:51:27.091276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.216 [2024-07-23 01:51:27.091302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.216 qpair failed and we were unable to recover it. 00:30:14.216 [2024-07-23 01:51:27.091480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.216 [2024-07-23 01:51:27.091610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.216 [2024-07-23 01:51:27.091640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.216 qpair failed and we were unable to recover it. 00:30:14.216 [2024-07-23 01:51:27.091777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.216 [2024-07-23 01:51:27.091955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.216 [2024-07-23 01:51:27.091980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.216 qpair failed and we were unable to recover it. 00:30:14.216 [2024-07-23 01:51:27.092112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.216 [2024-07-23 01:51:27.092241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.216 [2024-07-23 01:51:27.092265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.216 qpair failed and we were unable to recover it. 
00:30:14.216 [2024-07-23 01:51:27.092414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.216 [2024-07-23 01:51:27.092571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.216 [2024-07-23 01:51:27.092595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.216 qpair failed and we were unable to recover it. 00:30:14.216 [2024-07-23 01:51:27.092784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.216 [2024-07-23 01:51:27.092947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.216 [2024-07-23 01:51:27.092971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.216 qpair failed and we were unable to recover it. 00:30:14.216 [2024-07-23 01:51:27.093100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.216 [2024-07-23 01:51:27.093268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.216 [2024-07-23 01:51:27.093292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.216 qpair failed and we were unable to recover it. 00:30:14.216 [2024-07-23 01:51:27.093437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.216 [2024-07-23 01:51:27.093571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.216 [2024-07-23 01:51:27.093595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.216 qpair failed and we were unable to recover it. 
00:30:14.216 [2024-07-23 01:51:27.093800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.216 [2024-07-23 01:51:27.093975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.216 [2024-07-23 01:51:27.094003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.216 qpair failed and we were unable to recover it. 00:30:14.216 [2024-07-23 01:51:27.094139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.217 [2024-07-23 01:51:27.094276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.217 [2024-07-23 01:51:27.094302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.217 qpair failed and we were unable to recover it. 00:30:14.217 [2024-07-23 01:51:27.094460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.217 [2024-07-23 01:51:27.094602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.217 [2024-07-23 01:51:27.094632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.217 qpair failed and we were unable to recover it. 00:30:14.217 [2024-07-23 01:51:27.094765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.217 [2024-07-23 01:51:27.094906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.217 [2024-07-23 01:51:27.094932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.217 qpair failed and we were unable to recover it. 
00:30:14.217 [2024-07-23 01:51:27.095095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.217 [2024-07-23 01:51:27.095256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.217 [2024-07-23 01:51:27.095280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.217 qpair failed and we were unable to recover it. 00:30:14.217 [2024-07-23 01:51:27.095435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.217 [2024-07-23 01:51:27.095566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.217 [2024-07-23 01:51:27.095590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.217 qpair failed and we were unable to recover it. 00:30:14.217 [2024-07-23 01:51:27.095738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.217 [2024-07-23 01:51:27.095897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.217 [2024-07-23 01:51:27.095922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.217 qpair failed and we were unable to recover it. 00:30:14.217 [2024-07-23 01:51:27.096082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.217 [2024-07-23 01:51:27.096217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.217 [2024-07-23 01:51:27.096244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.217 qpair failed and we were unable to recover it. 
00:30:14.217 [2024-07-23 01:51:27.096379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.217 [2024-07-23 01:51:27.096540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.217 [2024-07-23 01:51:27.096564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.217 qpair failed and we were unable to recover it. 00:30:14.217 [2024-07-23 01:51:27.096714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.217 [2024-07-23 01:51:27.096859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.217 [2024-07-23 01:51:27.096885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.217 qpair failed and we were unable to recover it. 00:30:14.217 [2024-07-23 01:51:27.097078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.217 [2024-07-23 01:51:27.097251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.217 [2024-07-23 01:51:27.097275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.217 qpair failed and we were unable to recover it. 00:30:14.217 [2024-07-23 01:51:27.097405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.217 [2024-07-23 01:51:27.097545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.217 [2024-07-23 01:51:27.097569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.217 qpair failed and we were unable to recover it. 
00:30:14.217 [2024-07-23 01:51:27.097727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.217 [2024-07-23 01:51:27.097852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.217 [2024-07-23 01:51:27.097877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.217 qpair failed and we were unable to recover it. 00:30:14.217 [2024-07-23 01:51:27.098074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.217 [2024-07-23 01:51:27.098205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.217 [2024-07-23 01:51:27.098229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.217 qpair failed and we were unable to recover it. 00:30:14.217 [2024-07-23 01:51:27.098399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.217 [2024-07-23 01:51:27.098526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.217 [2024-07-23 01:51:27.098550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.217 qpair failed and we were unable to recover it. 00:30:14.217 [2024-07-23 01:51:27.098686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.217 [2024-07-23 01:51:27.098836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.217 [2024-07-23 01:51:27.098861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.217 qpair failed and we were unable to recover it. 
00:30:14.217 [2024-07-23 01:51:27.099030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.217 [2024-07-23 01:51:27.099195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.217 [2024-07-23 01:51:27.099219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.217 qpair failed and we were unable to recover it. 00:30:14.217 [2024-07-23 01:51:27.099378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.217 [2024-07-23 01:51:27.099506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.217 [2024-07-23 01:51:27.099530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.217 qpair failed and we were unable to recover it. 00:30:14.217 [2024-07-23 01:51:27.099675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.217 [2024-07-23 01:51:27.099806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.217 [2024-07-23 01:51:27.099830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.217 qpair failed and we were unable to recover it. 00:30:14.217 [2024-07-23 01:51:27.099967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.217 [2024-07-23 01:51:27.100097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.217 [2024-07-23 01:51:27.100122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.217 qpair failed and we were unable to recover it. 
00:30:14.217 [2024-07-23 01:51:27.100289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.217 [2024-07-23 01:51:27.100450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.217 [2024-07-23 01:51:27.100474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.217 qpair failed and we were unable to recover it. 00:30:14.217 [2024-07-23 01:51:27.100646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.217 [2024-07-23 01:51:27.100782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.217 [2024-07-23 01:51:27.100807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.217 qpair failed and we were unable to recover it. 00:30:14.217 [2024-07-23 01:51:27.100948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.217 [2024-07-23 01:51:27.101112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.217 [2024-07-23 01:51:27.101136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.217 qpair failed and we were unable to recover it. 00:30:14.217 [2024-07-23 01:51:27.101282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.217 [2024-07-23 01:51:27.101422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.217 [2024-07-23 01:51:27.101446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.217 qpair failed and we were unable to recover it. 
00:30:14.217 [2024-07-23 01:51:27.101626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.217 [2024-07-23 01:51:27.101793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.217 [2024-07-23 01:51:27.101817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.217 qpair failed and we were unable to recover it. 00:30:14.217 [2024-07-23 01:51:27.101992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.217 [2024-07-23 01:51:27.102119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.217 [2024-07-23 01:51:27.102143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.217 qpair failed and we were unable to recover it. 00:30:14.217 [2024-07-23 01:51:27.102281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.217 [2024-07-23 01:51:27.102405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.217 [2024-07-23 01:51:27.102430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.217 qpair failed and we were unable to recover it. 00:30:14.217 [2024-07-23 01:51:27.102570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.217 [2024-07-23 01:51:27.102720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.217 [2024-07-23 01:51:27.102746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.217 qpair failed and we were unable to recover it. 
00:30:14.217 [2024-07-23 01:51:27.102895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.217 [2024-07-23 01:51:27.103026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.217 [2024-07-23 01:51:27.103051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.217 qpair failed and we were unable to recover it. 00:30:14.218 [2024-07-23 01:51:27.103228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.218 [2024-07-23 01:51:27.103397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.218 [2024-07-23 01:51:27.103421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.218 qpair failed and we were unable to recover it. 00:30:14.218 [2024-07-23 01:51:27.103582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.218 [2024-07-23 01:51:27.103735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.218 [2024-07-23 01:51:27.103760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.218 qpair failed and we were unable to recover it. 00:30:14.218 [2024-07-23 01:51:27.103934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.218 [2024-07-23 01:51:27.104071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.218 [2024-07-23 01:51:27.104097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.218 qpair failed and we were unable to recover it. 
00:30:14.218 [2024-07-23 01:51:27.104259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.218 [2024-07-23 01:51:27.104422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.218 [2024-07-23 01:51:27.104446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.218 qpair failed and we were unable to recover it. 00:30:14.218 [2024-07-23 01:51:27.104623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.218 [2024-07-23 01:51:27.104765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.218 [2024-07-23 01:51:27.104790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.218 qpair failed and we were unable to recover it. 00:30:14.218 [2024-07-23 01:51:27.104921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.218 [2024-07-23 01:51:27.105058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.218 [2024-07-23 01:51:27.105083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.218 qpair failed and we were unable to recover it. 00:30:14.218 [2024-07-23 01:51:27.105247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.218 [2024-07-23 01:51:27.105411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.218 [2024-07-23 01:51:27.105435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.218 qpair failed and we were unable to recover it. 
00:30:14.218 [2024-07-23 01:51:27.105572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.218 [2024-07-23 01:51:27.105752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.218 [2024-07-23 01:51:27.105777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.218 qpair failed and we were unable to recover it. 00:30:14.218 [2024-07-23 01:51:27.105913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.218 [2024-07-23 01:51:27.106050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.218 [2024-07-23 01:51:27.106076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.218 qpair failed and we were unable to recover it. 00:30:14.218 [2024-07-23 01:51:27.106254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.218 [2024-07-23 01:51:27.106383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.218 [2024-07-23 01:51:27.106408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.218 qpair failed and we were unable to recover it. 00:30:14.218 [2024-07-23 01:51:27.106539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.218 [2024-07-23 01:51:27.106677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.218 [2024-07-23 01:51:27.106703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.218 qpair failed and we were unable to recover it. 
00:30:14.218 [2024-07-23 01:51:27.106842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.218 [2024-07-23 01:51:27.107007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.218 [2024-07-23 01:51:27.107031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.218 qpair failed and we were unable to recover it. 00:30:14.218 [2024-07-23 01:51:27.107176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.218 [2024-07-23 01:51:27.107354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.218 [2024-07-23 01:51:27.107378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.218 qpair failed and we were unable to recover it. 00:30:14.218 [2024-07-23 01:51:27.107519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.218 [2024-07-23 01:51:27.107696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.218 [2024-07-23 01:51:27.107721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.218 qpair failed and we were unable to recover it. 00:30:14.218 [2024-07-23 01:51:27.107855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.218 [2024-07-23 01:51:27.107999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.218 [2024-07-23 01:51:27.108025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.218 qpair failed and we were unable to recover it. 
00:30:14.218 [2024-07-23 01:51:27.108223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.218 [2024-07-23 01:51:27.108353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.218 [2024-07-23 01:51:27.108378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.218 qpair failed and we were unable to recover it. 00:30:14.218 [2024-07-23 01:51:27.108509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.218 [2024-07-23 01:51:27.108696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.218 [2024-07-23 01:51:27.108721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.218 qpair failed and we were unable to recover it. 00:30:14.218 [2024-07-23 01:51:27.108847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.218 [2024-07-23 01:51:27.108972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.218 [2024-07-23 01:51:27.108996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.218 qpair failed and we were unable to recover it. 00:30:14.218 [2024-07-23 01:51:27.109142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.218 [2024-07-23 01:51:27.109303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.218 [2024-07-23 01:51:27.109328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.218 qpair failed and we were unable to recover it. 
00:30:14.221 [2024-07-23 01:51:27.137011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.221 [2024-07-23 01:51:27.137145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.221 [2024-07-23 01:51:27.137170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.221 qpair failed and we were unable to recover it. 00:30:14.221 [2024-07-23 01:51:27.137318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.221 [2024-07-23 01:51:27.137476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.221 [2024-07-23 01:51:27.137501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.221 qpair failed and we were unable to recover it. 00:30:14.221 [2024-07-23 01:51:27.137644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.221 [2024-07-23 01:51:27.137779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.221 [2024-07-23 01:51:27.137803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.221 qpair failed and we were unable to recover it. 00:30:14.221 [2024-07-23 01:51:27.137939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.221 [2024-07-23 01:51:27.138180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.221 [2024-07-23 01:51:27.138204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.221 qpair failed and we were unable to recover it. 
00:30:14.221 [2024-07-23 01:51:27.138396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.222 [2024-07-23 01:51:27.138540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.222 [2024-07-23 01:51:27.138564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.222 qpair failed and we were unable to recover it. 00:30:14.222 [2024-07-23 01:51:27.138712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.222 [2024-07-23 01:51:27.138851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.222 [2024-07-23 01:51:27.138876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.222 qpair failed and we were unable to recover it. 00:30:14.222 [2024-07-23 01:51:27.139064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.222 [2024-07-23 01:51:27.139190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.222 [2024-07-23 01:51:27.139215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.222 qpair failed and we were unable to recover it. 00:30:14.222 [2024-07-23 01:51:27.139383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.222 [2024-07-23 01:51:27.139515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.222 [2024-07-23 01:51:27.139539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.222 qpair failed and we were unable to recover it. 
00:30:14.222 [2024-07-23 01:51:27.139685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.222 [2024-07-23 01:51:27.139847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.222 [2024-07-23 01:51:27.139873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.222 qpair failed and we were unable to recover it. 00:30:14.222 [2024-07-23 01:51:27.140065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.222 [2024-07-23 01:51:27.140222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.222 [2024-07-23 01:51:27.140246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.222 qpair failed and we were unable to recover it. 00:30:14.222 [2024-07-23 01:51:27.140441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.222 [2024-07-23 01:51:27.140569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.222 [2024-07-23 01:51:27.140593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf83610 with addr=10.0.0.2, port=4420 00:30:14.222 qpair failed and we were unable to recover it. 00:30:14.222 [2024-07-23 01:51:27.140759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.222 [2024-07-23 01:51:27.140909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.222 [2024-07-23 01:51:27.140948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.222 qpair failed and we were unable to recover it. 
00:30:14.222 [2024-07-23 01:51:27.141096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.222 [2024-07-23 01:51:27.141293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.222 [2024-07-23 01:51:27.141319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.222 qpair failed and we were unable to recover it. 00:30:14.222 [2024-07-23 01:51:27.141458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.222 [2024-07-23 01:51:27.141595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.222 [2024-07-23 01:51:27.141629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.222 qpair failed and we were unable to recover it. 00:30:14.222 [2024-07-23 01:51:27.141769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.222 [2024-07-23 01:51:27.141931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.222 [2024-07-23 01:51:27.141956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.222 qpair failed and we were unable to recover it. 00:30:14.222 [2024-07-23 01:51:27.142098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.222 [2024-07-23 01:51:27.142270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.222 [2024-07-23 01:51:27.142295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.222 qpair failed and we were unable to recover it. 
00:30:14.222 [2024-07-23 01:51:27.142433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.222 [2024-07-23 01:51:27.142627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.222 [2024-07-23 01:51:27.142654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.222 qpair failed and we were unable to recover it. 00:30:14.222 [2024-07-23 01:51:27.142788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.222 [2024-07-23 01:51:27.142934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.222 [2024-07-23 01:51:27.142959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.222 qpair failed and we were unable to recover it. 00:30:14.222 [2024-07-23 01:51:27.143110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.222 [2024-07-23 01:51:27.143265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.222 [2024-07-23 01:51:27.143290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.222 qpair failed and we were unable to recover it. 00:30:14.222 [2024-07-23 01:51:27.143422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.222 [2024-07-23 01:51:27.143586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.222 [2024-07-23 01:51:27.143611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.222 qpair failed and we were unable to recover it. 
00:30:14.222 [2024-07-23 01:51:27.143766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.222 [2024-07-23 01:51:27.143929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.222 [2024-07-23 01:51:27.143953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.222 qpair failed and we were unable to recover it. 00:30:14.222 [2024-07-23 01:51:27.144147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.222 [2024-07-23 01:51:27.144292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.222 [2024-07-23 01:51:27.144321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.222 qpair failed and we were unable to recover it. 00:30:14.222 [2024-07-23 01:51:27.144486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.222 [2024-07-23 01:51:27.144643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.222 [2024-07-23 01:51:27.144668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.222 qpair failed and we were unable to recover it. 00:30:14.222 [2024-07-23 01:51:27.144817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.222 [2024-07-23 01:51:27.144984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.222 [2024-07-23 01:51:27.145009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.222 qpair failed and we were unable to recover it. 
00:30:14.222 [2024-07-23 01:51:27.145176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.222 [2024-07-23 01:51:27.145346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.222 [2024-07-23 01:51:27.145370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.222 qpair failed and we were unable to recover it. 00:30:14.222 [2024-07-23 01:51:27.145530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.222 [2024-07-23 01:51:27.145684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.222 [2024-07-23 01:51:27.145710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.222 qpair failed and we were unable to recover it. 00:30:14.222 [2024-07-23 01:51:27.145873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.222 [2024-07-23 01:51:27.146010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.222 [2024-07-23 01:51:27.146036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.222 qpair failed and we were unable to recover it. 00:30:14.222 [2024-07-23 01:51:27.146178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.222 [2024-07-23 01:51:27.146326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.222 [2024-07-23 01:51:27.146351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.222 qpair failed and we were unable to recover it. 
00:30:14.222 [2024-07-23 01:51:27.146545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.222 [2024-07-23 01:51:27.146682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.222 [2024-07-23 01:51:27.146708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.222 qpair failed and we were unable to recover it. 00:30:14.222 [2024-07-23 01:51:27.146872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.222 [2024-07-23 01:51:27.147036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.222 [2024-07-23 01:51:27.147062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.222 qpair failed and we were unable to recover it. 00:30:14.222 [2024-07-23 01:51:27.147197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.222 [2024-07-23 01:51:27.147386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.222 [2024-07-23 01:51:27.147411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.222 qpair failed and we were unable to recover it. 00:30:14.222 [2024-07-23 01:51:27.147580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.222 [2024-07-23 01:51:27.147725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.222 [2024-07-23 01:51:27.147755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.222 qpair failed and we were unable to recover it. 
00:30:14.223 [2024-07-23 01:51:27.147902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.223 [2024-07-23 01:51:27.148056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.223 [2024-07-23 01:51:27.148080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.223 qpair failed and we were unable to recover it. 00:30:14.223 [2024-07-23 01:51:27.148232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.223 [2024-07-23 01:51:27.148373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.223 [2024-07-23 01:51:27.148398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.223 qpair failed and we were unable to recover it. 00:30:14.223 [2024-07-23 01:51:27.148534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.223 [2024-07-23 01:51:27.148664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.223 [2024-07-23 01:51:27.148689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.223 qpair failed and we were unable to recover it. 00:30:14.223 [2024-07-23 01:51:27.148825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.223 [2024-07-23 01:51:27.148989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.223 [2024-07-23 01:51:27.149014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.223 qpair failed and we were unable to recover it. 
00:30:14.223 [2024-07-23 01:51:27.149177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.223 [2024-07-23 01:51:27.149322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.223 [2024-07-23 01:51:27.149347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.223 qpair failed and we were unable to recover it. 00:30:14.223 [2024-07-23 01:51:27.149501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.223 [2024-07-23 01:51:27.149659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.223 [2024-07-23 01:51:27.149684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.223 qpair failed and we were unable to recover it. 00:30:14.223 [2024-07-23 01:51:27.149856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.223 [2024-07-23 01:51:27.149994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.223 [2024-07-23 01:51:27.150018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.223 qpair failed and we were unable to recover it. 00:30:14.223 [2024-07-23 01:51:27.150180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.223 [2024-07-23 01:51:27.150324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.223 [2024-07-23 01:51:27.150348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.223 qpair failed and we were unable to recover it. 
00:30:14.223 [2024-07-23 01:51:27.150514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.223 [2024-07-23 01:51:27.150657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.223 [2024-07-23 01:51:27.150683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.223 qpair failed and we were unable to recover it. 00:30:14.223 [2024-07-23 01:51:27.150841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.223 [2024-07-23 01:51:27.150973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.223 [2024-07-23 01:51:27.150998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.223 qpair failed and we were unable to recover it. 00:30:14.223 [2024-07-23 01:51:27.151138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.223 [2024-07-23 01:51:27.151298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.223 [2024-07-23 01:51:27.151323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.223 qpair failed and we were unable to recover it. 00:30:14.223 [2024-07-23 01:51:27.151458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.223 [2024-07-23 01:51:27.151610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.223 [2024-07-23 01:51:27.151639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.223 qpair failed and we were unable to recover it. 
00:30:14.223 [2024-07-23 01:51:27.151784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.223 [2024-07-23 01:51:27.151950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.223 [2024-07-23 01:51:27.151976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.223 qpair failed and we were unable to recover it. 00:30:14.223 [2024-07-23 01:51:27.152140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.223 [2024-07-23 01:51:27.152265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.223 [2024-07-23 01:51:27.152290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.223 qpair failed and we were unable to recover it. 00:30:14.223 [2024-07-23 01:51:27.152452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.223 [2024-07-23 01:51:27.152608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.223 [2024-07-23 01:51:27.152642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.223 qpair failed and we were unable to recover it. 00:30:14.223 [2024-07-23 01:51:27.152802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.223 [2024-07-23 01:51:27.152970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.223 [2024-07-23 01:51:27.152995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.223 qpair failed and we were unable to recover it. 
00:30:14.223 [2024-07-23 01:51:27.153187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.223 [2024-07-23 01:51:27.153315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.223 [2024-07-23 01:51:27.153340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.223 qpair failed and we were unable to recover it. 00:30:14.223 [2024-07-23 01:51:27.153493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.223 [2024-07-23 01:51:27.153625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.223 [2024-07-23 01:51:27.153651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.223 qpair failed and we were unable to recover it. 00:30:14.223 [2024-07-23 01:51:27.153812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.223 [2024-07-23 01:51:27.153976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.223 [2024-07-23 01:51:27.154001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.223 qpair failed and we were unable to recover it. 00:30:14.223 [2024-07-23 01:51:27.154192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.223 [2024-07-23 01:51:27.154334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.223 [2024-07-23 01:51:27.154361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.223 qpair failed and we were unable to recover it. 
00:30:14.223 [2024-07-23 01:51:27.154529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.223 [2024-07-23 01:51:27.154693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.223 [2024-07-23 01:51:27.154720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.223 qpair failed and we were unable to recover it. 00:30:14.223 [2024-07-23 01:51:27.154854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.223 [2024-07-23 01:51:27.154986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.223 [2024-07-23 01:51:27.155012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.223 qpair failed and we were unable to recover it. 00:30:14.223 [2024-07-23 01:51:27.155151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.223 [2024-07-23 01:51:27.155311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.223 [2024-07-23 01:51:27.155336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.223 qpair failed and we were unable to recover it. 00:30:14.223 [2024-07-23 01:51:27.155534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.223 [2024-07-23 01:51:27.155674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.224 [2024-07-23 01:51:27.155700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.224 qpair failed and we were unable to recover it. 
00:30:14.224 [2024-07-23 01:51:27.155841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.224 [2024-07-23 01:51:27.156003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.224 [2024-07-23 01:51:27.156028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.224 qpair failed and we were unable to recover it.
00:30:14.227 [... identical connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." sequence repeated through 2024-07-23 01:51:27.185187, all for tqpair=0x7fceb8000b90, addr=10.0.0.2, port=4420 ...]
00:30:14.227 [2024-07-23 01:51:27.185353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.227 [2024-07-23 01:51:27.185498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.227 [2024-07-23 01:51:27.185523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.227 qpair failed and we were unable to recover it. 00:30:14.227 [2024-07-23 01:51:27.185658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.227 [2024-07-23 01:51:27.185788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.227 [2024-07-23 01:51:27.185812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.227 qpair failed and we were unable to recover it. 00:30:14.227 [2024-07-23 01:51:27.185960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.227 [2024-07-23 01:51:27.186103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.227 [2024-07-23 01:51:27.186128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.227 qpair failed and we were unable to recover it. 00:30:14.227 [2024-07-23 01:51:27.186277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.227 [2024-07-23 01:51:27.186408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.227 [2024-07-23 01:51:27.186433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.227 qpair failed and we were unable to recover it. 
00:30:14.227 [2024-07-23 01:51:27.186572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.227 [2024-07-23 01:51:27.186712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.227 [2024-07-23 01:51:27.186738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.227 qpair failed and we were unable to recover it. 00:30:14.227 [2024-07-23 01:51:27.186878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.227 [2024-07-23 01:51:27.187040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.227 [2024-07-23 01:51:27.187065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.227 qpair failed and we were unable to recover it. 00:30:14.227 [2024-07-23 01:51:27.187195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.227 [2024-07-23 01:51:27.187340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.227 [2024-07-23 01:51:27.187365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.227 qpair failed and we were unable to recover it. 00:30:14.227 [2024-07-23 01:51:27.187557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.227 [2024-07-23 01:51:27.187686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.227 [2024-07-23 01:51:27.187712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.227 qpair failed and we were unable to recover it. 
00:30:14.227 [2024-07-23 01:51:27.187849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.227 [2024-07-23 01:51:27.187985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.227 [2024-07-23 01:51:27.188010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.227 qpair failed and we were unable to recover it. 00:30:14.227 [2024-07-23 01:51:27.188141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.227 [2024-07-23 01:51:27.188274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.227 [2024-07-23 01:51:27.188300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.227 qpair failed and we were unable to recover it. 00:30:14.227 [2024-07-23 01:51:27.188500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.227 [2024-07-23 01:51:27.188635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.227 [2024-07-23 01:51:27.188661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.227 qpair failed and we were unable to recover it. 00:30:14.227 [2024-07-23 01:51:27.188829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.227 [2024-07-23 01:51:27.188953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.227 [2024-07-23 01:51:27.188978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.227 qpair failed and we were unable to recover it. 
00:30:14.227 [2024-07-23 01:51:27.189142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.227 [2024-07-23 01:51:27.189304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.227 [2024-07-23 01:51:27.189331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.227 qpair failed and we were unable to recover it. 00:30:14.227 [2024-07-23 01:51:27.189463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.227 [2024-07-23 01:51:27.189637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.227 [2024-07-23 01:51:27.189662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.227 qpair failed and we were unable to recover it. 00:30:14.227 [2024-07-23 01:51:27.189796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.227 [2024-07-23 01:51:27.189956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.227 [2024-07-23 01:51:27.189983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.227 qpair failed and we were unable to recover it. 00:30:14.227 [2024-07-23 01:51:27.190137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.227 [2024-07-23 01:51:27.190326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.227 [2024-07-23 01:51:27.190350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.227 qpair failed and we were unable to recover it. 
00:30:14.227 [2024-07-23 01:51:27.190514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.227 [2024-07-23 01:51:27.190645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.227 [2024-07-23 01:51:27.190670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.227 qpair failed and we were unable to recover it. 00:30:14.227 [2024-07-23 01:51:27.190840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.227 [2024-07-23 01:51:27.190990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.227 [2024-07-23 01:51:27.191015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.228 qpair failed and we were unable to recover it. 00:30:14.228 [2024-07-23 01:51:27.191160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.228 [2024-07-23 01:51:27.191308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.228 [2024-07-23 01:51:27.191337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.228 qpair failed and we were unable to recover it. 00:30:14.228 [2024-07-23 01:51:27.191481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.228 [2024-07-23 01:51:27.191646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.228 [2024-07-23 01:51:27.191671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.228 qpair failed and we were unable to recover it. 
00:30:14.228 [2024-07-23 01:51:27.191837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.228 [2024-07-23 01:51:27.191994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.228 [2024-07-23 01:51:27.192018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.228 qpair failed and we were unable to recover it. 00:30:14.228 [2024-07-23 01:51:27.192195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.228 [2024-07-23 01:51:27.192325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.228 [2024-07-23 01:51:27.192349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.228 qpair failed and we were unable to recover it. 00:30:14.228 [2024-07-23 01:51:27.192530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.228 [2024-07-23 01:51:27.192689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.228 [2024-07-23 01:51:27.192715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.228 qpair failed and we were unable to recover it. 00:30:14.228 [2024-07-23 01:51:27.192855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.228 [2024-07-23 01:51:27.193014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.228 [2024-07-23 01:51:27.193040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.228 qpair failed and we were unable to recover it. 
00:30:14.228 [2024-07-23 01:51:27.193208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.228 [2024-07-23 01:51:27.193367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.228 [2024-07-23 01:51:27.193392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.228 qpair failed and we were unable to recover it. 00:30:14.228 [2024-07-23 01:51:27.193538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.228 [2024-07-23 01:51:27.193677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.228 [2024-07-23 01:51:27.193704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.228 qpair failed and we were unable to recover it. 00:30:14.228 [2024-07-23 01:51:27.193839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.228 [2024-07-23 01:51:27.193977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.228 [2024-07-23 01:51:27.194003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.228 qpair failed and we were unable to recover it. 00:30:14.228 [2024-07-23 01:51:27.194163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.228 [2024-07-23 01:51:27.194321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.228 [2024-07-23 01:51:27.194345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.228 qpair failed and we were unable to recover it. 
00:30:14.228 [2024-07-23 01:51:27.194521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.228 [2024-07-23 01:51:27.194683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.228 [2024-07-23 01:51:27.194713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.228 qpair failed and we were unable to recover it. 00:30:14.228 [2024-07-23 01:51:27.194846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.228 [2024-07-23 01:51:27.195010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.228 [2024-07-23 01:51:27.195035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.228 qpair failed and we were unable to recover it. 00:30:14.228 [2024-07-23 01:51:27.195196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.228 [2024-07-23 01:51:27.195351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.228 [2024-07-23 01:51:27.195375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.228 qpair failed and we were unable to recover it. 00:30:14.228 [2024-07-23 01:51:27.195522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.228 [2024-07-23 01:51:27.195671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.228 [2024-07-23 01:51:27.195697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.228 qpair failed and we were unable to recover it. 
00:30:14.228 [2024-07-23 01:51:27.195862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.228 [2024-07-23 01:51:27.196018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.228 [2024-07-23 01:51:27.196043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.228 qpair failed and we were unable to recover it. 00:30:14.228 [2024-07-23 01:51:27.196178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.228 [2024-07-23 01:51:27.196337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.228 [2024-07-23 01:51:27.196362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.228 qpair failed and we were unable to recover it. 00:30:14.228 [2024-07-23 01:51:27.196514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.228 [2024-07-23 01:51:27.196672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.228 [2024-07-23 01:51:27.196699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.228 qpair failed and we were unable to recover it. 00:30:14.228 [2024-07-23 01:51:27.196836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.228 [2024-07-23 01:51:27.196967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.228 [2024-07-23 01:51:27.196993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.228 qpair failed and we were unable to recover it. 
00:30:14.228 [2024-07-23 01:51:27.197156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.228 [2024-07-23 01:51:27.197315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.228 [2024-07-23 01:51:27.197342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.228 qpair failed and we were unable to recover it. 00:30:14.228 [2024-07-23 01:51:27.197516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.228 [2024-07-23 01:51:27.197652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.228 [2024-07-23 01:51:27.197678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.228 qpair failed and we were unable to recover it. 00:30:14.228 [2024-07-23 01:51:27.197821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.228 [2024-07-23 01:51:27.197955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.228 [2024-07-23 01:51:27.197985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.228 qpair failed and we were unable to recover it. 00:30:14.228 [2024-07-23 01:51:27.198180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.228 [2024-07-23 01:51:27.198356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.228 [2024-07-23 01:51:27.198381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.228 qpair failed and we were unable to recover it. 
00:30:14.228 [2024-07-23 01:51:27.198568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.228 [2024-07-23 01:51:27.198763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.228 [2024-07-23 01:51:27.198788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.228 qpair failed and we were unable to recover it. 00:30:14.228 [2024-07-23 01:51:27.198941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.228 [2024-07-23 01:51:27.199103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.228 [2024-07-23 01:51:27.199129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.228 qpair failed and we were unable to recover it. 00:30:14.228 [2024-07-23 01:51:27.199325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.228 [2024-07-23 01:51:27.199452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.228 [2024-07-23 01:51:27.199477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.228 qpair failed and we were unable to recover it. 00:30:14.228 [2024-07-23 01:51:27.199638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.228 [2024-07-23 01:51:27.199794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.228 [2024-07-23 01:51:27.199818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.228 qpair failed and we were unable to recover it. 
00:30:14.228 [2024-07-23 01:51:27.199962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.228 [2024-07-23 01:51:27.200090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.228 [2024-07-23 01:51:27.200114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.228 qpair failed and we were unable to recover it. 00:30:14.229 [2024-07-23 01:51:27.200275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.229 [2024-07-23 01:51:27.200421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.229 [2024-07-23 01:51:27.200446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.229 qpair failed and we were unable to recover it. 00:30:14.229 [2024-07-23 01:51:27.200583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.229 [2024-07-23 01:51:27.200745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.229 [2024-07-23 01:51:27.200771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.229 qpair failed and we were unable to recover it. 00:30:14.229 [2024-07-23 01:51:27.200905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.229 [2024-07-23 01:51:27.201062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.229 [2024-07-23 01:51:27.201087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.229 qpair failed and we were unable to recover it. 
00:30:14.229 [2024-07-23 01:51:27.201225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.229 [2024-07-23 01:51:27.201367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.229 [2024-07-23 01:51:27.201398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.229 qpair failed and we were unable to recover it. 00:30:14.229 [2024-07-23 01:51:27.201563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.229 [2024-07-23 01:51:27.201694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.229 [2024-07-23 01:51:27.201720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.229 qpair failed and we were unable to recover it. 00:30:14.229 [2024-07-23 01:51:27.201854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.229 [2024-07-23 01:51:27.201999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.229 [2024-07-23 01:51:27.202023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.229 qpair failed and we were unable to recover it. 00:30:14.229 [2024-07-23 01:51:27.202158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.229 [2024-07-23 01:51:27.202289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.229 [2024-07-23 01:51:27.202315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.229 qpair failed and we were unable to recover it. 
00:30:14.229 [2024-07-23 01:51:27.202458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.229 [2024-07-23 01:51:27.202611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.229 [2024-07-23 01:51:27.202643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.229 qpair failed and we were unable to recover it. 00:30:14.229 [2024-07-23 01:51:27.202813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.229 [2024-07-23 01:51:27.202952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.229 [2024-07-23 01:51:27.202977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.229 qpair failed and we were unable to recover it. 00:30:14.229 [2024-07-23 01:51:27.203139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.229 [2024-07-23 01:51:27.203270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.229 [2024-07-23 01:51:27.203295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.229 qpair failed and we were unable to recover it. 00:30:14.229 [2024-07-23 01:51:27.203465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.229 [2024-07-23 01:51:27.203630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.229 [2024-07-23 01:51:27.203655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.229 qpair failed and we were unable to recover it. 
00:30:14.229 [2024-07-23 01:51:27.203796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.229 [2024-07-23 01:51:27.203973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.229 [2024-07-23 01:51:27.203998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.229 qpair failed and we were unable to recover it. 00:30:14.229 [2024-07-23 01:51:27.204166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.229 [2024-07-23 01:51:27.204293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.229 [2024-07-23 01:51:27.204318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.229 qpair failed and we were unable to recover it. 00:30:14.229 [2024-07-23 01:51:27.204482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.229 [2024-07-23 01:51:27.204672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.229 [2024-07-23 01:51:27.204697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.229 qpair failed and we were unable to recover it. 00:30:14.229 [2024-07-23 01:51:27.204849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.229 [2024-07-23 01:51:27.205007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.229 [2024-07-23 01:51:27.205032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.229 qpair failed and we were unable to recover it. 
00:30:14.232 [2024-07-23 01:51:27.232696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.232 [2024-07-23 01:51:27.232862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.232 [2024-07-23 01:51:27.232887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.232 qpair failed and we were unable to recover it. 00:30:14.232 [2024-07-23 01:51:27.233017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.232 [2024-07-23 01:51:27.233191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.232 [2024-07-23 01:51:27.233216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.232 qpair failed and we were unable to recover it. 00:30:14.232 [2024-07-23 01:51:27.233366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.232 [2024-07-23 01:51:27.233490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.232 [2024-07-23 01:51:27.233515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.232 qpair failed and we were unable to recover it. 00:30:14.232 [2024-07-23 01:51:27.233653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.232 [2024-07-23 01:51:27.233779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.232 [2024-07-23 01:51:27.233804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.232 qpair failed and we were unable to recover it. 
00:30:14.232 [2024-07-23 01:51:27.233956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.232 [2024-07-23 01:51:27.234093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.232 [2024-07-23 01:51:27.234120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.232 qpair failed and we were unable to recover it. 00:30:14.232 [2024-07-23 01:51:27.234289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.232 [2024-07-23 01:51:27.234453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.232 [2024-07-23 01:51:27.234478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.232 qpair failed and we were unable to recover it. 00:30:14.232 [2024-07-23 01:51:27.234635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.232 [2024-07-23 01:51:27.234767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.232 [2024-07-23 01:51:27.234791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.232 qpair failed and we were unable to recover it. 00:30:14.232 [2024-07-23 01:51:27.234927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.233 [2024-07-23 01:51:27.235089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.233 [2024-07-23 01:51:27.235114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.233 qpair failed and we were unable to recover it. 
00:30:14.233 [2024-07-23 01:51:27.235281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.233 [2024-07-23 01:51:27.235405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.233 [2024-07-23 01:51:27.235429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.233 qpair failed and we were unable to recover it. 00:30:14.233 [2024-07-23 01:51:27.235556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.233 [2024-07-23 01:51:27.235703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.233 [2024-07-23 01:51:27.235729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.233 qpair failed and we were unable to recover it. 00:30:14.233 [2024-07-23 01:51:27.235858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.233 [2024-07-23 01:51:27.236014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.233 [2024-07-23 01:51:27.236039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.233 qpair failed and we were unable to recover it. 00:30:14.233 [2024-07-23 01:51:27.236232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.233 [2024-07-23 01:51:27.236396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.233 [2024-07-23 01:51:27.236422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.233 qpair failed and we were unable to recover it. 
00:30:14.233 [2024-07-23 01:51:27.236635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.233 [2024-07-23 01:51:27.236782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.233 [2024-07-23 01:51:27.236807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.233 qpair failed and we were unable to recover it. 00:30:14.233 [2024-07-23 01:51:27.236940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.233 [2024-07-23 01:51:27.237087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.233 [2024-07-23 01:51:27.237112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.233 qpair failed and we were unable to recover it. 00:30:14.233 [2024-07-23 01:51:27.237253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.233 [2024-07-23 01:51:27.237380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.233 [2024-07-23 01:51:27.237404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.233 qpair failed and we were unable to recover it. 00:30:14.233 [2024-07-23 01:51:27.237599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.233 [2024-07-23 01:51:27.237747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.233 [2024-07-23 01:51:27.237773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.233 qpair failed and we were unable to recover it. 
00:30:14.233 [2024-07-23 01:51:27.237938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.233 [2024-07-23 01:51:27.238068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.233 [2024-07-23 01:51:27.238093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.233 qpair failed and we were unable to recover it. 00:30:14.233 [2024-07-23 01:51:27.238226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.233 [2024-07-23 01:51:27.238394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.233 [2024-07-23 01:51:27.238419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.233 qpair failed and we were unable to recover it. 00:30:14.233 [2024-07-23 01:51:27.238580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.233 [2024-07-23 01:51:27.238724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.233 [2024-07-23 01:51:27.238750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.233 qpair failed and we were unable to recover it. 00:30:14.233 [2024-07-23 01:51:27.238917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.233 [2024-07-23 01:51:27.239074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.233 [2024-07-23 01:51:27.239098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.233 qpair failed and we were unable to recover it. 
00:30:14.233 [2024-07-23 01:51:27.239279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.233 [2024-07-23 01:51:27.239412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.233 [2024-07-23 01:51:27.239439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.233 qpair failed and we were unable to recover it. 00:30:14.233 [2024-07-23 01:51:27.239593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.233 [2024-07-23 01:51:27.239742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.233 [2024-07-23 01:51:27.239768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.233 qpair failed and we were unable to recover it. 00:30:14.233 [2024-07-23 01:51:27.239915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.233 [2024-07-23 01:51:27.240053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.233 [2024-07-23 01:51:27.240078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.233 qpair failed and we were unable to recover it. 00:30:14.233 [2024-07-23 01:51:27.240211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.233 [2024-07-23 01:51:27.240385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.233 [2024-07-23 01:51:27.240410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.233 qpair failed and we were unable to recover it. 
00:30:14.233 [2024-07-23 01:51:27.240573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.233 [2024-07-23 01:51:27.240719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.233 [2024-07-23 01:51:27.240744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.233 qpair failed and we were unable to recover it. 00:30:14.233 [2024-07-23 01:51:27.240881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.233 [2024-07-23 01:51:27.241022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.233 [2024-07-23 01:51:27.241052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.233 qpair failed and we were unable to recover it. 00:30:14.233 [2024-07-23 01:51:27.241220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.233 [2024-07-23 01:51:27.241375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.233 [2024-07-23 01:51:27.241400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.233 qpair failed and we were unable to recover it. 00:30:14.233 [2024-07-23 01:51:27.241552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.233 [2024-07-23 01:51:27.241716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.233 [2024-07-23 01:51:27.241742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.233 qpair failed and we were unable to recover it. 
00:30:14.233 [2024-07-23 01:51:27.241881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.233 [2024-07-23 01:51:27.242042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.233 [2024-07-23 01:51:27.242067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.233 qpair failed and we were unable to recover it. 00:30:14.233 [2024-07-23 01:51:27.242203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.233 [2024-07-23 01:51:27.242363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.233 [2024-07-23 01:51:27.242388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.233 qpair failed and we were unable to recover it. 00:30:14.233 [2024-07-23 01:51:27.242554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.233 [2024-07-23 01:51:27.242716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.233 [2024-07-23 01:51:27.242742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.233 qpair failed and we were unable to recover it. 00:30:14.233 [2024-07-23 01:51:27.242902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.233 [2024-07-23 01:51:27.243061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.233 [2024-07-23 01:51:27.243086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.233 qpair failed and we were unable to recover it. 
00:30:14.233 [2024-07-23 01:51:27.243219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.233 [2024-07-23 01:51:27.243362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.233 [2024-07-23 01:51:27.243387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.233 qpair failed and we were unable to recover it. 00:30:14.233 [2024-07-23 01:51:27.243523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.233 [2024-07-23 01:51:27.243697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.233 [2024-07-23 01:51:27.243723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.233 qpair failed and we were unable to recover it. 00:30:14.233 [2024-07-23 01:51:27.243894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.233 [2024-07-23 01:51:27.244036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.233 [2024-07-23 01:51:27.244061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.234 qpair failed and we were unable to recover it. 00:30:14.234 [2024-07-23 01:51:27.244221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.234 [2024-07-23 01:51:27.244387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.234 [2024-07-23 01:51:27.244417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.234 qpair failed and we were unable to recover it. 
00:30:14.234 [2024-07-23 01:51:27.244553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.234 [2024-07-23 01:51:27.244710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.234 [2024-07-23 01:51:27.244736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.234 qpair failed and we were unable to recover it. 00:30:14.234 [2024-07-23 01:51:27.244892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.234 [2024-07-23 01:51:27.245025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.234 [2024-07-23 01:51:27.245050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.234 qpair failed and we were unable to recover it. 00:30:14.234 [2024-07-23 01:51:27.245241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.234 [2024-07-23 01:51:27.245371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.234 [2024-07-23 01:51:27.245395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.234 qpair failed and we were unable to recover it. 00:30:14.234 [2024-07-23 01:51:27.245587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.234 [2024-07-23 01:51:27.245757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.234 [2024-07-23 01:51:27.245782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.234 qpair failed and we were unable to recover it. 
00:30:14.234 [2024-07-23 01:51:27.245934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.234 [2024-07-23 01:51:27.246065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.234 [2024-07-23 01:51:27.246090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.234 qpair failed and we were unable to recover it. 00:30:14.234 [2024-07-23 01:51:27.246251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.234 [2024-07-23 01:51:27.246411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.234 [2024-07-23 01:51:27.246436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.234 qpair failed and we were unable to recover it. 00:30:14.234 [2024-07-23 01:51:27.246569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.234 [2024-07-23 01:51:27.246712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.234 [2024-07-23 01:51:27.246737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.234 qpair failed and we were unable to recover it. 00:30:14.234 [2024-07-23 01:51:27.246906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.234 [2024-07-23 01:51:27.247040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.234 [2024-07-23 01:51:27.247064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.234 qpair failed and we were unable to recover it. 
00:30:14.234 [2024-07-23 01:51:27.247217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.234 [2024-07-23 01:51:27.247350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.234 [2024-07-23 01:51:27.247374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.234 qpair failed and we were unable to recover it. 00:30:14.234 [2024-07-23 01:51:27.247507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.234 [2024-07-23 01:51:27.247670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.234 [2024-07-23 01:51:27.247699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.234 qpair failed and we were unable to recover it. 00:30:14.234 [2024-07-23 01:51:27.247863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.234 [2024-07-23 01:51:27.248005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.234 [2024-07-23 01:51:27.248029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.234 qpair failed and we were unable to recover it. 00:30:14.234 [2024-07-23 01:51:27.248174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.234 [2024-07-23 01:51:27.248306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.234 [2024-07-23 01:51:27.248332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.234 qpair failed and we were unable to recover it. 
00:30:14.234 [2024-07-23 01:51:27.248487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.234 [2024-07-23 01:51:27.248637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.234 [2024-07-23 01:51:27.248663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.234 qpair failed and we were unable to recover it. 00:30:14.234 [2024-07-23 01:51:27.248834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.234 [2024-07-23 01:51:27.248965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.234 [2024-07-23 01:51:27.248990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.234 qpair failed and we were unable to recover it. 00:30:14.234 [2024-07-23 01:51:27.249154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.234 [2024-07-23 01:51:27.249316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.234 [2024-07-23 01:51:27.249342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.234 qpair failed and we were unable to recover it. 00:30:14.234 [2024-07-23 01:51:27.249493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.234 [2024-07-23 01:51:27.249657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.234 [2024-07-23 01:51:27.249682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.234 qpair failed and we were unable to recover it. 
00:30:14.234 [2024-07-23 01:51:27.249831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.234 [2024-07-23 01:51:27.249969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.234 [2024-07-23 01:51:27.249994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.234 qpair failed and we were unable to recover it. 00:30:14.234 [2024-07-23 01:51:27.250151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.234 [2024-07-23 01:51:27.250311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.234 [2024-07-23 01:51:27.250336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.234 qpair failed and we were unable to recover it. 00:30:14.234 [2024-07-23 01:51:27.250476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.234 [2024-07-23 01:51:27.250641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.234 [2024-07-23 01:51:27.250668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.234 qpair failed and we were unable to recover it. 00:30:14.234 [2024-07-23 01:51:27.250827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.234 [2024-07-23 01:51:27.250987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.234 [2024-07-23 01:51:27.251017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.234 qpair failed and we were unable to recover it. 
00:30:14.234 [2024-07-23 01:51:27.251208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.234 [2024-07-23 01:51:27.251373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.234 [2024-07-23 01:51:27.251398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.234 qpair failed and we were unable to recover it. 00:30:14.234 [2024-07-23 01:51:27.251538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.234 [2024-07-23 01:51:27.251674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.234 [2024-07-23 01:51:27.251699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.234 qpair failed and we were unable to recover it. 00:30:14.234 [2024-07-23 01:51:27.251834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.234 [2024-07-23 01:51:27.251999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.234 [2024-07-23 01:51:27.252024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.234 qpair failed and we were unable to recover it. 00:30:14.234 [2024-07-23 01:51:27.252183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.234 [2024-07-23 01:51:27.252311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.234 [2024-07-23 01:51:27.252336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.234 qpair failed and we were unable to recover it. 
00:30:14.234 [2024-07-23 01:51:27.252515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.234 [2024-07-23 01:51:27.252670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.234 [2024-07-23 01:51:27.252696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.234 qpair failed and we were unable to recover it. 00:30:14.234 [2024-07-23 01:51:27.252849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.234 [2024-07-23 01:51:27.252973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.234 [2024-07-23 01:51:27.252998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.234 qpair failed and we were unable to recover it. 00:30:14.234 [2024-07-23 01:51:27.253159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.234 [2024-07-23 01:51:27.253317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.235 [2024-07-23 01:51:27.253342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.235 qpair failed and we were unable to recover it. 00:30:14.235 [2024-07-23 01:51:27.253511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.235 [2024-07-23 01:51:27.253655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.235 [2024-07-23 01:51:27.253683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.235 qpair failed and we were unable to recover it. 
00:30:14.235 [2024-07-23 01:51:27.253876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.235 [2024-07-23 01:51:27.254015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.235 [2024-07-23 01:51:27.254044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.235 qpair failed and we were unable to recover it. 00:30:14.235 [2024-07-23 01:51:27.254174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.235 [2024-07-23 01:51:27.254308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.235 [2024-07-23 01:51:27.254333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.235 qpair failed and we were unable to recover it. 00:30:14.235 [2024-07-23 01:51:27.254481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.235 [2024-07-23 01:51:27.254627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.235 [2024-07-23 01:51:27.254653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.235 qpair failed and we were unable to recover it. 00:30:14.235 [2024-07-23 01:51:27.254830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.235 [2024-07-23 01:51:27.254969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.235 [2024-07-23 01:51:27.254993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.235 qpair failed and we were unable to recover it. 
00:30:14.235 [2024-07-23 01:51:27.255130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.235 [2024-07-23 01:51:27.255323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.235 [2024-07-23 01:51:27.255348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.235 qpair failed and we were unable to recover it. 00:30:14.235 [2024-07-23 01:51:27.255482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.235 [2024-07-23 01:51:27.255667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.235 [2024-07-23 01:51:27.255693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.235 qpair failed and we were unable to recover it. 00:30:14.235 [2024-07-23 01:51:27.255853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.235 [2024-07-23 01:51:27.256029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.235 [2024-07-23 01:51:27.256054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.235 qpair failed and we were unable to recover it. 00:30:14.235 [2024-07-23 01:51:27.256205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.235 [2024-07-23 01:51:27.256340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.235 [2024-07-23 01:51:27.256367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.235 qpair failed and we were unable to recover it. 
00:30:14.235 [2024-07-23 01:51:27.256516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.235 [2024-07-23 01:51:27.256675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.235 [2024-07-23 01:51:27.256700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.235 qpair failed and we were unable to recover it. 00:30:14.235 [2024-07-23 01:51:27.256859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.235 [2024-07-23 01:51:27.257018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.235 [2024-07-23 01:51:27.257043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.235 qpair failed and we were unable to recover it. 00:30:14.235 [2024-07-23 01:51:27.257234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.235 [2024-07-23 01:51:27.257400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.235 [2024-07-23 01:51:27.257424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.235 qpair failed and we were unable to recover it. 00:30:14.235 [2024-07-23 01:51:27.257559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.235 [2024-07-23 01:51:27.257764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.235 [2024-07-23 01:51:27.257790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.235 qpair failed and we were unable to recover it. 
00:30:14.235 [2024-07-23 01:51:27.257940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.235 [2024-07-23 01:51:27.258079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.235 [2024-07-23 01:51:27.258105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.235 qpair failed and we were unable to recover it. 00:30:14.235 [2024-07-23 01:51:27.258282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.235 [2024-07-23 01:51:27.258431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.235 [2024-07-23 01:51:27.258456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.235 qpair failed and we were unable to recover it. 00:30:14.235 [2024-07-23 01:51:27.258585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.235 [2024-07-23 01:51:27.258728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.235 [2024-07-23 01:51:27.258753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.235 qpair failed and we were unable to recover it. 00:30:14.235 [2024-07-23 01:51:27.258916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.235 [2024-07-23 01:51:27.259060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.235 [2024-07-23 01:51:27.259085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.235 qpair failed and we were unable to recover it. 
00:30:14.235 [2024-07-23 01:51:27.259248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.235 [2024-07-23 01:51:27.259415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.235 [2024-07-23 01:51:27.259442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.235 qpair failed and we were unable to recover it. 00:30:14.235 [2024-07-23 01:51:27.259577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.235 [2024-07-23 01:51:27.259761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.235 [2024-07-23 01:51:27.259787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.235 qpair failed and we were unable to recover it. 00:30:14.235 [2024-07-23 01:51:27.259918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.235 [2024-07-23 01:51:27.260080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.235 [2024-07-23 01:51:27.260105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.235 qpair failed and we were unable to recover it. 00:30:14.235 [2024-07-23 01:51:27.260260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.235 [2024-07-23 01:51:27.260423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.235 [2024-07-23 01:51:27.260448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.235 qpair failed and we were unable to recover it. 
00:30:14.235 [2024-07-23 01:51:27.260633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.235 [2024-07-23 01:51:27.260779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.235 [2024-07-23 01:51:27.260805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.235 qpair failed and we were unable to recover it. 00:30:14.235 [2024-07-23 01:51:27.260961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.235 [2024-07-23 01:51:27.261100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.235 [2024-07-23 01:51:27.261127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.235 qpair failed and we were unable to recover it. 00:30:14.235 [2024-07-23 01:51:27.261283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.235 [2024-07-23 01:51:27.261425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.235 [2024-07-23 01:51:27.261450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.235 qpair failed and we were unable to recover it. 00:30:14.235 [2024-07-23 01:51:27.261629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.235 [2024-07-23 01:51:27.261785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.235 [2024-07-23 01:51:27.261811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.235 qpair failed and we were unable to recover it. 
00:30:14.235 [2024-07-23 01:51:27.261994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.235 [2024-07-23 01:51:27.262182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.235 [2024-07-23 01:51:27.262207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.235 qpair failed and we were unable to recover it. 00:30:14.235 [2024-07-23 01:51:27.262338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.235 [2024-07-23 01:51:27.262501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.235 [2024-07-23 01:51:27.262526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.235 qpair failed and we were unable to recover it. 00:30:14.236 [2024-07-23 01:51:27.262709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.236 [2024-07-23 01:51:27.262866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.236 [2024-07-23 01:51:27.262891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.236 qpair failed and we were unable to recover it. 00:30:14.236 [2024-07-23 01:51:27.263049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.236 [2024-07-23 01:51:27.263186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.236 [2024-07-23 01:51:27.263211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.236 qpair failed and we were unable to recover it. 
00:30:14.236 [2024-07-23 01:51:27.263373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.236 [2024-07-23 01:51:27.263537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.236 [2024-07-23 01:51:27.263561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.236 qpair failed and we were unable to recover it. 00:30:14.236 [2024-07-23 01:51:27.263709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.236 [2024-07-23 01:51:27.263849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.236 [2024-07-23 01:51:27.263874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.236 qpair failed and we were unable to recover it. 00:30:14.236 [2024-07-23 01:51:27.264054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.236 [2024-07-23 01:51:27.264186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.236 [2024-07-23 01:51:27.264213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.236 qpair failed and we were unable to recover it. 00:30:14.236 [2024-07-23 01:51:27.264379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.236 [2024-07-23 01:51:27.264510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.236 [2024-07-23 01:51:27.264534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.236 qpair failed and we were unable to recover it. 
00:30:14.236 [2024-07-23 01:51:27.264679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.236 [2024-07-23 01:51:27.264810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.236 [2024-07-23 01:51:27.264835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.236 qpair failed and we were unable to recover it. 00:30:14.236 [2024-07-23 01:51:27.264992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.236 [2024-07-23 01:51:27.265123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.236 [2024-07-23 01:51:27.265147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.236 qpair failed and we were unable to recover it. 00:30:14.236 [2024-07-23 01:51:27.265279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.236 [2024-07-23 01:51:27.265423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.236 [2024-07-23 01:51:27.265448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.236 qpair failed and we were unable to recover it. 00:30:14.236 [2024-07-23 01:51:27.265588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.236 [2024-07-23 01:51:27.265779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.236 [2024-07-23 01:51:27.265805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.236 qpair failed and we were unable to recover it. 
00:30:14.236 [2024-07-23 01:51:27.265976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.236 [2024-07-23 01:51:27.266149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.236 [2024-07-23 01:51:27.266174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.236 qpair failed and we were unable to recover it. 00:30:14.236 [2024-07-23 01:51:27.266350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.236 [2024-07-23 01:51:27.266541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.236 [2024-07-23 01:51:27.266566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.236 qpair failed and we were unable to recover it. 00:30:14.236 [2024-07-23 01:51:27.266697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.236 [2024-07-23 01:51:27.266864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.236 [2024-07-23 01:51:27.266889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.236 qpair failed and we were unable to recover it. 00:30:14.236 [2024-07-23 01:51:27.267024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.236 [2024-07-23 01:51:27.267158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.236 [2024-07-23 01:51:27.267183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.236 qpair failed and we were unable to recover it. 
00:30:14.236 [2024-07-23 01:51:27.267343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.236 [2024-07-23 01:51:27.267479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.236 [2024-07-23 01:51:27.267506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.236 qpair failed and we were unable to recover it. 00:30:14.236 [2024-07-23 01:51:27.267654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.236 [2024-07-23 01:51:27.267793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.236 [2024-07-23 01:51:27.267817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.236 qpair failed and we were unable to recover it. 00:30:14.236 [2024-07-23 01:51:27.267961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.236 [2024-07-23 01:51:27.268092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.236 [2024-07-23 01:51:27.268117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.236 qpair failed and we were unable to recover it. 00:30:14.236 [2024-07-23 01:51:27.268283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.236 [2024-07-23 01:51:27.268423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.236 [2024-07-23 01:51:27.268448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.236 qpair failed and we were unable to recover it. 
00:30:14.236 [2024-07-23 01:51:27.268597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.236 [2024-07-23 01:51:27.268796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.236 [2024-07-23 01:51:27.268822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.236 qpair failed and we were unable to recover it. 00:30:14.236 [2024-07-23 01:51:27.268966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.236 [2024-07-23 01:51:27.269113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.236 [2024-07-23 01:51:27.269138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.236 qpair failed and we were unable to recover it. 00:30:14.236 [2024-07-23 01:51:27.269325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.236 [2024-07-23 01:51:27.269490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.236 [2024-07-23 01:51:27.269514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.236 qpair failed and we were unable to recover it. 00:30:14.236 [2024-07-23 01:51:27.269657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.236 [2024-07-23 01:51:27.269803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.236 [2024-07-23 01:51:27.269828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.236 qpair failed and we were unable to recover it. 
00:30:14.236 [2024-07-23 01:51:27.269979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.236 [2024-07-23 01:51:27.270103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.236 [2024-07-23 01:51:27.270127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.236 qpair failed and we were unable to recover it. 00:30:14.236 [2024-07-23 01:51:27.270264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.236 [2024-07-23 01:51:27.270420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.236 [2024-07-23 01:51:27.270445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.236 qpair failed and we were unable to recover it. 00:30:14.236 [2024-07-23 01:51:27.270584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.236 [2024-07-23 01:51:27.270721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.236 [2024-07-23 01:51:27.270746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.236 qpair failed and we were unable to recover it. 00:30:14.236 [2024-07-23 01:51:27.270883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.236 [2024-07-23 01:51:27.271036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.236 [2024-07-23 01:51:27.271061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.236 qpair failed and we were unable to recover it. 
00:30:14.236 [2024-07-23 01:51:27.271254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.236 [2024-07-23 01:51:27.271387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.236 [2024-07-23 01:51:27.271413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.236 qpair failed and we were unable to recover it. 00:30:14.236 [2024-07-23 01:51:27.271607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.236 [2024-07-23 01:51:27.271773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.237 [2024-07-23 01:51:27.271797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.237 qpair failed and we were unable to recover it. 00:30:14.237 [2024-07-23 01:51:27.271933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.237 [2024-07-23 01:51:27.272066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.237 [2024-07-23 01:51:27.272091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.237 qpair failed and we were unable to recover it. 00:30:14.237 [2024-07-23 01:51:27.272244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.237 [2024-07-23 01:51:27.272408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.237 [2024-07-23 01:51:27.272433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.237 qpair failed and we were unable to recover it. 
00:30:14.237 [2024-07-23 01:51:27.272578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.237 [2024-07-23 01:51:27.272730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.237 [2024-07-23 01:51:27.272757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.237 qpair failed and we were unable to recover it. 00:30:14.237 [2024-07-23 01:51:27.272898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.237 [2024-07-23 01:51:27.273059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.237 [2024-07-23 01:51:27.273083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.237 qpair failed and we were unable to recover it. 00:30:14.237 [2024-07-23 01:51:27.273244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.237 [2024-07-23 01:51:27.273376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.237 [2024-07-23 01:51:27.273400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.237 qpair failed and we were unable to recover it. 00:30:14.237 [2024-07-23 01:51:27.273548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.237 [2024-07-23 01:51:27.273710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.237 [2024-07-23 01:51:27.273736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.237 qpair failed and we were unable to recover it. 
00:30:14.237 [2024-07-23 01:51:27.273891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.237 [2024-07-23 01:51:27.274041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.237 [2024-07-23 01:51:27.274067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.237 qpair failed and we were unable to recover it. 00:30:14.237 [2024-07-23 01:51:27.274237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.237 [2024-07-23 01:51:27.274399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.237 [2024-07-23 01:51:27.274424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.237 qpair failed and we were unable to recover it. 00:30:14.237 [2024-07-23 01:51:27.274588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.237 [2024-07-23 01:51:27.274735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.237 [2024-07-23 01:51:27.274761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.237 qpair failed and we were unable to recover it. 00:30:14.237 [2024-07-23 01:51:27.274903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.237 [2024-07-23 01:51:27.275031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.237 [2024-07-23 01:51:27.275055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.237 qpair failed and we were unable to recover it. 
00:30:14.237 [2024-07-23 01:51:27.275221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.237 [2024-07-23 01:51:27.275351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.237 [2024-07-23 01:51:27.275376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.237 qpair failed and we were unable to recover it.
00:30:14.237 [2024-07-23 01:51:27.275567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.237 [2024-07-23 01:51:27.275731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.237 [2024-07-23 01:51:27.275756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.237 qpair failed and we were unable to recover it.
00:30:14.237 [2024-07-23 01:51:27.275910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.237 [2024-07-23 01:51:27.276083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.237 [2024-07-23 01:51:27.276108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.237 qpair failed and we were unable to recover it.
00:30:14.237 [2024-07-23 01:51:27.276270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.237 [2024-07-23 01:51:27.276402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.237 [2024-07-23 01:51:27.276428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.237 qpair failed and we were unable to recover it.
00:30:14.237 [2024-07-23 01:51:27.276638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.237 [2024-07-23 01:51:27.276797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.237 [2024-07-23 01:51:27.276822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.237 qpair failed and we were unable to recover it.
00:30:14.237 [2024-07-23 01:51:27.276956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.237 [2024-07-23 01:51:27.277130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.237 [2024-07-23 01:51:27.277154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.237 qpair failed and we were unable to recover it.
00:30:14.237 [2024-07-23 01:51:27.277286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.237 [2024-07-23 01:51:27.277451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.237 [2024-07-23 01:51:27.277477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.237 qpair failed and we were unable to recover it.
00:30:14.237 [2024-07-23 01:51:27.277618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.237 [2024-07-23 01:51:27.277766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.237 [2024-07-23 01:51:27.277791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.237 qpair failed and we were unable to recover it.
00:30:14.237 [2024-07-23 01:51:27.277963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.237 [2024-07-23 01:51:27.278145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.237 [2024-07-23 01:51:27.278169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.237 qpair failed and we were unable to recover it.
00:30:14.237 [2024-07-23 01:51:27.278305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.237 [2024-07-23 01:51:27.278454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.237 [2024-07-23 01:51:27.278480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.237 qpair failed and we were unable to recover it.
00:30:14.237 [2024-07-23 01:51:27.278627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.237 [2024-07-23 01:51:27.278758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.237 [2024-07-23 01:51:27.278784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.237 qpair failed and we were unable to recover it.
00:30:14.237 [2024-07-23 01:51:27.278931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.237 [2024-07-23 01:51:27.279092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.237 [2024-07-23 01:51:27.279116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.237 qpair failed and we were unable to recover it.
00:30:14.238 [2024-07-23 01:51:27.279255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.238 [2024-07-23 01:51:27.279411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.238 [2024-07-23 01:51:27.279436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.238 qpair failed and we were unable to recover it.
00:30:14.238 [2024-07-23 01:51:27.279600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.238 [2024-07-23 01:51:27.279748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.238 [2024-07-23 01:51:27.279773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.238 qpair failed and we were unable to recover it.
00:30:14.238 [2024-07-23 01:51:27.279953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.238 [2024-07-23 01:51:27.280114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.238 [2024-07-23 01:51:27.280140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.238 qpair failed and we were unable to recover it.
00:30:14.238 [2024-07-23 01:51:27.280270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.238 [2024-07-23 01:51:27.280444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.238 [2024-07-23 01:51:27.280468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.238 qpair failed and we were unable to recover it.
00:30:14.238 [2024-07-23 01:51:27.280603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.238 [2024-07-23 01:51:27.280778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.238 [2024-07-23 01:51:27.280803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.238 qpair failed and we were unable to recover it.
00:30:14.238 [2024-07-23 01:51:27.280978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.238 [2024-07-23 01:51:27.281111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.238 [2024-07-23 01:51:27.281137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.238 qpair failed and we were unable to recover it.
00:30:14.238 [2024-07-23 01:51:27.281298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.238 [2024-07-23 01:51:27.281475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.238 [2024-07-23 01:51:27.281503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.238 qpair failed and we were unable to recover it.
00:30:14.238 [2024-07-23 01:51:27.281666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.238 [2024-07-23 01:51:27.281845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.238 [2024-07-23 01:51:27.281872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.238 qpair failed and we were unable to recover it.
00:30:14.238 [2024-07-23 01:51:27.282018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.238 [2024-07-23 01:51:27.282189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.238 [2024-07-23 01:51:27.282217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.238 qpair failed and we were unable to recover it.
00:30:14.238 [2024-07-23 01:51:27.282393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.238 [2024-07-23 01:51:27.282578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.238 [2024-07-23 01:51:27.282604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.238 qpair failed and we were unable to recover it.
00:30:14.238 [2024-07-23 01:51:27.282777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.238 [2024-07-23 01:51:27.282952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.238 [2024-07-23 01:51:27.282976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.238 qpair failed and we were unable to recover it.
00:30:14.238 [2024-07-23 01:51:27.283122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.238 [2024-07-23 01:51:27.283288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.238 [2024-07-23 01:51:27.283323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.238 qpair failed and we were unable to recover it.
00:30:14.238 [2024-07-23 01:51:27.283501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.238 [2024-07-23 01:51:27.283686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.238 [2024-07-23 01:51:27.283718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.238 qpair failed and we were unable to recover it.
00:30:14.238 [2024-07-23 01:51:27.283865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.238 [2024-07-23 01:51:27.284016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.238 [2024-07-23 01:51:27.284044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.238 qpair failed and we were unable to recover it.
00:30:14.238 [2024-07-23 01:51:27.284193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.238 [2024-07-23 01:51:27.284391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.238 [2024-07-23 01:51:27.284417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.238 qpair failed and we were unable to recover it.
00:30:14.238 [2024-07-23 01:51:27.284599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.238 [2024-07-23 01:51:27.284757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.238 [2024-07-23 01:51:27.284783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.238 qpair failed and we were unable to recover it.
00:30:14.238 [2024-07-23 01:51:27.284951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.238 [2024-07-23 01:51:27.285158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.238 [2024-07-23 01:51:27.285186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.238 qpair failed and we were unable to recover it.
00:30:14.238 [2024-07-23 01:51:27.285334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.238 [2024-07-23 01:51:27.285500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.238 [2024-07-23 01:51:27.285527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.238 qpair failed and we were unable to recover it.
00:30:14.513 [2024-07-23 01:51:27.285673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.513 [2024-07-23 01:51:27.285815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.513 [2024-07-23 01:51:27.285841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.513 qpair failed and we were unable to recover it.
00:30:14.513 [2024-07-23 01:51:27.285987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.513 [2024-07-23 01:51:27.286135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.514 [2024-07-23 01:51:27.286161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.514 qpair failed and we were unable to recover it.
00:30:14.514 [2024-07-23 01:51:27.286351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.514 [2024-07-23 01:51:27.286535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.514 [2024-07-23 01:51:27.286571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.514 qpair failed and we were unable to recover it.
00:30:14.514 [2024-07-23 01:51:27.286733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.514 [2024-07-23 01:51:27.286880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.514 [2024-07-23 01:51:27.286917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.514 qpair failed and we were unable to recover it.
00:30:14.514 [2024-07-23 01:51:27.287073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.514 [2024-07-23 01:51:27.287239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.514 [2024-07-23 01:51:27.287277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.514 qpair failed and we were unable to recover it.
00:30:14.514 [2024-07-23 01:51:27.287435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.514 [2024-07-23 01:51:27.287624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.514 [2024-07-23 01:51:27.287662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.514 qpair failed and we were unable to recover it.
00:30:14.514 [2024-07-23 01:51:27.287824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.514 [2024-07-23 01:51:27.287976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.514 [2024-07-23 01:51:27.288010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.514 qpair failed and we were unable to recover it.
00:30:14.514 [2024-07-23 01:51:27.288193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.514 [2024-07-23 01:51:27.288375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.514 [2024-07-23 01:51:27.288408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.514 qpair failed and we were unable to recover it.
00:30:14.514 [2024-07-23 01:51:27.288593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.514 [2024-07-23 01:51:27.288767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.514 [2024-07-23 01:51:27.288795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.514 qpair failed and we were unable to recover it.
00:30:14.514 [2024-07-23 01:51:27.288955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.514 [2024-07-23 01:51:27.289146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.514 [2024-07-23 01:51:27.289172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.514 qpair failed and we were unable to recover it.
00:30:14.514 [2024-07-23 01:51:27.289315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.514 [2024-07-23 01:51:27.289488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.514 [2024-07-23 01:51:27.289514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.514 qpair failed and we were unable to recover it.
00:30:14.514 [2024-07-23 01:51:27.289669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.514 [2024-07-23 01:51:27.289814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.514 [2024-07-23 01:51:27.289840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.514 qpair failed and we were unable to recover it.
00:30:14.514 [2024-07-23 01:51:27.290028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.514 [2024-07-23 01:51:27.290174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.514 [2024-07-23 01:51:27.290198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.514 qpair failed and we were unable to recover it.
00:30:14.514 [2024-07-23 01:51:27.290338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.514 [2024-07-23 01:51:27.290509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.514 [2024-07-23 01:51:27.290533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.514 qpair failed and we were unable to recover it.
00:30:14.514 [2024-07-23 01:51:27.290699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.514 [2024-07-23 01:51:27.290837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.514 [2024-07-23 01:51:27.290864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.514 qpair failed and we were unable to recover it.
00:30:14.514 [2024-07-23 01:51:27.291031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.514 [2024-07-23 01:51:27.291199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.514 [2024-07-23 01:51:27.291225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.514 qpair failed and we were unable to recover it.
00:30:14.514 [2024-07-23 01:51:27.291396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.514 [2024-07-23 01:51:27.291527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.514 [2024-07-23 01:51:27.291551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.514 qpair failed and we were unable to recover it.
00:30:14.514 [2024-07-23 01:51:27.291701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.514 [2024-07-23 01:51:27.291880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.514 [2024-07-23 01:51:27.291905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.514 qpair failed and we were unable to recover it.
00:30:14.514 [2024-07-23 01:51:27.292041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.514 [2024-07-23 01:51:27.292238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.514 [2024-07-23 01:51:27.292264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.514 qpair failed and we were unable to recover it.
00:30:14.514 [2024-07-23 01:51:27.292394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.514 [2024-07-23 01:51:27.292547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.514 [2024-07-23 01:51:27.292572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.514 qpair failed and we were unable to recover it.
00:30:14.514 [2024-07-23 01:51:27.292717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.514 [2024-07-23 01:51:27.292868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.514 [2024-07-23 01:51:27.292893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.514 qpair failed and we were unable to recover it.
00:30:14.514 [2024-07-23 01:51:27.293060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.514 [2024-07-23 01:51:27.293194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.514 [2024-07-23 01:51:27.293220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.514 qpair failed and we were unable to recover it.
00:30:14.514 [2024-07-23 01:51:27.293373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.514 [2024-07-23 01:51:27.293530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.514 [2024-07-23 01:51:27.293555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.514 qpair failed and we were unable to recover it.
00:30:14.514 [2024-07-23 01:51:27.293733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.514 [2024-07-23 01:51:27.293878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.514 [2024-07-23 01:51:27.293903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.514 qpair failed and we were unable to recover it.
00:30:14.514 [2024-07-23 01:51:27.294062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.514 [2024-07-23 01:51:27.294201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.514 [2024-07-23 01:51:27.294227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.514 qpair failed and we were unable to recover it.
00:30:14.514 [2024-07-23 01:51:27.294389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.514 [2024-07-23 01:51:27.294520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.514 [2024-07-23 01:51:27.294546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.514 qpair failed and we were unable to recover it.
00:30:14.514 [2024-07-23 01:51:27.294719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.514 [2024-07-23 01:51:27.294869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.514 [2024-07-23 01:51:27.294894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.514 qpair failed and we were unable to recover it.
00:30:14.514 [2024-07-23 01:51:27.295052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.514 [2024-07-23 01:51:27.295183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.514 [2024-07-23 01:51:27.295208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.514 qpair failed and we were unable to recover it.
00:30:14.514 [2024-07-23 01:51:27.295355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.514 [2024-07-23 01:51:27.295487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.514 [2024-07-23 01:51:27.295516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.515 qpair failed and we were unable to recover it.
00:30:14.515 [2024-07-23 01:51:27.295679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.515 [2024-07-23 01:51:27.295813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.515 [2024-07-23 01:51:27.295838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.515 qpair failed and we were unable to recover it.
00:30:14.515 [2024-07-23 01:51:27.295969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.515 [2024-07-23 01:51:27.296104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.515 [2024-07-23 01:51:27.296132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.515 qpair failed and we were unable to recover it.
00:30:14.515 [2024-07-23 01:51:27.296275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.515 [2024-07-23 01:51:27.296435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.515 [2024-07-23 01:51:27.296460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.515 qpair failed and we were unable to recover it.
00:30:14.515 [2024-07-23 01:51:27.296636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.515 [2024-07-23 01:51:27.296805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.515 [2024-07-23 01:51:27.296832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.515 qpair failed and we were unable to recover it.
00:30:14.515 [2024-07-23 01:51:27.297008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.515 [2024-07-23 01:51:27.297160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.515 [2024-07-23 01:51:27.297185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.515 qpair failed and we were unable to recover it.
00:30:14.515 [2024-07-23 01:51:27.297354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.515 [2024-07-23 01:51:27.297514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.515 [2024-07-23 01:51:27.297540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.515 qpair failed and we were unable to recover it.
00:30:14.515 [2024-07-23 01:51:27.297679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.515 [2024-07-23 01:51:27.297823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.515 [2024-07-23 01:51:27.297848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.515 qpair failed and we were unable to recover it.
00:30:14.515 [2024-07-23 01:51:27.298022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.515 [2024-07-23 01:51:27.298175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.515 [2024-07-23 01:51:27.298199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.515 qpair failed and we were unable to recover it.
00:30:14.515 [2024-07-23 01:51:27.298348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.515 [2024-07-23 01:51:27.298477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.515 [2024-07-23 01:51:27.298501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.515 qpair failed and we were unable to recover it.
00:30:14.515 [2024-07-23 01:51:27.298671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.515 [2024-07-23 01:51:27.298849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.515 [2024-07-23 01:51:27.298878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.515 qpair failed and we were unable to recover it.
00:30:14.515 [2024-07-23 01:51:27.299022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.515 [2024-07-23 01:51:27.299155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.515 [2024-07-23 01:51:27.299181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.515 qpair failed and we were unable to recover it.
00:30:14.515 [2024-07-23 01:51:27.299328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.515 [2024-07-23 01:51:27.299464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.515 [2024-07-23 01:51:27.299489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.515 qpair failed and we were unable to recover it.
00:30:14.515 [2024-07-23 01:51:27.299660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.515 [2024-07-23 01:51:27.299795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.515 [2024-07-23 01:51:27.299820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.515 qpair failed and we were unable to recover it.
00:30:14.515 [2024-07-23 01:51:27.299973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.515 [2024-07-23 01:51:27.300106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.515 [2024-07-23 01:51:27.300131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.515 qpair failed and we were unable to recover it.
00:30:14.515 [2024-07-23 01:51:27.300290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.515 [2024-07-23 01:51:27.300449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.515 [2024-07-23 01:51:27.300474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.515 qpair failed and we were unable to recover it.
00:30:14.515 [2024-07-23 01:51:27.300622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.515 [2024-07-23 01:51:27.300760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.515 [2024-07-23 01:51:27.300785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.515 qpair failed and we were unable to recover it.
00:30:14.515 [2024-07-23 01:51:27.300932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.515 [2024-07-23 01:51:27.301111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.515 [2024-07-23 01:51:27.301137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.515 qpair failed and we were unable to recover it.
00:30:14.515 [2024-07-23 01:51:27.301321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.515 [2024-07-23 01:51:27.301483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.515 [2024-07-23 01:51:27.301508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.515 qpair failed and we were unable to recover it.
00:30:14.515 [2024-07-23 01:51:27.301675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.515 [2024-07-23 01:51:27.301813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.515 [2024-07-23 01:51:27.301840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.515 qpair failed and we were unable to recover it.
00:30:14.515 [2024-07-23 01:51:27.301971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.515 [2024-07-23 01:51:27.302135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.515 [2024-07-23 01:51:27.302164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.515 qpair failed and we were unable to recover it.
00:30:14.515 [2024-07-23 01:51:27.302307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.515 [2024-07-23 01:51:27.302436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.515 [2024-07-23 01:51:27.302461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.515 qpair failed and we were unable to recover it.
00:30:14.515 [2024-07-23 01:51:27.302654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.515 [2024-07-23 01:51:27.302804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.515 [2024-07-23 01:51:27.302830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.515 qpair failed and we were unable to recover it.
00:30:14.515 [2024-07-23 01:51:27.303001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.515 [2024-07-23 01:51:27.303162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.515 [2024-07-23 01:51:27.303188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.515 qpair failed and we were unable to recover it.
00:30:14.515 [2024-07-23 01:51:27.303377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.515 [2024-07-23 01:51:27.303554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.515 [2024-07-23 01:51:27.303579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.515 qpair failed and we were unable to recover it.
00:30:14.515 [2024-07-23 01:51:27.303732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.515 [2024-07-23 01:51:27.303896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.515 [2024-07-23 01:51:27.303920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.515 qpair failed and we were unable to recover it.
00:30:14.515 [2024-07-23 01:51:27.304060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.515 [2024-07-23 01:51:27.304191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.515 [2024-07-23 01:51:27.304216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.515 qpair failed and we were unable to recover it.
00:30:14.515 [2024-07-23 01:51:27.304378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.515 [2024-07-23 01:51:27.304519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.515 [2024-07-23 01:51:27.304544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.515 qpair failed and we were unable to recover it.
00:30:14.515 [2024-07-23 01:51:27.304695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.515 [2024-07-23 01:51:27.304849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.516 [2024-07-23 01:51:27.304874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.516 qpair failed and we were unable to recover it.
00:30:14.516 [2024-07-23 01:51:27.305039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.516 [2024-07-23 01:51:27.305185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.516 [2024-07-23 01:51:27.305210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.516 qpair failed and we were unable to recover it.
00:30:14.516 [2024-07-23 01:51:27.305377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.516 [2024-07-23 01:51:27.305512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.516 [2024-07-23 01:51:27.305543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.516 qpair failed and we were unable to recover it. 00:30:14.516 [2024-07-23 01:51:27.305711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.516 [2024-07-23 01:51:27.305869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.516 [2024-07-23 01:51:27.305894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.516 qpair failed and we were unable to recover it. 00:30:14.516 [2024-07-23 01:51:27.306058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.516 [2024-07-23 01:51:27.306195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.516 [2024-07-23 01:51:27.306220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.516 qpair failed and we were unable to recover it. 00:30:14.516 [2024-07-23 01:51:27.306362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.516 [2024-07-23 01:51:27.306495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.516 [2024-07-23 01:51:27.306521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.516 qpair failed and we were unable to recover it. 
00:30:14.516 [2024-07-23 01:51:27.306655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.516 [2024-07-23 01:51:27.306784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.516 [2024-07-23 01:51:27.306809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.516 qpair failed and we were unable to recover it. 00:30:14.516 [2024-07-23 01:51:27.306937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.516 [2024-07-23 01:51:27.307127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.516 [2024-07-23 01:51:27.307151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.516 qpair failed and we were unable to recover it. 00:30:14.516 [2024-07-23 01:51:27.307317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.516 [2024-07-23 01:51:27.307480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.516 [2024-07-23 01:51:27.307506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.516 qpair failed and we were unable to recover it. 00:30:14.516 [2024-07-23 01:51:27.307688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.516 [2024-07-23 01:51:27.307839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.516 [2024-07-23 01:51:27.307866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.516 qpair failed and we were unable to recover it. 
00:30:14.516 [2024-07-23 01:51:27.308067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.516 [2024-07-23 01:51:27.308198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.516 [2024-07-23 01:51:27.308223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.516 qpair failed and we were unable to recover it. 00:30:14.516 [2024-07-23 01:51:27.308366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.516 [2024-07-23 01:51:27.308501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.516 [2024-07-23 01:51:27.308527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.516 qpair failed and we were unable to recover it. 00:30:14.516 [2024-07-23 01:51:27.308665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.516 [2024-07-23 01:51:27.308795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.516 [2024-07-23 01:51:27.308820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.516 qpair failed and we were unable to recover it. 00:30:14.516 [2024-07-23 01:51:27.308991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.516 [2024-07-23 01:51:27.309147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.516 [2024-07-23 01:51:27.309172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.516 qpair failed and we were unable to recover it. 
00:30:14.516 [2024-07-23 01:51:27.309342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.516 [2024-07-23 01:51:27.309506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.516 [2024-07-23 01:51:27.309533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.516 qpair failed and we were unable to recover it. 00:30:14.516 [2024-07-23 01:51:27.309668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.516 [2024-07-23 01:51:27.309817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.516 [2024-07-23 01:51:27.309841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.516 qpair failed and we were unable to recover it. 00:30:14.516 [2024-07-23 01:51:27.309974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.516 [2024-07-23 01:51:27.310117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.516 [2024-07-23 01:51:27.310142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.516 qpair failed and we were unable to recover it. 00:30:14.516 [2024-07-23 01:51:27.310303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.516 [2024-07-23 01:51:27.310440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.516 [2024-07-23 01:51:27.310464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.516 qpair failed and we were unable to recover it. 
00:30:14.516 [2024-07-23 01:51:27.310595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.516 [2024-07-23 01:51:27.310759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.516 [2024-07-23 01:51:27.310784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.516 qpair failed and we were unable to recover it. 00:30:14.516 [2024-07-23 01:51:27.310946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.516 [2024-07-23 01:51:27.311112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.516 [2024-07-23 01:51:27.311136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.516 qpair failed and we were unable to recover it. 00:30:14.516 [2024-07-23 01:51:27.311271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.516 [2024-07-23 01:51:27.311431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.516 [2024-07-23 01:51:27.311456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.516 qpair failed and we were unable to recover it. 00:30:14.516 [2024-07-23 01:51:27.311626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.516 [2024-07-23 01:51:27.311791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.516 [2024-07-23 01:51:27.311816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.516 qpair failed and we were unable to recover it. 
00:30:14.516 [2024-07-23 01:51:27.311955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.516 [2024-07-23 01:51:27.312153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.516 [2024-07-23 01:51:27.312178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.516 qpair failed and we were unable to recover it. 00:30:14.516 [2024-07-23 01:51:27.312325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.516 [2024-07-23 01:51:27.312485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.516 [2024-07-23 01:51:27.312510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.516 qpair failed and we were unable to recover it. 00:30:14.516 [2024-07-23 01:51:27.312640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.516 [2024-07-23 01:51:27.312807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.516 [2024-07-23 01:51:27.312833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.516 qpair failed and we were unable to recover it. 00:30:14.516 [2024-07-23 01:51:27.312966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.516 [2024-07-23 01:51:27.313100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.516 [2024-07-23 01:51:27.313126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.516 qpair failed and we were unable to recover it. 
00:30:14.516 [2024-07-23 01:51:27.313324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.516 [2024-07-23 01:51:27.313477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.516 [2024-07-23 01:51:27.313501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.516 qpair failed and we were unable to recover it. 00:30:14.516 [2024-07-23 01:51:27.313664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.516 [2024-07-23 01:51:27.313792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.516 [2024-07-23 01:51:27.313817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.516 qpair failed and we were unable to recover it. 00:30:14.517 [2024-07-23 01:51:27.313991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.517 [2024-07-23 01:51:27.314155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.517 [2024-07-23 01:51:27.314180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.517 qpair failed and we were unable to recover it. 00:30:14.517 [2024-07-23 01:51:27.314325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.517 [2024-07-23 01:51:27.314453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.517 [2024-07-23 01:51:27.314478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.517 qpair failed and we were unable to recover it. 
00:30:14.517 [2024-07-23 01:51:27.314631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.517 [2024-07-23 01:51:27.314758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.517 [2024-07-23 01:51:27.314783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.517 qpair failed and we were unable to recover it. 00:30:14.517 [2024-07-23 01:51:27.314977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.517 [2024-07-23 01:51:27.315221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.517 [2024-07-23 01:51:27.315246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.517 qpair failed and we were unable to recover it. 00:30:14.517 [2024-07-23 01:51:27.315410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.517 [2024-07-23 01:51:27.315563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.517 [2024-07-23 01:51:27.315588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.517 qpair failed and we were unable to recover it. 00:30:14.517 [2024-07-23 01:51:27.315748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.517 [2024-07-23 01:51:27.315885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.517 [2024-07-23 01:51:27.315910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.517 qpair failed and we were unable to recover it. 
00:30:14.517 [2024-07-23 01:51:27.316075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.517 [2024-07-23 01:51:27.316240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.517 [2024-07-23 01:51:27.316267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.517 qpair failed and we were unable to recover it. 00:30:14.517 [2024-07-23 01:51:27.316432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.517 [2024-07-23 01:51:27.316591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.517 [2024-07-23 01:51:27.316621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.517 qpair failed and we were unable to recover it. 00:30:14.517 [2024-07-23 01:51:27.316785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.517 [2024-07-23 01:51:27.316921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.517 [2024-07-23 01:51:27.316948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.517 qpair failed and we were unable to recover it. 00:30:14.517 [2024-07-23 01:51:27.317097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.517 [2024-07-23 01:51:27.317227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.517 [2024-07-23 01:51:27.317252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.517 qpair failed and we were unable to recover it. 
00:30:14.517 [2024-07-23 01:51:27.317416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.517 [2024-07-23 01:51:27.317594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.517 [2024-07-23 01:51:27.317645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.517 qpair failed and we were unable to recover it. 00:30:14.517 [2024-07-23 01:51:27.317797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.517 [2024-07-23 01:51:27.317966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.517 [2024-07-23 01:51:27.317991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.517 qpair failed and we were unable to recover it. 00:30:14.517 [2024-07-23 01:51:27.318125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.517 [2024-07-23 01:51:27.318251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.517 [2024-07-23 01:51:27.318276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.517 qpair failed and we were unable to recover it. 00:30:14.517 [2024-07-23 01:51:27.318455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.517 [2024-07-23 01:51:27.318621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.517 [2024-07-23 01:51:27.318656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.517 qpair failed and we were unable to recover it. 
00:30:14.517 [2024-07-23 01:51:27.318825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.517 [2024-07-23 01:51:27.318967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.517 [2024-07-23 01:51:27.318992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.517 qpair failed and we were unable to recover it. 00:30:14.517 [2024-07-23 01:51:27.319130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.517 [2024-07-23 01:51:27.319298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.517 [2024-07-23 01:51:27.319322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.517 qpair failed and we were unable to recover it. 00:30:14.517 [2024-07-23 01:51:27.319488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.517 [2024-07-23 01:51:27.319627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.517 [2024-07-23 01:51:27.319653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.517 qpair failed and we were unable to recover it. 00:30:14.517 [2024-07-23 01:51:27.319834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.517 [2024-07-23 01:51:27.319982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.517 [2024-07-23 01:51:27.320008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.517 qpair failed and we were unable to recover it. 
00:30:14.517 [2024-07-23 01:51:27.320199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.517 [2024-07-23 01:51:27.320354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.517 [2024-07-23 01:51:27.320379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.517 qpair failed and we were unable to recover it. 00:30:14.517 [2024-07-23 01:51:27.320568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.517 [2024-07-23 01:51:27.320716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.517 [2024-07-23 01:51:27.320741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.517 qpair failed and we were unable to recover it. 00:30:14.517 [2024-07-23 01:51:27.320922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.517 [2024-07-23 01:51:27.321098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.517 [2024-07-23 01:51:27.321122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.517 qpair failed and we were unable to recover it. 00:30:14.517 [2024-07-23 01:51:27.321253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.517 [2024-07-23 01:51:27.321387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.517 [2024-07-23 01:51:27.321413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.517 qpair failed and we were unable to recover it. 
00:30:14.517 [2024-07-23 01:51:27.321550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.517 [2024-07-23 01:51:27.321711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.517 [2024-07-23 01:51:27.321736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.517 qpair failed and we were unable to recover it. 00:30:14.517 [2024-07-23 01:51:27.321887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.517 [2024-07-23 01:51:27.322043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.517 [2024-07-23 01:51:27.322068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.517 qpair failed and we were unable to recover it. 00:30:14.517 [2024-07-23 01:51:27.322205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.517 [2024-07-23 01:51:27.322359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.517 [2024-07-23 01:51:27.322383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.517 qpair failed and we were unable to recover it. 00:30:14.517 [2024-07-23 01:51:27.322564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.517 [2024-07-23 01:51:27.322728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.517 [2024-07-23 01:51:27.322755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.517 qpair failed and we were unable to recover it. 
00:30:14.517 [2024-07-23 01:51:27.322949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.517 [2024-07-23 01:51:27.323106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.517 [2024-07-23 01:51:27.323131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.517 qpair failed and we were unable to recover it. 00:30:14.517 [2024-07-23 01:51:27.323288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.517 [2024-07-23 01:51:27.323417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.518 [2024-07-23 01:51:27.323443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.518 qpair failed and we were unable to recover it. 00:30:14.518 [2024-07-23 01:51:27.323634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.518 [2024-07-23 01:51:27.323764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.518 [2024-07-23 01:51:27.323789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.518 qpair failed and we were unable to recover it. 00:30:14.518 [2024-07-23 01:51:27.323946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.518 [2024-07-23 01:51:27.324107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.518 [2024-07-23 01:51:27.324132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.518 qpair failed and we were unable to recover it. 
00:30:14.518 [2024-07-23 01:51:27.324285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.518 [2024-07-23 01:51:27.324477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.518 [2024-07-23 01:51:27.324502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.518 qpair failed and we were unable to recover it. 00:30:14.518 [2024-07-23 01:51:27.324641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.518 [2024-07-23 01:51:27.324774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.518 [2024-07-23 01:51:27.324799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.518 qpair failed and we were unable to recover it. 00:30:14.518 [2024-07-23 01:51:27.324930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.518 [2024-07-23 01:51:27.325094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.518 [2024-07-23 01:51:27.325120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.518 qpair failed and we were unable to recover it. 00:30:14.518 [2024-07-23 01:51:27.325275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.518 [2024-07-23 01:51:27.325433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.518 [2024-07-23 01:51:27.325457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.518 qpair failed and we were unable to recover it. 
00:30:14.518 [2024-07-23 01:51:27.325626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.518 [2024-07-23 01:51:27.325803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.518 [2024-07-23 01:51:27.325828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.518 qpair failed and we were unable to recover it. 00:30:14.518 [2024-07-23 01:51:27.325978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.518 [2024-07-23 01:51:27.326140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.518 [2024-07-23 01:51:27.326165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.518 qpair failed and we were unable to recover it. 00:30:14.518 [2024-07-23 01:51:27.326313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.518 [2024-07-23 01:51:27.326455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.518 [2024-07-23 01:51:27.326480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.518 qpair failed and we were unable to recover it. 00:30:14.518 [2024-07-23 01:51:27.326643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.518 [2024-07-23 01:51:27.326821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.518 [2024-07-23 01:51:27.326846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.518 qpair failed and we were unable to recover it. 
00:30:14.518 [2024-07-23 01:51:27.326979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.518 [2024-07-23 01:51:27.327109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.518 [2024-07-23 01:51:27.327134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.518 qpair failed and we were unable to recover it.
00:30:14.518 [2024-07-23 01:51:27.327325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.518 [2024-07-23 01:51:27.327489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.518 [2024-07-23 01:51:27.327514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.518 qpair failed and we were unable to recover it.
00:30:14.518 [2024-07-23 01:51:27.327697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.518 [2024-07-23 01:51:27.327823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.518 [2024-07-23 01:51:27.327848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.518 qpair failed and we were unable to recover it.
00:30:14.518 [2024-07-23 01:51:27.328038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.518 [2024-07-23 01:51:27.328203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.518 [2024-07-23 01:51:27.328228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.518 qpair failed and we were unable to recover it.
00:30:14.518 [2024-07-23 01:51:27.328368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.518 [2024-07-23 01:51:27.328512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.518 [2024-07-23 01:51:27.328537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.518 qpair failed and we were unable to recover it.
00:30:14.518 [2024-07-23 01:51:27.328672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.518 [2024-07-23 01:51:27.328824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.518 [2024-07-23 01:51:27.328849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.518 qpair failed and we were unable to recover it.
00:30:14.518 [2024-07-23 01:51:27.328994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.518 [2024-07-23 01:51:27.329118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.518 [2024-07-23 01:51:27.329143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.518 qpair failed and we were unable to recover it.
00:30:14.518 [2024-07-23 01:51:27.329278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.518 [2024-07-23 01:51:27.329437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.518 [2024-07-23 01:51:27.329462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.518 qpair failed and we were unable to recover it.
00:30:14.518 [2024-07-23 01:51:27.329644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.518 [2024-07-23 01:51:27.329806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.518 [2024-07-23 01:51:27.329831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.518 qpair failed and we were unable to recover it.
00:30:14.518 [2024-07-23 01:51:27.329957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.518 [2024-07-23 01:51:27.330092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.518 [2024-07-23 01:51:27.330117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.518 qpair failed and we were unable to recover it.
00:30:14.518 [2024-07-23 01:51:27.330278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.518 [2024-07-23 01:51:27.330444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.518 [2024-07-23 01:51:27.330469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.518 qpair failed and we were unable to recover it.
00:30:14.518 [2024-07-23 01:51:27.330633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.518 [2024-07-23 01:51:27.330798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.518 [2024-07-23 01:51:27.330824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.518 qpair failed and we were unable to recover it.
00:30:14.518 [2024-07-23 01:51:27.330982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.518 [2024-07-23 01:51:27.331143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.519 [2024-07-23 01:51:27.331167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.519 qpair failed and we were unable to recover it.
00:30:14.519 [2024-07-23 01:51:27.331320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.519 [2024-07-23 01:51:27.331498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.519 [2024-07-23 01:51:27.331522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.519 qpair failed and we were unable to recover it.
00:30:14.519 [2024-07-23 01:51:27.331699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.519 [2024-07-23 01:51:27.331891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.519 [2024-07-23 01:51:27.331917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.519 qpair failed and we were unable to recover it.
00:30:14.519 [2024-07-23 01:51:27.332054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.519 [2024-07-23 01:51:27.332218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.519 [2024-07-23 01:51:27.332243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.519 qpair failed and we were unable to recover it.
00:30:14.519 [2024-07-23 01:51:27.332408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.519 [2024-07-23 01:51:27.332540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.519 [2024-07-23 01:51:27.332566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.519 qpair failed and we were unable to recover it.
00:30:14.519 [2024-07-23 01:51:27.332712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.519 [2024-07-23 01:51:27.332859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.519 [2024-07-23 01:51:27.332884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.519 qpair failed and we were unable to recover it.
00:30:14.519 [2024-07-23 01:51:27.333035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.519 [2024-07-23 01:51:27.333179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.519 [2024-07-23 01:51:27.333204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.519 qpair failed and we were unable to recover it.
00:30:14.519 [2024-07-23 01:51:27.333366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.519 [2024-07-23 01:51:27.333501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.519 [2024-07-23 01:51:27.333526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.519 qpair failed and we were unable to recover it.
00:30:14.519 [2024-07-23 01:51:27.333664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.519 [2024-07-23 01:51:27.333798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.519 [2024-07-23 01:51:27.333824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.519 qpair failed and we were unable to recover it.
00:30:14.519 [2024-07-23 01:51:27.333986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.519 [2024-07-23 01:51:27.334158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.519 [2024-07-23 01:51:27.334183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.519 qpair failed and we were unable to recover it.
00:30:14.519 [2024-07-23 01:51:27.334350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.519 [2024-07-23 01:51:27.334510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.519 [2024-07-23 01:51:27.334534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.519 qpair failed and we were unable to recover it.
00:30:14.519 [2024-07-23 01:51:27.334674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.519 [2024-07-23 01:51:27.334832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.519 [2024-07-23 01:51:27.334857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.519 qpair failed and we were unable to recover it.
00:30:14.519 [2024-07-23 01:51:27.335020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.519 [2024-07-23 01:51:27.335153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.519 [2024-07-23 01:51:27.335180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.519 qpair failed and we were unable to recover it.
00:30:14.519 [2024-07-23 01:51:27.335329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.519 [2024-07-23 01:51:27.335453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.519 [2024-07-23 01:51:27.335478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.519 qpair failed and we were unable to recover it.
00:30:14.519 [2024-07-23 01:51:27.335649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.519 [2024-07-23 01:51:27.335817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.519 [2024-07-23 01:51:27.335842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.519 qpair failed and we were unable to recover it.
00:30:14.519 [2024-07-23 01:51:27.335980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.519 [2024-07-23 01:51:27.336136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.519 [2024-07-23 01:51:27.336161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.519 qpair failed and we were unable to recover it.
00:30:14.519 [2024-07-23 01:51:27.336332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.519 [2024-07-23 01:51:27.336462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.519 [2024-07-23 01:51:27.336487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.519 qpair failed and we were unable to recover it.
00:30:14.519 [2024-07-23 01:51:27.336629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.519 [2024-07-23 01:51:27.336808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.519 [2024-07-23 01:51:27.336833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.519 qpair failed and we were unable to recover it.
00:30:14.519 [2024-07-23 01:51:27.336998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.519 [2024-07-23 01:51:27.337130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.519 [2024-07-23 01:51:27.337155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.519 qpair failed and we were unable to recover it.
00:30:14.519 [2024-07-23 01:51:27.337328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.519 [2024-07-23 01:51:27.337489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.519 [2024-07-23 01:51:27.337514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.519 qpair failed and we were unable to recover it.
00:30:14.519 [2024-07-23 01:51:27.337684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.519 [2024-07-23 01:51:27.337822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.519 [2024-07-23 01:51:27.337847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.519 qpair failed and we were unable to recover it.
00:30:14.519 [2024-07-23 01:51:27.337978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.519 [2024-07-23 01:51:27.338102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.519 [2024-07-23 01:51:27.338127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.519 qpair failed and we were unable to recover it.
00:30:14.519 [2024-07-23 01:51:27.338270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.519 [2024-07-23 01:51:27.338429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.519 [2024-07-23 01:51:27.338466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.519 qpair failed and we were unable to recover it.
00:30:14.519 [2024-07-23 01:51:27.338625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.519 [2024-07-23 01:51:27.338784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.519 [2024-07-23 01:51:27.338809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.519 qpair failed and we were unable to recover it.
00:30:14.519 [2024-07-23 01:51:27.338982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.519 [2024-07-23 01:51:27.339145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.519 [2024-07-23 01:51:27.339184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.519 qpair failed and we were unable to recover it.
00:30:14.519 [2024-07-23 01:51:27.339375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.519 [2024-07-23 01:51:27.339543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.519 [2024-07-23 01:51:27.339570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.519 qpair failed and we were unable to recover it.
00:30:14.519 [2024-07-23 01:51:27.339731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.519 [2024-07-23 01:51:27.339900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.519 [2024-07-23 01:51:27.339926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.519 qpair failed and we were unable to recover it.
00:30:14.519 [2024-07-23 01:51:27.340078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.519 [2024-07-23 01:51:27.340242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.519 [2024-07-23 01:51:27.340267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.519 qpair failed and we were unable to recover it.
00:30:14.520 [2024-07-23 01:51:27.340438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.520 [2024-07-23 01:51:27.340566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.520 [2024-07-23 01:51:27.340592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.520 qpair failed and we were unable to recover it.
00:30:14.520 [2024-07-23 01:51:27.340762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.520 [2024-07-23 01:51:27.340927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.520 [2024-07-23 01:51:27.340952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.520 qpair failed and we were unable to recover it.
00:30:14.520 [2024-07-23 01:51:27.341100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.520 [2024-07-23 01:51:27.341289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.520 [2024-07-23 01:51:27.341314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.520 qpair failed and we were unable to recover it.
00:30:14.520 [2024-07-23 01:51:27.341479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.520 [2024-07-23 01:51:27.341644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.520 [2024-07-23 01:51:27.341671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.520 qpair failed and we were unable to recover it.
00:30:14.520 [2024-07-23 01:51:27.341834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.520 [2024-07-23 01:51:27.341987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.520 [2024-07-23 01:51:27.342012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.520 qpair failed and we were unable to recover it.
00:30:14.520 [2024-07-23 01:51:27.342146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.520 [2024-07-23 01:51:27.342284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.520 [2024-07-23 01:51:27.342311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.520 qpair failed and we were unable to recover it.
00:30:14.520 [2024-07-23 01:51:27.342464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.520 [2024-07-23 01:51:27.342594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.520 [2024-07-23 01:51:27.342623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.520 qpair failed and we were unable to recover it.
00:30:14.520 [2024-07-23 01:51:27.342786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.520 [2024-07-23 01:51:27.342923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.520 [2024-07-23 01:51:27.342950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.520 qpair failed and we were unable to recover it.
00:30:14.520 [2024-07-23 01:51:27.343088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.520 [2024-07-23 01:51:27.343221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.520 [2024-07-23 01:51:27.343246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.520 qpair failed and we were unable to recover it.
00:30:14.520 [2024-07-23 01:51:27.343409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.520 [2024-07-23 01:51:27.343544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.520 [2024-07-23 01:51:27.343568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.520 qpair failed and we were unable to recover it.
00:30:14.520 [2024-07-23 01:51:27.343711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.520 [2024-07-23 01:51:27.343878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.520 [2024-07-23 01:51:27.343904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.520 qpair failed and we were unable to recover it.
00:30:14.520 [2024-07-23 01:51:27.344053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.520 [2024-07-23 01:51:27.344216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.520 [2024-07-23 01:51:27.344243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.520 qpair failed and we were unable to recover it.
00:30:14.520 [2024-07-23 01:51:27.344438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.520 [2024-07-23 01:51:27.344570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.520 [2024-07-23 01:51:27.344596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.520 qpair failed and we were unable to recover it.
00:30:14.520 [2024-07-23 01:51:27.344786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.520 [2024-07-23 01:51:27.344946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.520 [2024-07-23 01:51:27.344971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.520 qpair failed and we were unable to recover it.
00:30:14.520 [2024-07-23 01:51:27.345110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.520 [2024-07-23 01:51:27.345237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.520 [2024-07-23 01:51:27.345261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.520 qpair failed and we were unable to recover it.
00:30:14.520 [2024-07-23 01:51:27.345420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.520 [2024-07-23 01:51:27.345570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.520 [2024-07-23 01:51:27.345594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.520 qpair failed and we were unable to recover it.
00:30:14.520 [2024-07-23 01:51:27.345755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.520 [2024-07-23 01:51:27.345925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.520 [2024-07-23 01:51:27.345950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.520 qpair failed and we were unable to recover it.
00:30:14.520 [2024-07-23 01:51:27.346144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.520 [2024-07-23 01:51:27.346315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.520 [2024-07-23 01:51:27.346339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.520 qpair failed and we were unable to recover it.
00:30:14.520 [2024-07-23 01:51:27.346500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.520 [2024-07-23 01:51:27.346662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.520 [2024-07-23 01:51:27.346688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.520 qpair failed and we were unable to recover it.
00:30:14.520 [2024-07-23 01:51:27.346822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.520 [2024-07-23 01:51:27.346986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.520 [2024-07-23 01:51:27.347011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.520 qpair failed and we were unable to recover it.
00:30:14.520 [2024-07-23 01:51:27.347193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.520 [2024-07-23 01:51:27.347365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.520 [2024-07-23 01:51:27.347389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.520 qpair failed and we were unable to recover it.
00:30:14.520 [2024-07-23 01:51:27.347538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.520 [2024-07-23 01:51:27.347676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.520 [2024-07-23 01:51:27.347701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.520 qpair failed and we were unable to recover it.
00:30:14.520 [2024-07-23 01:51:27.347858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.520 [2024-07-23 01:51:27.348020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.520 [2024-07-23 01:51:27.348045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.520 qpair failed and we were unable to recover it.
00:30:14.520 [2024-07-23 01:51:27.348178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.520 [2024-07-23 01:51:27.348344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.520 [2024-07-23 01:51:27.348369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.520 qpair failed and we were unable to recover it.
00:30:14.520 [2024-07-23 01:51:27.348535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.520 [2024-07-23 01:51:27.348691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.520 [2024-07-23 01:51:27.348717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.520 qpair failed and we were unable to recover it.
00:30:14.520 [2024-07-23 01:51:27.348851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.520 [2024-07-23 01:51:27.349025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.520 [2024-07-23 01:51:27.349051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.520 qpair failed and we were unable to recover it.
00:30:14.520 [2024-07-23 01:51:27.349193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.520 [2024-07-23 01:51:27.349332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.520 [2024-07-23 01:51:27.349357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.520 qpair failed and we were unable to recover it.
00:30:14.520 [2024-07-23 01:51:27.349487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.521 [2024-07-23 01:51:27.349641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.521 [2024-07-23 01:51:27.349672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.521 qpair failed and we were unable to recover it.
00:30:14.521 [2024-07-23 01:51:27.349825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.521 [2024-07-23 01:51:27.350020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.521 [2024-07-23 01:51:27.350045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.521 qpair failed and we were unable to recover it.
00:30:14.521 [2024-07-23 01:51:27.350187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.521 [2024-07-23 01:51:27.350319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.521 [2024-07-23 01:51:27.350344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.521 qpair failed and we were unable to recover it.
00:30:14.521 [2024-07-23 01:51:27.350492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.521 [2024-07-23 01:51:27.350687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.521 [2024-07-23 01:51:27.350713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.521 qpair failed and we were unable to recover it.
00:30:14.521 [2024-07-23 01:51:27.350843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.521 [2024-07-23 01:51:27.351001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.521 [2024-07-23 01:51:27.351026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.521 qpair failed and we were unable to recover it.
00:30:14.521 [2024-07-23 01:51:27.351196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.521 [2024-07-23 01:51:27.351348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.521 [2024-07-23 01:51:27.351372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.521 qpair failed and we were unable to recover it. 00:30:14.521 [2024-07-23 01:51:27.351508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.521 [2024-07-23 01:51:27.351673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.521 [2024-07-23 01:51:27.351698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.521 qpair failed and we were unable to recover it. 00:30:14.521 [2024-07-23 01:51:27.351863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.521 [2024-07-23 01:51:27.352034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.521 [2024-07-23 01:51:27.352061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.521 qpair failed and we were unable to recover it. 00:30:14.521 [2024-07-23 01:51:27.352221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.521 [2024-07-23 01:51:27.352380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.521 [2024-07-23 01:51:27.352406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.521 qpair failed and we were unable to recover it. 
00:30:14.521 [2024-07-23 01:51:27.352552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.521 [2024-07-23 01:51:27.352713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.521 [2024-07-23 01:51:27.352739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.521 qpair failed and we were unable to recover it. 00:30:14.521 [2024-07-23 01:51:27.352904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.521 [2024-07-23 01:51:27.353041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.521 [2024-07-23 01:51:27.353069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.521 qpair failed and we were unable to recover it. 00:30:14.521 [2024-07-23 01:51:27.353199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.521 [2024-07-23 01:51:27.353367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.521 [2024-07-23 01:51:27.353392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.521 qpair failed and we were unable to recover it. 00:30:14.521 [2024-07-23 01:51:27.353555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.521 [2024-07-23 01:51:27.353719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.521 [2024-07-23 01:51:27.353744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.521 qpair failed and we were unable to recover it. 
00:30:14.521 [2024-07-23 01:51:27.353880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.521 [2024-07-23 01:51:27.354011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.521 [2024-07-23 01:51:27.354036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.521 qpair failed and we were unable to recover it. 00:30:14.521 [2024-07-23 01:51:27.354169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.521 [2024-07-23 01:51:27.354331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.521 [2024-07-23 01:51:27.354357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.521 qpair failed and we were unable to recover it. 00:30:14.521 [2024-07-23 01:51:27.354518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.521 [2024-07-23 01:51:27.354661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.521 [2024-07-23 01:51:27.354688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.521 qpair failed and we were unable to recover it. 00:30:14.521 [2024-07-23 01:51:27.354818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.521 [2024-07-23 01:51:27.354966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.521 [2024-07-23 01:51:27.354990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.521 qpair failed and we were unable to recover it. 
00:30:14.521 [2024-07-23 01:51:27.355135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.521 [2024-07-23 01:51:27.355311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.521 [2024-07-23 01:51:27.355336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.521 qpair failed and we were unable to recover it. 00:30:14.521 [2024-07-23 01:51:27.355486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.521 [2024-07-23 01:51:27.355676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.521 [2024-07-23 01:51:27.355701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.521 qpair failed and we were unable to recover it. 00:30:14.521 [2024-07-23 01:51:27.355873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.521 [2024-07-23 01:51:27.356006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.521 [2024-07-23 01:51:27.356031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.521 qpair failed and we were unable to recover it. 00:30:14.521 [2024-07-23 01:51:27.356158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.521 [2024-07-23 01:51:27.356290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.521 [2024-07-23 01:51:27.356319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.521 qpair failed and we were unable to recover it. 
00:30:14.521 [2024-07-23 01:51:27.356495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.521 [2024-07-23 01:51:27.356634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.521 [2024-07-23 01:51:27.356660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.521 qpair failed and we were unable to recover it. 00:30:14.521 [2024-07-23 01:51:27.356805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.521 [2024-07-23 01:51:27.356949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.521 [2024-07-23 01:51:27.356974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.521 qpair failed and we were unable to recover it. 00:30:14.521 [2024-07-23 01:51:27.357147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.521 [2024-07-23 01:51:27.357302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.521 [2024-07-23 01:51:27.357327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.521 qpair failed and we were unable to recover it. 00:30:14.521 [2024-07-23 01:51:27.357467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.521 [2024-07-23 01:51:27.357639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.521 [2024-07-23 01:51:27.357665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.521 qpair failed and we were unable to recover it. 
00:30:14.521 [2024-07-23 01:51:27.357836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.521 [2024-07-23 01:51:27.357978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.521 [2024-07-23 01:51:27.358003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.521 qpair failed and we were unable to recover it. 00:30:14.521 [2024-07-23 01:51:27.358184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.521 [2024-07-23 01:51:27.358318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.521 [2024-07-23 01:51:27.358343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.521 qpair failed and we were unable to recover it. 00:30:14.521 [2024-07-23 01:51:27.358504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.521 [2024-07-23 01:51:27.358639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.521 [2024-07-23 01:51:27.358665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.522 qpair failed and we were unable to recover it. 00:30:14.522 [2024-07-23 01:51:27.358821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.522 [2024-07-23 01:51:27.358992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.522 [2024-07-23 01:51:27.359017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.522 qpair failed and we were unable to recover it. 
00:30:14.522 [2024-07-23 01:51:27.359151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.522 [2024-07-23 01:51:27.359278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.522 [2024-07-23 01:51:27.359303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.522 qpair failed and we were unable to recover it. 00:30:14.522 [2024-07-23 01:51:27.359460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.522 [2024-07-23 01:51:27.359596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.522 [2024-07-23 01:51:27.359627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.522 qpair failed and we were unable to recover it. 00:30:14.522 [2024-07-23 01:51:27.359777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.522 [2024-07-23 01:51:27.359910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.522 [2024-07-23 01:51:27.359935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.522 qpair failed and we were unable to recover it. 00:30:14.522 [2024-07-23 01:51:27.360097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.522 [2024-07-23 01:51:27.360264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.522 [2024-07-23 01:51:27.360289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.522 qpair failed and we were unable to recover it. 
00:30:14.522 [2024-07-23 01:51:27.360456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.522 [2024-07-23 01:51:27.360584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.522 [2024-07-23 01:51:27.360609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.522 qpair failed and we were unable to recover it. 00:30:14.522 [2024-07-23 01:51:27.360750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.522 [2024-07-23 01:51:27.360905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.522 [2024-07-23 01:51:27.360930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.522 qpair failed and we were unable to recover it. 00:30:14.522 [2024-07-23 01:51:27.361056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.522 [2024-07-23 01:51:27.361221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.522 [2024-07-23 01:51:27.361247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.522 qpair failed and we were unable to recover it. 00:30:14.522 [2024-07-23 01:51:27.361425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.522 [2024-07-23 01:51:27.361550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.522 [2024-07-23 01:51:27.361575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.522 qpair failed and we were unable to recover it. 
00:30:14.522 [2024-07-23 01:51:27.361763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.522 [2024-07-23 01:51:27.361925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.522 [2024-07-23 01:51:27.361950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.522 qpair failed and we were unable to recover it. 00:30:14.522 [2024-07-23 01:51:27.362141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.522 [2024-07-23 01:51:27.362272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.522 [2024-07-23 01:51:27.362298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.522 qpair failed and we were unable to recover it. 00:30:14.522 [2024-07-23 01:51:27.362458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.522 [2024-07-23 01:51:27.362635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.522 [2024-07-23 01:51:27.362661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.522 qpair failed and we were unable to recover it. 00:30:14.522 [2024-07-23 01:51:27.362798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.522 [2024-07-23 01:51:27.362924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.522 [2024-07-23 01:51:27.362949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.522 qpair failed and we were unable to recover it. 
00:30:14.522 [2024-07-23 01:51:27.363115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.522 [2024-07-23 01:51:27.363277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.522 [2024-07-23 01:51:27.363303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.522 qpair failed and we were unable to recover it. 00:30:14.522 [2024-07-23 01:51:27.363471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.522 [2024-07-23 01:51:27.363603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.522 [2024-07-23 01:51:27.363635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.522 qpair failed and we were unable to recover it. 00:30:14.522 [2024-07-23 01:51:27.363825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.522 [2024-07-23 01:51:27.363961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.522 [2024-07-23 01:51:27.363986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.522 qpair failed and we were unable to recover it. 00:30:14.522 [2024-07-23 01:51:27.364135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.522 [2024-07-23 01:51:27.364311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.522 [2024-07-23 01:51:27.364335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.522 qpair failed and we were unable to recover it. 
00:30:14.522 [2024-07-23 01:51:27.364486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.522 [2024-07-23 01:51:27.364623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.522 [2024-07-23 01:51:27.364648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.522 qpair failed and we were unable to recover it. 00:30:14.522 [2024-07-23 01:51:27.364816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.522 [2024-07-23 01:51:27.364962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.522 [2024-07-23 01:51:27.364987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.522 qpair failed and we were unable to recover it. 00:30:14.522 [2024-07-23 01:51:27.365119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.522 [2024-07-23 01:51:27.365249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.522 [2024-07-23 01:51:27.365274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.522 qpair failed and we were unable to recover it. 00:30:14.522 [2024-07-23 01:51:27.365462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.522 [2024-07-23 01:51:27.365601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.522 [2024-07-23 01:51:27.365646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.522 qpair failed and we were unable to recover it. 
00:30:14.522 [2024-07-23 01:51:27.365786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.522 [2024-07-23 01:51:27.365978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.522 [2024-07-23 01:51:27.366003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.522 qpair failed and we were unable to recover it. 00:30:14.522 [2024-07-23 01:51:27.366138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.522 [2024-07-23 01:51:27.366271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.522 [2024-07-23 01:51:27.366296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.522 qpair failed and we were unable to recover it. 00:30:14.522 [2024-07-23 01:51:27.366457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.522 [2024-07-23 01:51:27.366627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.522 [2024-07-23 01:51:27.366653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.522 qpair failed and we were unable to recover it. 00:30:14.522 [2024-07-23 01:51:27.366782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.522 [2024-07-23 01:51:27.366972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.522 [2024-07-23 01:51:27.366997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.522 qpair failed and we were unable to recover it. 
00:30:14.522 [2024-07-23 01:51:27.367190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.522 [2024-07-23 01:51:27.367354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.522 [2024-07-23 01:51:27.367379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.522 qpair failed and we were unable to recover it. 00:30:14.522 [2024-07-23 01:51:27.367523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.522 [2024-07-23 01:51:27.367700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.522 [2024-07-23 01:51:27.367725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.522 qpair failed and we were unable to recover it. 00:30:14.522 [2024-07-23 01:51:27.367892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.522 [2024-07-23 01:51:27.368019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.523 [2024-07-23 01:51:27.368044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.523 qpair failed and we were unable to recover it. 00:30:14.523 [2024-07-23 01:51:27.368175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.523 [2024-07-23 01:51:27.368361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.523 [2024-07-23 01:51:27.368385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.523 qpair failed and we were unable to recover it. 
00:30:14.523 [2024-07-23 01:51:27.368550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.523 [2024-07-23 01:51:27.368686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.523 [2024-07-23 01:51:27.368712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.523 qpair failed and we were unable to recover it. 00:30:14.523 [2024-07-23 01:51:27.368847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.523 [2024-07-23 01:51:27.369006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.523 [2024-07-23 01:51:27.369031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.523 qpair failed and we were unable to recover it. 00:30:14.523 [2024-07-23 01:51:27.369159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.523 [2024-07-23 01:51:27.369309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.523 [2024-07-23 01:51:27.369334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.523 qpair failed and we were unable to recover it. 00:30:14.523 [2024-07-23 01:51:27.369497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.523 [2024-07-23 01:51:27.369633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.523 [2024-07-23 01:51:27.369659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.523 qpair failed and we were unable to recover it. 
00:30:14.523 [2024-07-23 01:51:27.369811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.523 [2024-07-23 01:51:27.369944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.523 [2024-07-23 01:51:27.369971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.523 qpair failed and we were unable to recover it. 00:30:14.523 [2024-07-23 01:51:27.370120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.523 [2024-07-23 01:51:27.370309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.523 [2024-07-23 01:51:27.370333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.523 qpair failed and we were unable to recover it. 00:30:14.523 [2024-07-23 01:51:27.370496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.523 [2024-07-23 01:51:27.370632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.523 [2024-07-23 01:51:27.370659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.523 qpair failed and we were unable to recover it. 00:30:14.523 [2024-07-23 01:51:27.370809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.523 [2024-07-23 01:51:27.370943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.523 [2024-07-23 01:51:27.370968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.523 qpair failed and we were unable to recover it. 
00:30:14.523 [2024-07-23 01:51:27.371133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.523 [2024-07-23 01:51:27.371257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.523 [2024-07-23 01:51:27.371282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.523 qpair failed and we were unable to recover it. 00:30:14.523 [2024-07-23 01:51:27.371415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.523 [2024-07-23 01:51:27.371566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.523 [2024-07-23 01:51:27.371593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.523 qpair failed and we were unable to recover it. 00:30:14.523 [2024-07-23 01:51:27.371740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.523 [2024-07-23 01:51:27.371902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.523 [2024-07-23 01:51:27.371927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.523 qpair failed and we were unable to recover it. 00:30:14.523 [2024-07-23 01:51:27.372061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.523 [2024-07-23 01:51:27.372192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.523 [2024-07-23 01:51:27.372218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.523 qpair failed and we were unable to recover it. 
00:30:14.523 [2024-07-23 01:51:27.372413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.523 [2024-07-23 01:51:27.372574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.523 [2024-07-23 01:51:27.372599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.523 qpair failed and we were unable to recover it. 00:30:14.523 [2024-07-23 01:51:27.372768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.523 [2024-07-23 01:51:27.372903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.523 [2024-07-23 01:51:27.372928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.523 qpair failed and we were unable to recover it. 00:30:14.523 [2024-07-23 01:51:27.373078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.523 [2024-07-23 01:51:27.373226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.523 [2024-07-23 01:51:27.373253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.523 qpair failed and we were unable to recover it. 00:30:14.523 [2024-07-23 01:51:27.373417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.523 [2024-07-23 01:51:27.373589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.523 [2024-07-23 01:51:27.373625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.523 qpair failed and we were unable to recover it. 
00:30:14.526 [... same four messages (connect() failed, errno = 111 twice; sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeated for each reconnect attempt from 01:51:27.373770 through 01:51:27.402206 ...]
00:30:14.526 [2024-07-23 01:51:27.402342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.526 [2024-07-23 01:51:27.402473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.526 [2024-07-23 01:51:27.402498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.526 qpair failed and we were unable to recover it. 00:30:14.526 [2024-07-23 01:51:27.402691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.526 [2024-07-23 01:51:27.402821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.526 [2024-07-23 01:51:27.402846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.526 qpair failed and we were unable to recover it. 00:30:14.526 [2024-07-23 01:51:27.402981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.526 [2024-07-23 01:51:27.403114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.526 [2024-07-23 01:51:27.403139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.526 qpair failed and we were unable to recover it. 00:30:14.526 [2024-07-23 01:51:27.403309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.526 [2024-07-23 01:51:27.403446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.526 [2024-07-23 01:51:27.403471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.526 qpair failed and we were unable to recover it. 
00:30:14.526 [2024-07-23 01:51:27.403653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.526 [2024-07-23 01:51:27.403818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.527 [2024-07-23 01:51:27.403850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.527 qpair failed and we were unable to recover it. 00:30:14.527 [2024-07-23 01:51:27.404022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.527 [2024-07-23 01:51:27.404168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.527 [2024-07-23 01:51:27.404194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.527 qpair failed and we were unable to recover it. 00:30:14.527 [2024-07-23 01:51:27.404331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.527 [2024-07-23 01:51:27.404495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.527 [2024-07-23 01:51:27.404522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.527 qpair failed and we were unable to recover it. 00:30:14.527 [2024-07-23 01:51:27.404698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.527 [2024-07-23 01:51:27.404832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.527 [2024-07-23 01:51:27.404857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.527 qpair failed and we were unable to recover it. 
00:30:14.527 [2024-07-23 01:51:27.404995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.527 [2024-07-23 01:51:27.405141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.527 [2024-07-23 01:51:27.405167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.527 qpair failed and we were unable to recover it. 00:30:14.527 [2024-07-23 01:51:27.405301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.527 [2024-07-23 01:51:27.405475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.527 [2024-07-23 01:51:27.405500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.527 qpair failed and we were unable to recover it. 00:30:14.527 [2024-07-23 01:51:27.405641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.527 [2024-07-23 01:51:27.405804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.527 [2024-07-23 01:51:27.405829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.527 qpair failed and we were unable to recover it. 00:30:14.527 [2024-07-23 01:51:27.405972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.527 [2024-07-23 01:51:27.406131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.527 [2024-07-23 01:51:27.406158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.527 qpair failed and we were unable to recover it. 
00:30:14.527 [2024-07-23 01:51:27.406687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.527 [2024-07-23 01:51:27.406872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.527 [2024-07-23 01:51:27.406899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.527 qpair failed and we were unable to recover it. 00:30:14.527 [2024-07-23 01:51:27.407046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.527 [2024-07-23 01:51:27.407218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.527 [2024-07-23 01:51:27.407244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.527 qpair failed and we were unable to recover it. 00:30:14.527 [2024-07-23 01:51:27.407412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.527 [2024-07-23 01:51:27.407572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.527 [2024-07-23 01:51:27.407626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.527 qpair failed and we were unable to recover it. 00:30:14.527 [2024-07-23 01:51:27.407790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.527 [2024-07-23 01:51:27.407955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.527 [2024-07-23 01:51:27.407980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.527 qpair failed and we were unable to recover it. 
00:30:14.527 [2024-07-23 01:51:27.408186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.527 [2024-07-23 01:51:27.408318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.527 [2024-07-23 01:51:27.408354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.527 qpair failed and we were unable to recover it. 00:30:14.527 [2024-07-23 01:51:27.408515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.527 [2024-07-23 01:51:27.408655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.527 [2024-07-23 01:51:27.408681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.527 qpair failed and we were unable to recover it. 00:30:14.527 [2024-07-23 01:51:27.408811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.527 [2024-07-23 01:51:27.408944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.527 [2024-07-23 01:51:27.408969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.527 qpair failed and we were unable to recover it. 00:30:14.527 [2024-07-23 01:51:27.409117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.527 [2024-07-23 01:51:27.409281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.527 [2024-07-23 01:51:27.409307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.527 qpair failed and we were unable to recover it. 
00:30:14.527 [2024-07-23 01:51:27.409477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.527 [2024-07-23 01:51:27.409607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.527 [2024-07-23 01:51:27.409638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.527 qpair failed and we were unable to recover it. 00:30:14.527 [2024-07-23 01:51:27.409818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.527 [2024-07-23 01:51:27.409952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.527 [2024-07-23 01:51:27.409979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.527 qpair failed and we were unable to recover it. 00:30:14.527 [2024-07-23 01:51:27.410141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.527 [2024-07-23 01:51:27.410279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.527 [2024-07-23 01:51:27.410304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.527 qpair failed and we were unable to recover it. 00:30:14.527 [2024-07-23 01:51:27.410442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.527 [2024-07-23 01:51:27.410598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.527 [2024-07-23 01:51:27.410636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.527 qpair failed and we were unable to recover it. 
00:30:14.527 [2024-07-23 01:51:27.410776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.527 [2024-07-23 01:51:27.410934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.527 [2024-07-23 01:51:27.410964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.527 qpair failed and we were unable to recover it. 00:30:14.527 [2024-07-23 01:51:27.411103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.527 [2024-07-23 01:51:27.411266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.527 [2024-07-23 01:51:27.411301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.527 qpair failed and we were unable to recover it. 00:30:14.527 [2024-07-23 01:51:27.411446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.527 [2024-07-23 01:51:27.411574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.527 [2024-07-23 01:51:27.411599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.527 qpair failed and we were unable to recover it. 00:30:14.527 [2024-07-23 01:51:27.411794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.527 [2024-07-23 01:51:27.411956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.527 [2024-07-23 01:51:27.411981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.527 qpair failed and we were unable to recover it. 
00:30:14.527 [2024-07-23 01:51:27.412122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.527 [2024-07-23 01:51:27.412255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.527 [2024-07-23 01:51:27.412281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.527 qpair failed and we were unable to recover it. 00:30:14.527 [2024-07-23 01:51:27.412446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.527 [2024-07-23 01:51:27.412587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.527 [2024-07-23 01:51:27.412620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.527 qpair failed and we were unable to recover it. 00:30:14.527 [2024-07-23 01:51:27.412784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.527 [2024-07-23 01:51:27.412916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.527 [2024-07-23 01:51:27.412942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.527 qpair failed and we were unable to recover it. 00:30:14.527 [2024-07-23 01:51:27.413129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.527 [2024-07-23 01:51:27.413261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.528 [2024-07-23 01:51:27.413286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.528 qpair failed and we were unable to recover it. 
00:30:14.528 [2024-07-23 01:51:27.413493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.528 [2024-07-23 01:51:27.413647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.528 [2024-07-23 01:51:27.413674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.528 qpair failed and we were unable to recover it. 00:30:14.528 [2024-07-23 01:51:27.413820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.528 [2024-07-23 01:51:27.413957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.528 [2024-07-23 01:51:27.413992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.528 qpair failed and we were unable to recover it. 00:30:14.528 [2024-07-23 01:51:27.414151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.528 [2024-07-23 01:51:27.414277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.528 [2024-07-23 01:51:27.414301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.528 qpair failed and we were unable to recover it. 00:30:14.528 [2024-07-23 01:51:27.414433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.528 [2024-07-23 01:51:27.414578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.528 [2024-07-23 01:51:27.414603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.528 qpair failed and we were unable to recover it. 
00:30:14.528 [2024-07-23 01:51:27.414751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.528 [2024-07-23 01:51:27.414942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.528 [2024-07-23 01:51:27.414968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.528 qpair failed and we were unable to recover it. 00:30:14.528 [2024-07-23 01:51:27.415134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.528 [2024-07-23 01:51:27.415270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.528 [2024-07-23 01:51:27.415295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.528 qpair failed and we were unable to recover it. 00:30:14.528 [2024-07-23 01:51:27.415487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.528 [2024-07-23 01:51:27.415653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.528 [2024-07-23 01:51:27.415679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.528 qpair failed and we were unable to recover it. 00:30:14.528 [2024-07-23 01:51:27.415816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.528 [2024-07-23 01:51:27.415986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.528 [2024-07-23 01:51:27.416011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.528 qpair failed and we were unable to recover it. 
00:30:14.528 [2024-07-23 01:51:27.416165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.528 [2024-07-23 01:51:27.416331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.528 [2024-07-23 01:51:27.416356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.528 qpair failed and we were unable to recover it. 00:30:14.528 [2024-07-23 01:51:27.416487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.528 [2024-07-23 01:51:27.416628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.528 [2024-07-23 01:51:27.416661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.528 qpair failed and we were unable to recover it. 00:30:14.528 [2024-07-23 01:51:27.416811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.528 [2024-07-23 01:51:27.416945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.528 [2024-07-23 01:51:27.416972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.528 qpair failed and we were unable to recover it. 00:30:14.528 [2024-07-23 01:51:27.417111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.528 [2024-07-23 01:51:27.417299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.528 [2024-07-23 01:51:27.417324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.528 qpair failed and we were unable to recover it. 
00:30:14.528 [2024-07-23 01:51:27.417514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.528 [2024-07-23 01:51:27.417687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.528 [2024-07-23 01:51:27.417713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.528 qpair failed and we were unable to recover it. 00:30:14.528 [2024-07-23 01:51:27.417877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.528 [2024-07-23 01:51:27.418018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.528 [2024-07-23 01:51:27.418043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.528 qpair failed and we were unable to recover it. 00:30:14.528 [2024-07-23 01:51:27.418210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.528 [2024-07-23 01:51:27.418351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.528 [2024-07-23 01:51:27.418377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.528 qpair failed and we were unable to recover it. 00:30:14.528 [2024-07-23 01:51:27.418562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.528 [2024-07-23 01:51:27.418706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.528 [2024-07-23 01:51:27.418732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.528 qpair failed and we were unable to recover it. 
00:30:14.528 [2024-07-23 01:51:27.418912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.528 [2024-07-23 01:51:27.419079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.528 [2024-07-23 01:51:27.419105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.528 qpair failed and we were unable to recover it. 00:30:14.528 [2024-07-23 01:51:27.419238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.528 [2024-07-23 01:51:27.419380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.528 [2024-07-23 01:51:27.419406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.528 qpair failed and we were unable to recover it. 00:30:14.528 [2024-07-23 01:51:27.419555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.528 [2024-07-23 01:51:27.419727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.528 [2024-07-23 01:51:27.419753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.528 qpair failed and we were unable to recover it. 00:30:14.528 [2024-07-23 01:51:27.419889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.528 [2024-07-23 01:51:27.420028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.528 [2024-07-23 01:51:27.420053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.528 qpair failed and we were unable to recover it. 
00:30:14.528 [2024-07-23 01:51:27.420190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.528 [2024-07-23 01:51:27.420336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.528 [2024-07-23 01:51:27.420361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.528 qpair failed and we were unable to recover it. 00:30:14.528 [2024-07-23 01:51:27.420526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.528 [2024-07-23 01:51:27.420660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.528 [2024-07-23 01:51:27.420686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.528 qpair failed and we were unable to recover it. 00:30:14.528 [2024-07-23 01:51:27.420847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.528 [2024-07-23 01:51:27.421030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.529 [2024-07-23 01:51:27.421061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.529 qpair failed and we were unable to recover it. 00:30:14.529 [2024-07-23 01:51:27.421240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.529 [2024-07-23 01:51:27.421388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.529 [2024-07-23 01:51:27.421413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.529 qpair failed and we were unable to recover it. 
00:30:14.529 [2024-07-23 01:51:27.421550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.529 [2024-07-23 01:51:27.421688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.529 [2024-07-23 01:51:27.421715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.529 qpair failed and we were unable to recover it. 00:30:14.529 [2024-07-23 01:51:27.421857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.529 [2024-07-23 01:51:27.421993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.529 [2024-07-23 01:51:27.422019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.529 qpair failed and we were unable to recover it. 00:30:14.529 [2024-07-23 01:51:27.422152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.529 [2024-07-23 01:51:27.422294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.529 [2024-07-23 01:51:27.422320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.529 qpair failed and we were unable to recover it. 00:30:14.529 [2024-07-23 01:51:27.422508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.529 [2024-07-23 01:51:27.422643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.529 [2024-07-23 01:51:27.422668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.529 qpair failed and we were unable to recover it. 
00:30:14.529 [2024-07-23 01:51:27.422804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.529 [2024-07-23 01:51:27.422949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.529 [2024-07-23 01:51:27.422974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.529 qpair failed and we were unable to recover it. 00:30:14.529 [2024-07-23 01:51:27.423108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.529 [2024-07-23 01:51:27.423287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.529 [2024-07-23 01:51:27.423312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.529 qpair failed and we were unable to recover it. 00:30:14.529 [2024-07-23 01:51:27.423475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.529 [2024-07-23 01:51:27.423623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.529 [2024-07-23 01:51:27.423649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.529 qpair failed and we were unable to recover it. 00:30:14.529 [2024-07-23 01:51:27.423787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.529 [2024-07-23 01:51:27.423916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.529 [2024-07-23 01:51:27.423945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.529 qpair failed and we were unable to recover it. 
00:30:14.529 [2024-07-23 01:51:27.424084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.529 [2024-07-23 01:51:27.424260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.529 [2024-07-23 01:51:27.424286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.529 qpair failed and we were unable to recover it. 00:30:14.529 [2024-07-23 01:51:27.424452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.529 [2024-07-23 01:51:27.424611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.529 [2024-07-23 01:51:27.424641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.529 qpair failed and we were unable to recover it. 00:30:14.529 [2024-07-23 01:51:27.424775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.529 [2024-07-23 01:51:27.424921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.529 [2024-07-23 01:51:27.424946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.529 qpair failed and we were unable to recover it. 00:30:14.529 [2024-07-23 01:51:27.425088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.529 [2024-07-23 01:51:27.425234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.529 [2024-07-23 01:51:27.425260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.529 qpair failed and we were unable to recover it. 
00:30:14.529 [2024-07-23 01:51:27.425426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.529 [2024-07-23 01:51:27.425602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.529 [2024-07-23 01:51:27.425655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.529 qpair failed and we were unable to recover it. 00:30:14.529 [2024-07-23 01:51:27.425814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.529 [2024-07-23 01:51:27.425951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.529 [2024-07-23 01:51:27.425976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.529 qpair failed and we were unable to recover it. 00:30:14.529 [2024-07-23 01:51:27.426149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.529 [2024-07-23 01:51:27.426285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.529 [2024-07-23 01:51:27.426311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.529 qpair failed and we were unable to recover it. 00:30:14.529 [2024-07-23 01:51:27.426447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.529 [2024-07-23 01:51:27.426604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.529 [2024-07-23 01:51:27.426637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.529 qpair failed and we were unable to recover it. 
00:30:14.529 [2024-07-23 01:51:27.426805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.529 [2024-07-23 01:51:27.426955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.529 [2024-07-23 01:51:27.426979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.529 qpair failed and we were unable to recover it. 00:30:14.529 [2024-07-23 01:51:27.427170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.529 [2024-07-23 01:51:27.427314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.529 [2024-07-23 01:51:27.427339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.529 qpair failed and we were unable to recover it. 00:30:14.529 [2024-07-23 01:51:27.427514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.529 [2024-07-23 01:51:27.427652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.529 [2024-07-23 01:51:27.427680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.529 qpair failed and we were unable to recover it. 00:30:14.529 [2024-07-23 01:51:27.427818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.529 [2024-07-23 01:51:27.427950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.529 [2024-07-23 01:51:27.427975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.529 qpair failed and we were unable to recover it. 
00:30:14.529 [2024-07-23 01:51:27.428145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.529 [2024-07-23 01:51:27.428305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.529 [2024-07-23 01:51:27.428330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.529 qpair failed and we were unable to recover it. 00:30:14.529 [2024-07-23 01:51:27.428471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.529 [2024-07-23 01:51:27.428642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.529 [2024-07-23 01:51:27.428669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.529 qpair failed and we were unable to recover it. 00:30:14.529 [2024-07-23 01:51:27.428840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.529 [2024-07-23 01:51:27.428984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.529 [2024-07-23 01:51:27.429009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.529 qpair failed and we were unable to recover it. 00:30:14.529 [2024-07-23 01:51:27.429172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.529 [2024-07-23 01:51:27.429311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.529 [2024-07-23 01:51:27.429336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.529 qpair failed and we were unable to recover it. 
00:30:14.529 [2024-07-23 01:51:27.429502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.529 [2024-07-23 01:51:27.429658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.529 [2024-07-23 01:51:27.429683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.529 qpair failed and we were unable to recover it. 00:30:14.529 [2024-07-23 01:51:27.429813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.529 [2024-07-23 01:51:27.429972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.529 [2024-07-23 01:51:27.429997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.529 qpair failed and we were unable to recover it. 00:30:14.529 [2024-07-23 01:51:27.430167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.530 [2024-07-23 01:51:27.430325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.530 [2024-07-23 01:51:27.430361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.530 qpair failed and we were unable to recover it. 00:30:14.530 [2024-07-23 01:51:27.430533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.530 [2024-07-23 01:51:27.430665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.530 [2024-07-23 01:51:27.430692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.530 qpair failed and we were unable to recover it. 
00:30:14.530 [2024-07-23 01:51:27.430856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.530 [2024-07-23 01:51:27.431027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.530 [2024-07-23 01:51:27.431052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.530 qpair failed and we were unable to recover it. 00:30:14.530 [2024-07-23 01:51:27.431220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.530 [2024-07-23 01:51:27.431390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.530 [2024-07-23 01:51:27.431416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.530 qpair failed and we were unable to recover it. 00:30:14.530 [2024-07-23 01:51:27.431543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.530 [2024-07-23 01:51:27.431681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.530 [2024-07-23 01:51:27.431707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.530 qpair failed and we were unable to recover it. 00:30:14.530 [2024-07-23 01:51:27.431839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.530 [2024-07-23 01:51:27.431978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.530 [2024-07-23 01:51:27.432003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.530 qpair failed and we were unable to recover it. 
00:30:14.530 [2024-07-23 01:51:27.432169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.530 [2024-07-23 01:51:27.432327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.530 [2024-07-23 01:51:27.432352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.530 qpair failed and we were unable to recover it. 00:30:14.530 [2024-07-23 01:51:27.432490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.530 [2024-07-23 01:51:27.432637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.530 [2024-07-23 01:51:27.432664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.530 qpair failed and we were unable to recover it. 00:30:14.530 [2024-07-23 01:51:27.432818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.530 [2024-07-23 01:51:27.432989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.530 [2024-07-23 01:51:27.433014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.530 qpair failed and we were unable to recover it. 00:30:14.530 [2024-07-23 01:51:27.433206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.530 [2024-07-23 01:51:27.433336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.530 [2024-07-23 01:51:27.433361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.530 qpair failed and we were unable to recover it. 
00:30:14.530 [2024-07-23 01:51:27.433525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.530 [2024-07-23 01:51:27.433683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.530 [2024-07-23 01:51:27.433710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.530 qpair failed and we were unable to recover it. 00:30:14.530 [2024-07-23 01:51:27.433841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.530 [2024-07-23 01:51:27.434006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.530 [2024-07-23 01:51:27.434032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.530 qpair failed and we were unable to recover it. 00:30:14.530 [2024-07-23 01:51:27.434170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.530 [2024-07-23 01:51:27.434301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.530 [2024-07-23 01:51:27.434326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.530 qpair failed and we were unable to recover it. 00:30:14.530 [2024-07-23 01:51:27.434527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.530 [2024-07-23 01:51:27.434703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.530 [2024-07-23 01:51:27.434730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.530 qpair failed and we were unable to recover it. 
00:30:14.530 [2024-07-23 01:51:27.434864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.530 [2024-07-23 01:51:27.435038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.530 [2024-07-23 01:51:27.435064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.530 qpair failed and we were unable to recover it. 00:30:14.530 [2024-07-23 01:51:27.435236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.530 [2024-07-23 01:51:27.435380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.530 [2024-07-23 01:51:27.435406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.530 qpair failed and we were unable to recover it. 00:30:14.530 [2024-07-23 01:51:27.435565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.530 [2024-07-23 01:51:27.435695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.530 [2024-07-23 01:51:27.435721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.530 qpair failed and we were unable to recover it. 00:30:14.530 [2024-07-23 01:51:27.435862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.530 [2024-07-23 01:51:27.436032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.530 [2024-07-23 01:51:27.436057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.530 qpair failed and we were unable to recover it. 
00:30:14.530 [2024-07-23 01:51:27.436218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.530 [2024-07-23 01:51:27.436405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.530 [2024-07-23 01:51:27.436430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.530 qpair failed and we were unable to recover it. 00:30:14.530 [2024-07-23 01:51:27.436582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.530 [2024-07-23 01:51:27.436730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.530 [2024-07-23 01:51:27.436756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.530 qpair failed and we were unable to recover it. 00:30:14.530 [2024-07-23 01:51:27.436939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.530 [2024-07-23 01:51:27.437073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.530 [2024-07-23 01:51:27.437098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.530 qpair failed and we were unable to recover it. 00:30:14.530 [2024-07-23 01:51:27.437256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.530 [2024-07-23 01:51:27.437428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.530 [2024-07-23 01:51:27.437454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.530 qpair failed and we were unable to recover it. 
00:30:14.530 [2024-07-23 01:51:27.437624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.530 [2024-07-23 01:51:27.437798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.530 [2024-07-23 01:51:27.437824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.530 qpair failed and we were unable to recover it. 00:30:14.530 [2024-07-23 01:51:27.437959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.530 [2024-07-23 01:51:27.438135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.530 [2024-07-23 01:51:27.438160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.530 qpair failed and we were unable to recover it. 00:30:14.530 [2024-07-23 01:51:27.438311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.530 [2024-07-23 01:51:27.438471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.530 [2024-07-23 01:51:27.438496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.530 qpair failed and we were unable to recover it. 00:30:14.530 [2024-07-23 01:51:27.438644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.530 [2024-07-23 01:51:27.438802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.530 [2024-07-23 01:51:27.438828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.530 qpair failed and we were unable to recover it. 
00:30:14.530 [2024-07-23 01:51:27.439022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.530 [2024-07-23 01:51:27.439166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.530 [2024-07-23 01:51:27.439191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.530 qpair failed and we were unable to recover it. 00:30:14.530 [2024-07-23 01:51:27.439377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.531 [2024-07-23 01:51:27.439516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.531 [2024-07-23 01:51:27.439541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.531 qpair failed and we were unable to recover it. 00:30:14.531 [2024-07-23 01:51:27.439689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.531 [2024-07-23 01:51:27.439854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.531 [2024-07-23 01:51:27.439881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.531 qpair failed and we were unable to recover it. 00:30:14.531 [2024-07-23 01:51:27.440059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.531 [2024-07-23 01:51:27.440191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.531 [2024-07-23 01:51:27.440216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.531 qpair failed and we were unable to recover it. 
00:30:14.531 [2024-07-23 01:51:27.440376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.531 [2024-07-23 01:51:27.440562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.531 [2024-07-23 01:51:27.440587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.531 qpair failed and we were unable to recover it. 00:30:14.531 [2024-07-23 01:51:27.440726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.531 [2024-07-23 01:51:27.440916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.531 [2024-07-23 01:51:27.440947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.531 qpair failed and we were unable to recover it. 00:30:14.531 [2024-07-23 01:51:27.441122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.531 [2024-07-23 01:51:27.441289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.531 [2024-07-23 01:51:27.441315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.531 qpair failed and we were unable to recover it. 00:30:14.531 [2024-07-23 01:51:27.441476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.531 [2024-07-23 01:51:27.441627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.531 [2024-07-23 01:51:27.441653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.531 qpair failed and we were unable to recover it. 
00:30:14.531 [2024-07-23 01:51:27.441794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.531 [2024-07-23 01:51:27.441992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.531 [2024-07-23 01:51:27.442018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.531 qpair failed and we were unable to recover it. 00:30:14.531 [2024-07-23 01:51:27.442206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.531 [2024-07-23 01:51:27.442343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.531 [2024-07-23 01:51:27.442367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.531 qpair failed and we were unable to recover it. 00:30:14.531 [2024-07-23 01:51:27.442511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.531 [2024-07-23 01:51:27.442647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.531 [2024-07-23 01:51:27.442674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.531 qpair failed and we were unable to recover it. 00:30:14.531 [2024-07-23 01:51:27.442835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.531 [2024-07-23 01:51:27.442967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.531 [2024-07-23 01:51:27.442992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.531 qpair failed and we were unable to recover it. 
00:30:14.531 [2024-07-23 01:51:27.443156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.531 [2024-07-23 01:51:27.443289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.531 [2024-07-23 01:51:27.443316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.531 qpair failed and we were unable to recover it. 00:30:14.531 [2024-07-23 01:51:27.443482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.531 [2024-07-23 01:51:27.443653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.531 [2024-07-23 01:51:27.443680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.531 qpair failed and we were unable to recover it. 00:30:14.531 [2024-07-23 01:51:27.443812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.531 [2024-07-23 01:51:27.443974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.531 [2024-07-23 01:51:27.443999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.531 qpair failed and we were unable to recover it. 00:30:14.531 [2024-07-23 01:51:27.444127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.531 [2024-07-23 01:51:27.444257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.531 [2024-07-23 01:51:27.444293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.531 qpair failed and we were unable to recover it. 
00:30:14.531 [2024-07-23 01:51:27.444447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.531 [2024-07-23 01:51:27.444609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.531 [2024-07-23 01:51:27.444652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.531 qpair failed and we were unable to recover it. 00:30:14.531 [2024-07-23 01:51:27.444795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.531 [2024-07-23 01:51:27.444955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.531 [2024-07-23 01:51:27.444981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.531 qpair failed and we were unable to recover it. 00:30:14.531 [2024-07-23 01:51:27.445120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.531 [2024-07-23 01:51:27.445286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.531 [2024-07-23 01:51:27.445311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.531 qpair failed and we were unable to recover it. 00:30:14.531 [2024-07-23 01:51:27.445470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.531 [2024-07-23 01:51:27.445632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.531 [2024-07-23 01:51:27.445658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.531 qpair failed and we were unable to recover it. 
00:30:14.531 [2024-07-23 01:51:27.445854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.531 [2024-07-23 01:51:27.446000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.531 [2024-07-23 01:51:27.446025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.531 qpair failed and we were unable to recover it. 00:30:14.531 [2024-07-23 01:51:27.446157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.531 [2024-07-23 01:51:27.446319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.531 [2024-07-23 01:51:27.446345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.531 qpair failed and we were unable to recover it. 00:30:14.531 [2024-07-23 01:51:27.446521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.531 [2024-07-23 01:51:27.446657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.531 [2024-07-23 01:51:27.446683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.531 qpair failed and we were unable to recover it. 00:30:14.531 [2024-07-23 01:51:27.446860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.531 [2024-07-23 01:51:27.447038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.531 [2024-07-23 01:51:27.447063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.531 qpair failed and we were unable to recover it. 
00:30:14.531 [2024-07-23 01:51:27.447205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.531 [2024-07-23 01:51:27.447343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.531 [2024-07-23 01:51:27.447369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.531 qpair failed and we were unable to recover it. 00:30:14.531 [2024-07-23 01:51:27.447514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.531 [2024-07-23 01:51:27.447685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.531 [2024-07-23 01:51:27.447711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.531 qpair failed and we were unable to recover it. 00:30:14.531 [2024-07-23 01:51:27.447852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.531 [2024-07-23 01:51:27.448001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.531 [2024-07-23 01:51:27.448026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.531 qpair failed and we were unable to recover it. 00:30:14.531 [2024-07-23 01:51:27.448173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.531 [2024-07-23 01:51:27.448346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.531 [2024-07-23 01:51:27.448371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.531 qpair failed and we were unable to recover it. 
00:30:14.531 [2024-07-23 01:51:27.448503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.531 [2024-07-23 01:51:27.448636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.531 [2024-07-23 01:51:27.448662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.532 qpair failed and we were unable to recover it.
00:30:14.532 [2024-07-23 01:51:27.448826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.532 [2024-07-23 01:51:27.448992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.532 [2024-07-23 01:51:27.449017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.532 qpair failed and we were unable to recover it.
00:30:14.532 [2024-07-23 01:51:27.449208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.532 [2024-07-23 01:51:27.449343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.532 [2024-07-23 01:51:27.449368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.532 qpair failed and we were unable to recover it.
00:30:14.532 [2024-07-23 01:51:27.449515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.532 [2024-07-23 01:51:27.449670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.532 [2024-07-23 01:51:27.449695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.532 qpair failed and we were unable to recover it.
00:30:14.532 [2024-07-23 01:51:27.449888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.532 [2024-07-23 01:51:27.450046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.532 [2024-07-23 01:51:27.450071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.532 qpair failed and we were unable to recover it.
00:30:14.532 [2024-07-23 01:51:27.450251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.532 [2024-07-23 01:51:27.450412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.532 [2024-07-23 01:51:27.450444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.532 qpair failed and we were unable to recover it.
00:30:14.532 [2024-07-23 01:51:27.450580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.532 [2024-07-23 01:51:27.450718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.532 [2024-07-23 01:51:27.450743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.532 qpair failed and we were unable to recover it.
00:30:14.532 [2024-07-23 01:51:27.450905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.532 [2024-07-23 01:51:27.451091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.532 [2024-07-23 01:51:27.451116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.532 qpair failed and we were unable to recover it.
00:30:14.532 [2024-07-23 01:51:27.451259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.532 [2024-07-23 01:51:27.451395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.532 [2024-07-23 01:51:27.451421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.532 qpair failed and we were unable to recover it.
00:30:14.532 [2024-07-23 01:51:27.451593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.532 [2024-07-23 01:51:27.451792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.532 [2024-07-23 01:51:27.451818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.532 qpair failed and we were unable to recover it.
00:30:14.532 [2024-07-23 01:51:27.451955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.532 [2024-07-23 01:51:27.452120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.532 [2024-07-23 01:51:27.452146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.532 qpair failed and we were unable to recover it.
00:30:14.532 [2024-07-23 01:51:27.452314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.532 [2024-07-23 01:51:27.452481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.532 [2024-07-23 01:51:27.452508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.532 qpair failed and we were unable to recover it.
00:30:14.532 [2024-07-23 01:51:27.452647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.532 [2024-07-23 01:51:27.452822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.532 [2024-07-23 01:51:27.452848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.532 qpair failed and we were unable to recover it.
00:30:14.532 [2024-07-23 01:51:27.452985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.532 [2024-07-23 01:51:27.453152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.532 [2024-07-23 01:51:27.453176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.532 qpair failed and we were unable to recover it.
00:30:14.532 [2024-07-23 01:51:27.453333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.532 [2024-07-23 01:51:27.453464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.532 [2024-07-23 01:51:27.453490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.532 qpair failed and we were unable to recover it.
00:30:14.532 [2024-07-23 01:51:27.453665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.532 [2024-07-23 01:51:27.453797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.532 [2024-07-23 01:51:27.453823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.532 qpair failed and we were unable to recover it.
00:30:14.532 [2024-07-23 01:51:27.453961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.532 [2024-07-23 01:51:27.454128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.532 [2024-07-23 01:51:27.454153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.532 qpair failed and we were unable to recover it.
00:30:14.532 [2024-07-23 01:51:27.454342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.532 [2024-07-23 01:51:27.454480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.532 [2024-07-23 01:51:27.454505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.532 qpair failed and we were unable to recover it.
00:30:14.532 [2024-07-23 01:51:27.454655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.532 [2024-07-23 01:51:27.454792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.532 [2024-07-23 01:51:27.454817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.532 qpair failed and we were unable to recover it.
00:30:14.532 [2024-07-23 01:51:27.454955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.532 [2024-07-23 01:51:27.455121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.532 [2024-07-23 01:51:27.455152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.532 qpair failed and we were unable to recover it.
00:30:14.532 [2024-07-23 01:51:27.455322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.532 [2024-07-23 01:51:27.455463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.532 [2024-07-23 01:51:27.455487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.532 qpair failed and we were unable to recover it.
00:30:14.532 [2024-07-23 01:51:27.455630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.532 [2024-07-23 01:51:27.455808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.532 [2024-07-23 01:51:27.455833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.532 qpair failed and we were unable to recover it.
00:30:14.532 [2024-07-23 01:51:27.455962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.532 [2024-07-23 01:51:27.456123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.532 [2024-07-23 01:51:27.456149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.532 qpair failed and we were unable to recover it.
00:30:14.532 [2024-07-23 01:51:27.456336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.532 [2024-07-23 01:51:27.456496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.532 [2024-07-23 01:51:27.456522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.532 qpair failed and we were unable to recover it.
00:30:14.532 [2024-07-23 01:51:27.456693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.532 [2024-07-23 01:51:27.456860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.532 [2024-07-23 01:51:27.456884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.532 qpair failed and we were unable to recover it.
00:30:14.532 [2024-07-23 01:51:27.457015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.532 [2024-07-23 01:51:27.457175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.532 [2024-07-23 01:51:27.457200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.532 qpair failed and we were unable to recover it.
00:30:14.532 [2024-07-23 01:51:27.457350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.532 [2024-07-23 01:51:27.457543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.532 [2024-07-23 01:51:27.457569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.533 qpair failed and we were unable to recover it.
00:30:14.533 [2024-07-23 01:51:27.457736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.533 [2024-07-23 01:51:27.457896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.533 [2024-07-23 01:51:27.457921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.533 qpair failed and we were unable to recover it.
00:30:14.533 [2024-07-23 01:51:27.458114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.533 [2024-07-23 01:51:27.458270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.533 [2024-07-23 01:51:27.458296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.533 qpair failed and we were unable to recover it.
00:30:14.533 [2024-07-23 01:51:27.458475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.533 [2024-07-23 01:51:27.458626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.533 [2024-07-23 01:51:27.458658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.533 qpair failed and we were unable to recover it.
00:30:14.533 [2024-07-23 01:51:27.458827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.533 [2024-07-23 01:51:27.459018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.533 [2024-07-23 01:51:27.459044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.533 qpair failed and we were unable to recover it.
00:30:14.533 [2024-07-23 01:51:27.459205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.533 [2024-07-23 01:51:27.459343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.533 [2024-07-23 01:51:27.459368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.533 qpair failed and we were unable to recover it.
00:30:14.533 [2024-07-23 01:51:27.459530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.533 [2024-07-23 01:51:27.459686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.533 [2024-07-23 01:51:27.459713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.533 qpair failed and we were unable to recover it.
00:30:14.533 [2024-07-23 01:51:27.459889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.533 [2024-07-23 01:51:27.460036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.533 [2024-07-23 01:51:27.460060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.533 qpair failed and we were unable to recover it.
00:30:14.533 [2024-07-23 01:51:27.460241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.533 [2024-07-23 01:51:27.460384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.533 [2024-07-23 01:51:27.460409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.533 qpair failed and we were unable to recover it.
00:30:14.533 [2024-07-23 01:51:27.460546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.533 [2024-07-23 01:51:27.460718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.533 [2024-07-23 01:51:27.460745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.533 qpair failed and we were unable to recover it.
00:30:14.533 [2024-07-23 01:51:27.460912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.533 [2024-07-23 01:51:27.461040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.533 [2024-07-23 01:51:27.461065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.533 qpair failed and we were unable to recover it.
00:30:14.533 [2024-07-23 01:51:27.461245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.533 [2024-07-23 01:51:27.461402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.533 [2024-07-23 01:51:27.461434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.533 qpair failed and we were unable to recover it.
00:30:14.533 [2024-07-23 01:51:27.461603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.533 [2024-07-23 01:51:27.461750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.533 [2024-07-23 01:51:27.461777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.533 qpair failed and we were unable to recover it.
00:30:14.533 [2024-07-23 01:51:27.461938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.533 [2024-07-23 01:51:27.462113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.533 [2024-07-23 01:51:27.462153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.533 qpair failed and we were unable to recover it.
00:30:14.533 [2024-07-23 01:51:27.462327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.533 [2024-07-23 01:51:27.462469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.533 [2024-07-23 01:51:27.462497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.533 qpair failed and we were unable to recover it.
00:30:14.533 [2024-07-23 01:51:27.462674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.533 [2024-07-23 01:51:27.462836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.533 [2024-07-23 01:51:27.462860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.533 qpair failed and we were unable to recover it.
00:30:14.533 [2024-07-23 01:51:27.463021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.533 [2024-07-23 01:51:27.463161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.533 [2024-07-23 01:51:27.463188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.533 qpair failed and we were unable to recover it.
00:30:14.533 [2024-07-23 01:51:27.463348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.533 [2024-07-23 01:51:27.463513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.533 [2024-07-23 01:51:27.463539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.533 qpair failed and we were unable to recover it.
00:30:14.533 [2024-07-23 01:51:27.463686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.533 [2024-07-23 01:51:27.463819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.533 [2024-07-23 01:51:27.463845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.533 qpair failed and we were unable to recover it.
00:30:14.533 [2024-07-23 01:51:27.464035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.533 [2024-07-23 01:51:27.464201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.533 [2024-07-23 01:51:27.464226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.533 qpair failed and we were unable to recover it.
00:30:14.533 [2024-07-23 01:51:27.464371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.533 [2024-07-23 01:51:27.464531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.533 [2024-07-23 01:51:27.464556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.533 qpair failed and we were unable to recover it.
00:30:14.533 [2024-07-23 01:51:27.464729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.533 [2024-07-23 01:51:27.464907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.533 [2024-07-23 01:51:27.464935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.533 qpair failed and we were unable to recover it.
00:30:14.533 [2024-07-23 01:51:27.465098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.533 [2024-07-23 01:51:27.465257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.533 [2024-07-23 01:51:27.465282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.533 qpair failed and we were unable to recover it.
00:30:14.533 [2024-07-23 01:51:27.465449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.533 [2024-07-23 01:51:27.465624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.533 [2024-07-23 01:51:27.465655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.533 qpair failed and we were unable to recover it.
00:30:14.533 [2024-07-23 01:51:27.465798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.533 [2024-07-23 01:51:27.465937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.533 [2024-07-23 01:51:27.465961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.533 qpair failed and we were unable to recover it.
00:30:14.533 [2024-07-23 01:51:27.466089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.533 [2024-07-23 01:51:27.466252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.533 [2024-07-23 01:51:27.466276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.533 qpair failed and we were unable to recover it.
00:30:14.534 [2024-07-23 01:51:27.466436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.534 [2024-07-23 01:51:27.466588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.534 [2024-07-23 01:51:27.466619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.534 qpair failed and we were unable to recover it.
00:30:14.534 [2024-07-23 01:51:27.466788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.534 [2024-07-23 01:51:27.466923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.534 [2024-07-23 01:51:27.466948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.534 qpair failed and we were unable to recover it.
00:30:14.534 [2024-07-23 01:51:27.467100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.534 [2024-07-23 01:51:27.467268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.534 [2024-07-23 01:51:27.467292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.534 qpair failed and we were unable to recover it.
00:30:14.534 [2024-07-23 01:51:27.467461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.534 [2024-07-23 01:51:27.467603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.534 [2024-07-23 01:51:27.467640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.534 qpair failed and we were unable to recover it.
00:30:14.534 [2024-07-23 01:51:27.467792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.534 [2024-07-23 01:51:27.467953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.534 [2024-07-23 01:51:27.467979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.534 qpair failed and we were unable to recover it.
00:30:14.534 [2024-07-23 01:51:27.468110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.534 [2024-07-23 01:51:27.468255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.534 [2024-07-23 01:51:27.468288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.534 qpair failed and we were unable to recover it.
00:30:14.534 [2024-07-23 01:51:27.468466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.534 [2024-07-23 01:51:27.468641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.534 [2024-07-23 01:51:27.468667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.534 qpair failed and we were unable to recover it.
00:30:14.534 [2024-07-23 01:51:27.468795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.534 [2024-07-23 01:51:27.468934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.534 [2024-07-23 01:51:27.468960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.534 qpair failed and we were unable to recover it.
00:30:14.534 [2024-07-23 01:51:27.469128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.534 [2024-07-23 01:51:27.469272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.534 [2024-07-23 01:51:27.469297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.534 qpair failed and we were unable to recover it.
00:30:14.534 [2024-07-23 01:51:27.469432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.534 [2024-07-23 01:51:27.469632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.534 [2024-07-23 01:51:27.469658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.534 qpair failed and we were unable to recover it.
00:30:14.534 [2024-07-23 01:51:27.469820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.534 [2024-07-23 01:51:27.469995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.534 [2024-07-23 01:51:27.470020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.534 qpair failed and we were unable to recover it.
00:30:14.534 [2024-07-23 01:51:27.470180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.534 [2024-07-23 01:51:27.470349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.534 [2024-07-23 01:51:27.470374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.534 qpair failed and we were unable to recover it.
00:30:14.534 [2024-07-23 01:51:27.470539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.534 [2024-07-23 01:51:27.470695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.534 [2024-07-23 01:51:27.470720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.534 qpair failed and we were unable to recover it.
00:30:14.534 [2024-07-23 01:51:27.470877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.534 [2024-07-23 01:51:27.471020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.534 [2024-07-23 01:51:27.471045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.534 qpair failed and we were unable to recover it.
00:30:14.534 [2024-07-23 01:51:27.471193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.534 [2024-07-23 01:51:27.471352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.534 [2024-07-23 01:51:27.471376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.534 qpair failed and we were unable to recover it.
00:30:14.534 [2024-07-23 01:51:27.471519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.534 [2024-07-23 01:51:27.471665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.534 [2024-07-23 01:51:27.471690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.534 qpair failed and we were unable to recover it.
00:30:14.534 [2024-07-23 01:51:27.471859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.534 [2024-07-23 01:51:27.471996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.534 [2024-07-23 01:51:27.472033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.534 qpair failed and we were unable to recover it.
00:30:14.534 [2024-07-23 01:51:27.472164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.534 [2024-07-23 01:51:27.472327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.534 [2024-07-23 01:51:27.472352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.534 qpair failed and we were unable to recover it.
00:30:14.534 [2024-07-23 01:51:27.472509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.534 [2024-07-23 01:51:27.472650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.534 [2024-07-23 01:51:27.472677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.534 qpair failed and we were unable to recover it.
00:30:14.534 [2024-07-23 01:51:27.472847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.534 [2024-07-23 01:51:27.473024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.534 [2024-07-23 01:51:27.473049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420
00:30:14.534 qpair failed and we were unable to recover it.
00:30:14.534 [2024-07-23 01:51:27.473204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.534 [2024-07-23 01:51:27.473397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.534 [2024-07-23 01:51:27.473433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.534 qpair failed and we were unable to recover it. 00:30:14.534 [2024-07-23 01:51:27.473600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.534 [2024-07-23 01:51:27.473764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.534 [2024-07-23 01:51:27.473789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.534 qpair failed and we were unable to recover it. 00:30:14.534 [2024-07-23 01:51:27.473936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.534 [2024-07-23 01:51:27.474101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.534 [2024-07-23 01:51:27.474126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.534 qpair failed and we were unable to recover it. 00:30:14.534 [2024-07-23 01:51:27.474292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.534 [2024-07-23 01:51:27.474461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.534 [2024-07-23 01:51:27.474485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.534 qpair failed and we were unable to recover it. 
00:30:14.534 [2024-07-23 01:51:27.474648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.534 [2024-07-23 01:51:27.474785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.534 [2024-07-23 01:51:27.474810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.534 qpair failed and we were unable to recover it. 00:30:14.534 [2024-07-23 01:51:27.474972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.534 [2024-07-23 01:51:27.475107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.534 [2024-07-23 01:51:27.475132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.534 qpair failed and we were unable to recover it. 00:30:14.534 [2024-07-23 01:51:27.475260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.534 [2024-07-23 01:51:27.475414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.534 [2024-07-23 01:51:27.475439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.534 qpair failed and we were unable to recover it. 00:30:14.534 [2024-07-23 01:51:27.475607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.534 [2024-07-23 01:51:27.475742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.535 [2024-07-23 01:51:27.475768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.535 qpair failed and we were unable to recover it. 
00:30:14.535 [2024-07-23 01:51:27.475934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.535 [2024-07-23 01:51:27.476095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.535 [2024-07-23 01:51:27.476120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.535 qpair failed and we were unable to recover it. 00:30:14.535 [2024-07-23 01:51:27.476268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.535 [2024-07-23 01:51:27.476403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.535 [2024-07-23 01:51:27.476430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.535 qpair failed and we were unable to recover it. 00:30:14.535 [2024-07-23 01:51:27.476612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.535 [2024-07-23 01:51:27.476755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.535 [2024-07-23 01:51:27.476781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.535 qpair failed and we were unable to recover it. 00:30:14.535 [2024-07-23 01:51:27.476973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.535 [2024-07-23 01:51:27.477110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.535 [2024-07-23 01:51:27.477137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.535 qpair failed and we were unable to recover it. 
00:30:14.535 [2024-07-23 01:51:27.477275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.535 [2024-07-23 01:51:27.477448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.535 [2024-07-23 01:51:27.477474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.535 qpair failed and we were unable to recover it. 00:30:14.535 [2024-07-23 01:51:27.477605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.535 [2024-07-23 01:51:27.477783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.535 [2024-07-23 01:51:27.477809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.535 qpair failed and we were unable to recover it. 00:30:14.535 [2024-07-23 01:51:27.478004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.535 [2024-07-23 01:51:27.478162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.535 [2024-07-23 01:51:27.478188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.535 qpair failed and we were unable to recover it. 00:30:14.535 [2024-07-23 01:51:27.478321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.535 [2024-07-23 01:51:27.478470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.535 [2024-07-23 01:51:27.478496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.535 qpair failed and we were unable to recover it. 
00:30:14.535 [2024-07-23 01:51:27.478648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.535 [2024-07-23 01:51:27.478810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.535 [2024-07-23 01:51:27.478835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.535 qpair failed and we were unable to recover it. 00:30:14.535 [2024-07-23 01:51:27.479024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.535 [2024-07-23 01:51:27.479184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.535 [2024-07-23 01:51:27.479208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.535 qpair failed and we were unable to recover it. 00:30:14.535 [2024-07-23 01:51:27.479407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.535 [2024-07-23 01:51:27.479567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.535 [2024-07-23 01:51:27.479591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.535 qpair failed and we were unable to recover it. 00:30:14.535 [2024-07-23 01:51:27.479773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.535 [2024-07-23 01:51:27.479938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.535 [2024-07-23 01:51:27.479963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.535 qpair failed and we were unable to recover it. 
00:30:14.535 [2024-07-23 01:51:27.480112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.535 [2024-07-23 01:51:27.480270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.535 [2024-07-23 01:51:27.480294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.535 qpair failed and we were unable to recover it. 00:30:14.535 [2024-07-23 01:51:27.480467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.535 [2024-07-23 01:51:27.480656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.535 [2024-07-23 01:51:27.480681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.535 qpair failed and we were unable to recover it. 00:30:14.535 [2024-07-23 01:51:27.480846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.535 [2024-07-23 01:51:27.480994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.535 [2024-07-23 01:51:27.481019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.535 qpair failed and we were unable to recover it. 00:30:14.535 [2024-07-23 01:51:27.481152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.535 [2024-07-23 01:51:27.481318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.535 [2024-07-23 01:51:27.481343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.535 qpair failed and we were unable to recover it. 
00:30:14.535 [2024-07-23 01:51:27.481480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.535 [2024-07-23 01:51:27.481617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.535 [2024-07-23 01:51:27.481643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.535 qpair failed and we were unable to recover it. 00:30:14.535 [2024-07-23 01:51:27.481821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.535 [2024-07-23 01:51:27.481961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.535 [2024-07-23 01:51:27.481986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.535 qpair failed and we were unable to recover it. 00:30:14.535 [2024-07-23 01:51:27.482127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.535 [2024-07-23 01:51:27.482299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.535 [2024-07-23 01:51:27.482324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.535 qpair failed and we were unable to recover it. 00:30:14.535 [2024-07-23 01:51:27.482487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.535 [2024-07-23 01:51:27.482640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.535 [2024-07-23 01:51:27.482667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.535 qpair failed and we were unable to recover it. 
00:30:14.535 [2024-07-23 01:51:27.482852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.535 [2024-07-23 01:51:27.483002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.535 [2024-07-23 01:51:27.483027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.535 qpair failed and we were unable to recover it. 00:30:14.535 [2024-07-23 01:51:27.483203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.535 [2024-07-23 01:51:27.483366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.535 [2024-07-23 01:51:27.483392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.535 qpair failed and we were unable to recover it. 00:30:14.535 [2024-07-23 01:51:27.483524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.535 [2024-07-23 01:51:27.483696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.535 [2024-07-23 01:51:27.483722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.535 qpair failed and we were unable to recover it. 00:30:14.535 [2024-07-23 01:51:27.483884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.535 [2024-07-23 01:51:27.484025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.535 [2024-07-23 01:51:27.484053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.535 qpair failed and we were unable to recover it. 
00:30:14.535 [2024-07-23 01:51:27.484228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.535 [2024-07-23 01:51:27.484363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.535 [2024-07-23 01:51:27.484388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.535 qpair failed and we were unable to recover it. 00:30:14.535 [2024-07-23 01:51:27.484521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.535 [2024-07-23 01:51:27.484676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.535 [2024-07-23 01:51:27.484702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.535 qpair failed and we were unable to recover it. 00:30:14.535 [2024-07-23 01:51:27.484844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.535 [2024-07-23 01:51:27.484984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.535 [2024-07-23 01:51:27.485010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.535 qpair failed and we were unable to recover it. 00:30:14.535 [2024-07-23 01:51:27.485145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.536 [2024-07-23 01:51:27.485288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.536 [2024-07-23 01:51:27.485314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.536 qpair failed and we were unable to recover it. 
00:30:14.536 [2024-07-23 01:51:27.485487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.536 [2024-07-23 01:51:27.485649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.536 [2024-07-23 01:51:27.485675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.536 qpair failed and we were unable to recover it. 00:30:14.536 [2024-07-23 01:51:27.485834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.536 [2024-07-23 01:51:27.486002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.536 [2024-07-23 01:51:27.486027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.536 qpair failed and we were unable to recover it. 00:30:14.536 [2024-07-23 01:51:27.486203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.536 [2024-07-23 01:51:27.486360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.536 [2024-07-23 01:51:27.486384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.536 qpair failed and we were unable to recover it. 00:30:14.536 [2024-07-23 01:51:27.486530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.536 [2024-07-23 01:51:27.486691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.536 [2024-07-23 01:51:27.486717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.536 qpair failed and we were unable to recover it. 
00:30:14.536 [2024-07-23 01:51:27.486862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.536 [2024-07-23 01:51:27.487000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.536 [2024-07-23 01:51:27.487025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.536 qpair failed and we were unable to recover it. 00:30:14.536 [2024-07-23 01:51:27.487185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.536 [2024-07-23 01:51:27.487350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.536 [2024-07-23 01:51:27.487375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.536 qpair failed and we were unable to recover it. 00:30:14.536 [2024-07-23 01:51:27.487537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.536 [2024-07-23 01:51:27.487723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.536 [2024-07-23 01:51:27.487749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.536 qpair failed and we were unable to recover it. 00:30:14.536 [2024-07-23 01:51:27.487898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.536 [2024-07-23 01:51:27.488066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.536 [2024-07-23 01:51:27.488091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.536 qpair failed and we were unable to recover it. 
00:30:14.536 [2024-07-23 01:51:27.488256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.536 [2024-07-23 01:51:27.488441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.536 [2024-07-23 01:51:27.488473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.536 qpair failed and we were unable to recover it. 00:30:14.536 [2024-07-23 01:51:27.488627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.536 [2024-07-23 01:51:27.488782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.536 [2024-07-23 01:51:27.488807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.536 qpair failed and we were unable to recover it. 00:30:14.536 [2024-07-23 01:51:27.488973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.536 [2024-07-23 01:51:27.489105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.536 [2024-07-23 01:51:27.489130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.536 qpair failed and we were unable to recover it. 00:30:14.536 [2024-07-23 01:51:27.489328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.536 [2024-07-23 01:51:27.489490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.536 [2024-07-23 01:51:27.489514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.536 qpair failed and we were unable to recover it. 
00:30:14.536 [2024-07-23 01:51:27.489697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.536 [2024-07-23 01:51:27.489847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.536 [2024-07-23 01:51:27.489872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.536 qpair failed and we were unable to recover it. 00:30:14.536 [2024-07-23 01:51:27.490058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.536 [2024-07-23 01:51:27.490192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.536 [2024-07-23 01:51:27.490217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.536 qpair failed and we were unable to recover it. 00:30:14.536 [2024-07-23 01:51:27.490364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.536 [2024-07-23 01:51:27.490523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.536 [2024-07-23 01:51:27.490548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.536 qpair failed and we were unable to recover it. 00:30:14.536 [2024-07-23 01:51:27.490694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.536 [2024-07-23 01:51:27.490859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.536 [2024-07-23 01:51:27.490885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.536 qpair failed and we were unable to recover it. 
00:30:14.536 [2024-07-23 01:51:27.491031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.536 [2024-07-23 01:51:27.491223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.536 [2024-07-23 01:51:27.491248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.536 qpair failed and we were unable to recover it. 00:30:14.536 [2024-07-23 01:51:27.491385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.536 [2024-07-23 01:51:27.491518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.536 [2024-07-23 01:51:27.491542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.536 qpair failed and we were unable to recover it. 00:30:14.536 [2024-07-23 01:51:27.491710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.536 [2024-07-23 01:51:27.491876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.536 [2024-07-23 01:51:27.491903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.536 qpair failed and we were unable to recover it. 00:30:14.536 [2024-07-23 01:51:27.492071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.536 [2024-07-23 01:51:27.492219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.536 [2024-07-23 01:51:27.492244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.536 qpair failed and we were unable to recover it. 
00:30:14.536 [2024-07-23 01:51:27.492383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.536 [2024-07-23 01:51:27.492519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.536 [2024-07-23 01:51:27.492544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.536 qpair failed and we were unable to recover it. 00:30:14.536 [2024-07-23 01:51:27.492725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.536 [2024-07-23 01:51:27.492875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.536 [2024-07-23 01:51:27.492900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.536 qpair failed and we were unable to recover it. 00:30:14.536 [2024-07-23 01:51:27.493078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.536 [2024-07-23 01:51:27.493247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.536 [2024-07-23 01:51:27.493272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.536 qpair failed and we were unable to recover it. 00:30:14.536 [2024-07-23 01:51:27.493433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.536 [2024-07-23 01:51:27.493560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.536 [2024-07-23 01:51:27.493585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.536 qpair failed and we were unable to recover it. 
00:30:14.536 [2024-07-23 01:51:27.493759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.536 [2024-07-23 01:51:27.493895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.536 [2024-07-23 01:51:27.493928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.536 qpair failed and we were unable to recover it. 00:30:14.536 [2024-07-23 01:51:27.494081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.536 [2024-07-23 01:51:27.494257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.536 [2024-07-23 01:51:27.494282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.536 qpair failed and we were unable to recover it. 00:30:14.536 [2024-07-23 01:51:27.494445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.536 [2024-07-23 01:51:27.494586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.536 [2024-07-23 01:51:27.494629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.537 qpair failed and we were unable to recover it. 00:30:14.537 [2024-07-23 01:51:27.494805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.537 [2024-07-23 01:51:27.494955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.537 [2024-07-23 01:51:27.494981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.537 qpair failed and we were unable to recover it. 
00:30:14.537 [2024-07-23 01:51:27.495148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.537 [2024-07-23 01:51:27.495281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.537 [2024-07-23 01:51:27.495306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.537 qpair failed and we were unable to recover it. 00:30:14.537 [2024-07-23 01:51:27.495480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.537 [2024-07-23 01:51:27.495640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.537 [2024-07-23 01:51:27.495666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.537 qpair failed and we were unable to recover it. 00:30:14.537 [2024-07-23 01:51:27.495818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.537 [2024-07-23 01:51:27.495950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.537 [2024-07-23 01:51:27.495975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.537 qpair failed and we were unable to recover it. 00:30:14.537 [2024-07-23 01:51:27.496131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.537 [2024-07-23 01:51:27.496307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.537 [2024-07-23 01:51:27.496332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.537 qpair failed and we were unable to recover it. 
00:30:14.537 [2024-07-23 01:51:27.496498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.537 [2024-07-23 01:51:27.496651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.537 [2024-07-23 01:51:27.496677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.537 qpair failed and we were unable to recover it. 00:30:14.537 [2024-07-23 01:51:27.496821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.537 [2024-07-23 01:51:27.496995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.537 [2024-07-23 01:51:27.497020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.537 qpair failed and we were unable to recover it. 00:30:14.537 [2024-07-23 01:51:27.497176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.537 [2024-07-23 01:51:27.497345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.537 [2024-07-23 01:51:27.497369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.537 qpair failed and we were unable to recover it. 00:30:14.537 [2024-07-23 01:51:27.497508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.537 [2024-07-23 01:51:27.497647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.537 [2024-07-23 01:51:27.497673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.537 qpair failed and we were unable to recover it. 
00:30:14.537 [2024-07-23 01:51:27.497841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.537 [2024-07-23 01:51:27.497971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.537 [2024-07-23 01:51:27.497996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.537 qpair failed and we were unable to recover it. 00:30:14.537 [2024-07-23 01:51:27.498124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.537 [2024-07-23 01:51:27.498286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.537 [2024-07-23 01:51:27.498311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.537 qpair failed and we were unable to recover it. 00:30:14.537 [2024-07-23 01:51:27.498446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.537 [2024-07-23 01:51:27.498588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.537 [2024-07-23 01:51:27.498630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.537 qpair failed and we were unable to recover it. 00:30:14.537 [2024-07-23 01:51:27.498791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.537 [2024-07-23 01:51:27.498956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.537 [2024-07-23 01:51:27.498990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.537 qpair failed and we were unable to recover it. 
00:30:14.537 [2024-07-23 01:51:27.499182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.537 [2024-07-23 01:51:27.499338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.537 [2024-07-23 01:51:27.499363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.537 qpair failed and we were unable to recover it. 00:30:14.537 [2024-07-23 01:51:27.499521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.537 [2024-07-23 01:51:27.499653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.537 [2024-07-23 01:51:27.499679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.537 qpair failed and we were unable to recover it. 00:30:14.537 [2024-07-23 01:51:27.499818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.537 [2024-07-23 01:51:27.499957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.537 [2024-07-23 01:51:27.499982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.537 qpair failed and we were unable to recover it. 00:30:14.537 [2024-07-23 01:51:27.500114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.537 [2024-07-23 01:51:27.500253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.537 [2024-07-23 01:51:27.500278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.537 qpair failed and we were unable to recover it. 
00:30:14.537 [2024-07-23 01:51:27.500439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.537 [2024-07-23 01:51:27.500638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.537 [2024-07-23 01:51:27.500664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.537 qpair failed and we were unable to recover it. 00:30:14.537 [2024-07-23 01:51:27.500828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.537 [2024-07-23 01:51:27.500992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.537 [2024-07-23 01:51:27.501018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.537 qpair failed and we were unable to recover it. 00:30:14.537 [2024-07-23 01:51:27.501160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.537 [2024-07-23 01:51:27.501292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.537 [2024-07-23 01:51:27.501316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.537 qpair failed and we were unable to recover it. 00:30:14.537 [2024-07-23 01:51:27.501473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.537 [2024-07-23 01:51:27.501631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.537 [2024-07-23 01:51:27.501656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.537 qpair failed and we were unable to recover it. 
00:30:14.537 [2024-07-23 01:51:27.501808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.537 [2024-07-23 01:51:27.501997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.537 [2024-07-23 01:51:27.502022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.537 qpair failed and we were unable to recover it. 00:30:14.537 [2024-07-23 01:51:27.502182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.537 [2024-07-23 01:51:27.502316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.537 [2024-07-23 01:51:27.502341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.537 qpair failed and we were unable to recover it. 00:30:14.537 [2024-07-23 01:51:27.502537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.538 [2024-07-23 01:51:27.502689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.538 [2024-07-23 01:51:27.502714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.538 qpair failed and we were unable to recover it. 00:30:14.538 [2024-07-23 01:51:27.502850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.538 [2024-07-23 01:51:27.502988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.538 [2024-07-23 01:51:27.503013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.538 qpair failed and we were unable to recover it. 
00:30:14.538 [2024-07-23 01:51:27.503144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.538 [2024-07-23 01:51:27.503317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.538 [2024-07-23 01:51:27.503352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.538 qpair failed and we were unable to recover it. 00:30:14.538 [2024-07-23 01:51:27.503498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.538 [2024-07-23 01:51:27.503658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.538 [2024-07-23 01:51:27.503684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.538 qpair failed and we were unable to recover it. 00:30:14.538 [2024-07-23 01:51:27.503835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.538 [2024-07-23 01:51:27.503990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.538 [2024-07-23 01:51:27.504015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.538 qpair failed and we were unable to recover it. 00:30:14.538 [2024-07-23 01:51:27.504180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.538 [2024-07-23 01:51:27.504353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.538 [2024-07-23 01:51:27.504377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.538 qpair failed and we were unable to recover it. 
00:30:14.538 [2024-07-23 01:51:27.504514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.538 [2024-07-23 01:51:27.504652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.538 [2024-07-23 01:51:27.504678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.538 qpair failed and we were unable to recover it. 00:30:14.538 [2024-07-23 01:51:27.504815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.538 [2024-07-23 01:51:27.504974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.538 [2024-07-23 01:51:27.504999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.538 qpair failed and we were unable to recover it. 00:30:14.538 [2024-07-23 01:51:27.505161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.538 [2024-07-23 01:51:27.505320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.538 [2024-07-23 01:51:27.505345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.538 qpair failed and we were unable to recover it. 00:30:14.538 [2024-07-23 01:51:27.505478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.538 [2024-07-23 01:51:27.505611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.538 [2024-07-23 01:51:27.505640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.538 qpair failed and we were unable to recover it. 
00:30:14.538 [2024-07-23 01:51:27.505805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.538 [2024-07-23 01:51:27.505932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.538 [2024-07-23 01:51:27.505957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.538 qpair failed and we were unable to recover it. 00:30:14.538 [2024-07-23 01:51:27.506120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.538 [2024-07-23 01:51:27.506262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.538 [2024-07-23 01:51:27.506287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.538 qpair failed and we were unable to recover it. 00:30:14.538 [2024-07-23 01:51:27.506433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.538 [2024-07-23 01:51:27.506585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.538 [2024-07-23 01:51:27.506610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.538 qpair failed and we were unable to recover it. 00:30:14.538 [2024-07-23 01:51:27.506783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.538 [2024-07-23 01:51:27.506919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.538 [2024-07-23 01:51:27.506943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.538 qpair failed and we were unable to recover it. 
00:30:14.538 [2024-07-23 01:51:27.507110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.538 [2024-07-23 01:51:27.507255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.538 [2024-07-23 01:51:27.507282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.538 qpair failed and we were unable to recover it. 00:30:14.538 [2024-07-23 01:51:27.507450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.538 [2024-07-23 01:51:27.507604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.538 [2024-07-23 01:51:27.507641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.538 qpair failed and we were unable to recover it. 00:30:14.538 [2024-07-23 01:51:27.507807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.538 [2024-07-23 01:51:27.507965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.538 [2024-07-23 01:51:27.507990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.538 qpair failed and we were unable to recover it. 00:30:14.538 [2024-07-23 01:51:27.508127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.538 [2024-07-23 01:51:27.508284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.538 [2024-07-23 01:51:27.508309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.538 qpair failed and we were unable to recover it. 
00:30:14.538 [2024-07-23 01:51:27.508442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.538 [2024-07-23 01:51:27.508606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.538 [2024-07-23 01:51:27.508639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.538 qpair failed and we were unable to recover it. 00:30:14.538 [2024-07-23 01:51:27.508779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.538 [2024-07-23 01:51:27.508931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.538 [2024-07-23 01:51:27.508957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.538 qpair failed and we were unable to recover it. 00:30:14.538 [2024-07-23 01:51:27.509105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.538 [2024-07-23 01:51:27.509236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.538 [2024-07-23 01:51:27.509260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.538 qpair failed and we were unable to recover it. 00:30:14.538 [2024-07-23 01:51:27.509425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.538 [2024-07-23 01:51:27.509559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.538 [2024-07-23 01:51:27.509586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fceb8000b90 with addr=10.0.0.2, port=4420 00:30:14.538 qpair failed and we were unable to recover it. 
00:30:14.538 [2024-07-23 01:51:27.509805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.538 [2024-07-23 01:51:27.509959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.538 [2024-07-23 01:51:27.509993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.538 qpair failed and we were unable to recover it. 00:30:14.538 [2024-07-23 01:51:27.510136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.538 [2024-07-23 01:51:27.510289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.538 [2024-07-23 01:51:27.510313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.538 qpair failed and we were unable to recover it. 00:30:14.538 [2024-07-23 01:51:27.510467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.538 [2024-07-23 01:51:27.510625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.538 [2024-07-23 01:51:27.510651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.538 qpair failed and we were unable to recover it. 00:30:14.538 [2024-07-23 01:51:27.510803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.538 [2024-07-23 01:51:27.510937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.538 [2024-07-23 01:51:27.510967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.538 qpair failed and we were unable to recover it. 
00:30:14.538 [2024-07-23 01:51:27.511117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.538 [2024-07-23 01:51:27.511254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.538 [2024-07-23 01:51:27.511280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.538 qpair failed and we were unable to recover it. 00:30:14.538 [2024-07-23 01:51:27.511415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.538 [2024-07-23 01:51:27.511574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.539 [2024-07-23 01:51:27.511599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.539 qpair failed and we were unable to recover it. 00:30:14.539 [2024-07-23 01:51:27.511759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.539 [2024-07-23 01:51:27.511905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.539 [2024-07-23 01:51:27.511930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.539 qpair failed and we were unable to recover it. 00:30:14.539 [2024-07-23 01:51:27.512095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.539 [2024-07-23 01:51:27.512232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.539 [2024-07-23 01:51:27.512258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.539 qpair failed and we were unable to recover it. 
00:30:14.539 [2024-07-23 01:51:27.512393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.539 [2024-07-23 01:51:27.512521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.539 [2024-07-23 01:51:27.512546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.539 qpair failed and we were unable to recover it. 00:30:14.539 [2024-07-23 01:51:27.512683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.539 [2024-07-23 01:51:27.512812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.539 [2024-07-23 01:51:27.512837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.539 qpair failed and we were unable to recover it. 00:30:14.539 [2024-07-23 01:51:27.512988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.539 [2024-07-23 01:51:27.513126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.539 [2024-07-23 01:51:27.513156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.539 qpair failed and we were unable to recover it. 00:30:14.539 [2024-07-23 01:51:27.513295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.539 [2024-07-23 01:51:27.513434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.539 [2024-07-23 01:51:27.513459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.539 qpair failed and we were unable to recover it. 
00:30:14.539 [2024-07-23 01:51:27.513653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.539 [2024-07-23 01:51:27.513816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.539 [2024-07-23 01:51:27.513841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.539 qpair failed and we were unable to recover it. 00:30:14.539 [2024-07-23 01:51:27.513996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.539 [2024-07-23 01:51:27.514131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.539 [2024-07-23 01:51:27.514158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.539 qpair failed and we were unable to recover it. 00:30:14.539 [2024-07-23 01:51:27.514294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.539 [2024-07-23 01:51:27.514433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.539 [2024-07-23 01:51:27.514457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.539 qpair failed and we were unable to recover it. 00:30:14.539 [2024-07-23 01:51:27.514592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.539 [2024-07-23 01:51:27.514733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.539 [2024-07-23 01:51:27.514759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.539 qpair failed and we were unable to recover it. 
00:30:14.539 [2024-07-23 01:51:27.514928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.539 [2024-07-23 01:51:27.515084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.539 [2024-07-23 01:51:27.515109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.539 qpair failed and we were unable to recover it. 00:30:14.539 [2024-07-23 01:51:27.515252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.539 [2024-07-23 01:51:27.515385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.539 [2024-07-23 01:51:27.515410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.539 qpair failed and we were unable to recover it. 00:30:14.539 [2024-07-23 01:51:27.515544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.539 [2024-07-23 01:51:27.515694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.539 [2024-07-23 01:51:27.515721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.539 qpair failed and we were unable to recover it. 00:30:14.539 [2024-07-23 01:51:27.515878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.539 [2024-07-23 01:51:27.516074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.539 [2024-07-23 01:51:27.516098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.539 qpair failed and we were unable to recover it. 
00:30:14.539 [2024-07-23 01:51:27.516260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.539 [2024-07-23 01:51:27.516394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.539 [2024-07-23 01:51:27.516424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.539 qpair failed and we were unable to recover it. 00:30:14.539 [2024-07-23 01:51:27.516566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.539 [2024-07-23 01:51:27.516725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.539 [2024-07-23 01:51:27.516751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.539 qpair failed and we were unable to recover it. 00:30:14.539 [2024-07-23 01:51:27.516885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.539 [2024-07-23 01:51:27.517017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.539 [2024-07-23 01:51:27.517042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.539 qpair failed and we were unable to recover it. 00:30:14.539 [2024-07-23 01:51:27.517235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.539 [2024-07-23 01:51:27.517364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.539 [2024-07-23 01:51:27.517389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.539 qpair failed and we were unable to recover it. 
00:30:14.539 [2024-07-23 01:51:27.517543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.539 [2024-07-23 01:51:27.517692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.539 [2024-07-23 01:51:27.517717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.539 qpair failed and we were unable to recover it. 00:30:14.539 [2024-07-23 01:51:27.517854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.539 [2024-07-23 01:51:27.518043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.539 [2024-07-23 01:51:27.518068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.539 qpair failed and we were unable to recover it. 00:30:14.539 [2024-07-23 01:51:27.518223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.539 [2024-07-23 01:51:27.518371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.539 [2024-07-23 01:51:27.518395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.539 qpair failed and we were unable to recover it. 00:30:14.539 [2024-07-23 01:51:27.518527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.539 [2024-07-23 01:51:27.518684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.539 [2024-07-23 01:51:27.518709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.539 qpair failed and we were unable to recover it. 
00:30:14.539 [2024-07-23 01:51:27.518841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.539 [2024-07-23 01:51:27.518978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.539 [2024-07-23 01:51:27.519004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.539 qpair failed and we were unable to recover it. 00:30:14.539 [2024-07-23 01:51:27.519148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.539 [2024-07-23 01:51:27.519296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.539 [2024-07-23 01:51:27.519321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.539 qpair failed and we were unable to recover it. 00:30:14.539 [2024-07-23 01:51:27.519480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.539 [2024-07-23 01:51:27.519608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.539 [2024-07-23 01:51:27.519637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.539 qpair failed and we were unable to recover it. 00:30:14.539 [2024-07-23 01:51:27.519782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.539 [2024-07-23 01:51:27.519923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.539 [2024-07-23 01:51:27.519947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.539 qpair failed and we were unable to recover it. 
00:30:14.540 [2024-07-23 01:51:27.526504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.540 [2024-07-23 01:51:27.526651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.540 [2024-07-23 01:51:27.526687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.540 qpair failed and we were unable to recover it.
00:30:14.540 [2024-07-23 01:51:27.527093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.540 [2024-07-23 01:51:27.527221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.540 [2024-07-23 01:51:27.527245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.540 qpair failed and we were unable to recover it.
00:30:14.540 [2024-07-23 01:51:27.527412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.540 [2024-07-23 01:51:27.527577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.540 [2024-07-23 01:51:27.527603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.540 qpair failed and we were unable to recover it.
00:30:14.540 [2024-07-23 01:51:27.527748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.540 01:51:27 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:30:14.540 [2024-07-23 01:51:27.527886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.540 [2024-07-23 01:51:27.527910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.540 qpair failed and we were unable to recover it.
00:30:14.540 01:51:27 -- common/autotest_common.sh@852 -- # return 0
00:30:14.540 [2024-07-23 01:51:27.528068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.540 01:51:27 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:30:14.540 [2024-07-23 01:51:27.528215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.540 [2024-07-23 01:51:27.528240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.540 qpair failed and we were unable to recover it.
00:30:14.540 01:51:27 -- common/autotest_common.sh@718 -- # xtrace_disable
00:30:14.540 [2024-07-23 01:51:27.528370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.540 01:51:27 -- common/autotest_common.sh@10 -- # set +x
00:30:14.540 [2024-07-23 01:51:27.528554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.540 [2024-07-23 01:51:27.528579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.540 qpair failed and we were unable to recover it.
00:30:14.540 [2024-07-23 01:51:27.528730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.540 [2024-07-23 01:51:27.528880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.540 [2024-07-23 01:51:27.528905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.540 qpair failed and we were unable to recover it.
00:30:14.540 [2024-07-23 01:51:27.529046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.540 [2024-07-23 01:51:27.529220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.540 [2024-07-23 01:51:27.529245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.540 qpair failed and we were unable to recover it. 00:30:14.541 [2024-07-23 01:51:27.529393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.541 [2024-07-23 01:51:27.529556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.541 [2024-07-23 01:51:27.529581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.541 qpair failed and we were unable to recover it. 00:30:14.541 [2024-07-23 01:51:27.529741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.541 [2024-07-23 01:51:27.529878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.541 [2024-07-23 01:51:27.529903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.541 qpair failed and we were unable to recover it. 00:30:14.541 [2024-07-23 01:51:27.530062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.541 [2024-07-23 01:51:27.530197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.541 [2024-07-23 01:51:27.530223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.541 qpair failed and we were unable to recover it. 
00:30:14.541 [2024-07-23 01:51:27.530361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.541 [2024-07-23 01:51:27.530510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.541 [2024-07-23 01:51:27.530535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.541 qpair failed and we were unable to recover it. 00:30:14.541 [2024-07-23 01:51:27.530694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.541 [2024-07-23 01:51:27.530856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.541 [2024-07-23 01:51:27.530881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.541 qpair failed and we were unable to recover it. 00:30:14.541 [2024-07-23 01:51:27.531048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.541 [2024-07-23 01:51:27.531194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.541 [2024-07-23 01:51:27.531219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.541 qpair failed and we were unable to recover it. 00:30:14.541 [2024-07-23 01:51:27.531365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.541 [2024-07-23 01:51:27.531507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.541 [2024-07-23 01:51:27.531532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.541 qpair failed and we were unable to recover it. 
00:30:14.541 [2024-07-23 01:51:27.531675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.541 [2024-07-23 01:51:27.531841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.541 [2024-07-23 01:51:27.531865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.541 qpair failed and we were unable to recover it. 00:30:14.541 [2024-07-23 01:51:27.532021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.541 [2024-07-23 01:51:27.532170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.541 [2024-07-23 01:51:27.532196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.541 qpair failed and we were unable to recover it. 00:30:14.541 [2024-07-23 01:51:27.532363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.541 [2024-07-23 01:51:27.532522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.541 [2024-07-23 01:51:27.532547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.541 qpair failed and we were unable to recover it. 00:30:14.541 [2024-07-23 01:51:27.532710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.541 [2024-07-23 01:51:27.532860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.541 [2024-07-23 01:51:27.532885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.541 qpair failed and we were unable to recover it. 
00:30:14.541 [2024-07-23 01:51:27.533064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.541 [2024-07-23 01:51:27.533222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.541 [2024-07-23 01:51:27.533247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.541 qpair failed and we were unable to recover it. 00:30:14.541 [2024-07-23 01:51:27.533385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.541 [2024-07-23 01:51:27.533551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.541 [2024-07-23 01:51:27.533577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.541 qpair failed and we were unable to recover it. 00:30:14.541 [2024-07-23 01:51:27.533760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.541 [2024-07-23 01:51:27.533895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.541 [2024-07-23 01:51:27.533919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.541 qpair failed and we were unable to recover it. 00:30:14.541 [2024-07-23 01:51:27.534052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.541 [2024-07-23 01:51:27.534187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.541 [2024-07-23 01:51:27.534214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.541 qpair failed and we were unable to recover it. 
00:30:14.541 [2024-07-23 01:51:27.534377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.541 [2024-07-23 01:51:27.534533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.541 [2024-07-23 01:51:27.534559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.541 qpair failed and we were unable to recover it. 00:30:14.541 [2024-07-23 01:51:27.534707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.541 [2024-07-23 01:51:27.534874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.541 [2024-07-23 01:51:27.534901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.541 qpair failed and we were unable to recover it. 00:30:14.541 [2024-07-23 01:51:27.535089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.541 [2024-07-23 01:51:27.535251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.541 [2024-07-23 01:51:27.535275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.541 qpair failed and we were unable to recover it. 00:30:14.541 [2024-07-23 01:51:27.535406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.541 [2024-07-23 01:51:27.535593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.541 [2024-07-23 01:51:27.535624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.541 qpair failed and we were unable to recover it. 
00:30:14.541 [2024-07-23 01:51:27.535755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.541 [2024-07-23 01:51:27.535889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.541 [2024-07-23 01:51:27.535926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.541 qpair failed and we were unable to recover it. 00:30:14.541 [2024-07-23 01:51:27.536073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.541 [2024-07-23 01:51:27.536234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.541 [2024-07-23 01:51:27.536259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.541 qpair failed and we were unable to recover it. 00:30:14.541 [2024-07-23 01:51:27.536402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.541 [2024-07-23 01:51:27.536562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.541 [2024-07-23 01:51:27.536587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.541 qpair failed and we were unable to recover it. 00:30:14.541 [2024-07-23 01:51:27.536757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.541 [2024-07-23 01:51:27.536893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.541 [2024-07-23 01:51:27.536919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.541 qpair failed and we were unable to recover it. 
00:30:14.541 [2024-07-23 01:51:27.537088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.541 [2024-07-23 01:51:27.537245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.541 [2024-07-23 01:51:27.537270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.541 qpair failed and we were unable to recover it. 00:30:14.541 [2024-07-23 01:51:27.537432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.541 [2024-07-23 01:51:27.537573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.541 [2024-07-23 01:51:27.537598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.541 qpair failed and we were unable to recover it. 00:30:14.541 [2024-07-23 01:51:27.537794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.541 [2024-07-23 01:51:27.537929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.541 [2024-07-23 01:51:27.537956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.541 qpair failed and we were unable to recover it. 00:30:14.541 [2024-07-23 01:51:27.538091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.541 [2024-07-23 01:51:27.538247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.541 [2024-07-23 01:51:27.538273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.541 qpair failed and we were unable to recover it. 
00:30:14.542 [2024-07-23 01:51:27.538433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.542 [2024-07-23 01:51:27.538593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.542 [2024-07-23 01:51:27.538627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.542 qpair failed and we were unable to recover it. 00:30:14.542 [2024-07-23 01:51:27.538766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.542 [2024-07-23 01:51:27.538895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.542 [2024-07-23 01:51:27.538920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.542 qpair failed and we were unable to recover it. 00:30:14.542 [2024-07-23 01:51:27.539099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.542 [2024-07-23 01:51:27.539259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.542 [2024-07-23 01:51:27.539284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.542 qpair failed and we were unable to recover it. 00:30:14.542 [2024-07-23 01:51:27.539432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.542 [2024-07-23 01:51:27.539592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.542 [2024-07-23 01:51:27.539625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.542 qpair failed and we were unable to recover it. 
00:30:14.542 [2024-07-23 01:51:27.539795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.542 [2024-07-23 01:51:27.539922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.542 [2024-07-23 01:51:27.539947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.542 qpair failed and we were unable to recover it. 00:30:14.542 [2024-07-23 01:51:27.540078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.542 [2024-07-23 01:51:27.540215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.542 [2024-07-23 01:51:27.540240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.542 qpair failed and we were unable to recover it. 00:30:14.542 [2024-07-23 01:51:27.540399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.542 [2024-07-23 01:51:27.540539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.542 [2024-07-23 01:51:27.540565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.542 qpair failed and we were unable to recover it. 00:30:14.542 [2024-07-23 01:51:27.540711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.542 [2024-07-23 01:51:27.540876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.542 [2024-07-23 01:51:27.540902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.542 qpair failed and we were unable to recover it. 
00:30:14.542 [2024-07-23 01:51:27.541052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.542 [2024-07-23 01:51:27.541240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.542 [2024-07-23 01:51:27.541265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.542 qpair failed and we were unable to recover it. 00:30:14.542 [2024-07-23 01:51:27.541399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.542 [2024-07-23 01:51:27.541562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.542 [2024-07-23 01:51:27.541588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.542 qpair failed and we were unable to recover it. 00:30:14.542 [2024-07-23 01:51:27.541737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.542 [2024-07-23 01:51:27.541872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.542 [2024-07-23 01:51:27.541898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.542 qpair failed and we were unable to recover it. 00:30:14.542 [2024-07-23 01:51:27.542069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.542 [2024-07-23 01:51:27.542236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.542 [2024-07-23 01:51:27.542261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.542 qpair failed and we were unable to recover it. 
00:30:14.542 [2024-07-23 01:51:27.542403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.542 [2024-07-23 01:51:27.542572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.542 [2024-07-23 01:51:27.542599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.542 qpair failed and we were unable to recover it. 00:30:14.542 [2024-07-23 01:51:27.542758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.542 [2024-07-23 01:51:27.542911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.542 [2024-07-23 01:51:27.542939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.542 qpair failed and we were unable to recover it. 00:30:14.542 [2024-07-23 01:51:27.543069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.542 [2024-07-23 01:51:27.543231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.542 [2024-07-23 01:51:27.543258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.542 qpair failed and we were unable to recover it. 00:30:14.542 [2024-07-23 01:51:27.543408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.542 [2024-07-23 01:51:27.543554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.542 [2024-07-23 01:51:27.543580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.542 qpair failed and we were unable to recover it. 
00:30:14.542 [2024-07-23 01:51:27.543731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.542 [2024-07-23 01:51:27.543887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.542 [2024-07-23 01:51:27.543924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.542 qpair failed and we were unable to recover it. 00:30:14.542 [2024-07-23 01:51:27.544098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.542 [2024-07-23 01:51:27.544290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.542 [2024-07-23 01:51:27.544316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.542 qpair failed and we were unable to recover it. 00:30:14.542 [2024-07-23 01:51:27.544486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.542 [2024-07-23 01:51:27.544642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.542 [2024-07-23 01:51:27.544680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.542 qpair failed and we were unable to recover it. 00:30:14.542 [2024-07-23 01:51:27.544825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.542 [2024-07-23 01:51:27.544956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.542 [2024-07-23 01:51:27.544983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.542 qpair failed and we were unable to recover it. 
00:30:14.542 [2024-07-23 01:51:27.545133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.542 [2024-07-23 01:51:27.545296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.542 [2024-07-23 01:51:27.545323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.542 qpair failed and we were unable to recover it.
00:30:14.542 [2024-07-23 01:51:27.545458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.542 [2024-07-23 01:51:27.545623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.542 [2024-07-23 01:51:27.545649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.542 qpair failed and we were unable to recover it.
00:30:14.542 [2024-07-23 01:51:27.545778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.542 [2024-07-23 01:51:27.545922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.542 [2024-07-23 01:51:27.545948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.542 qpair failed and we were unable to recover it.
00:30:14.542 [2024-07-23 01:51:27.546085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.542 [2024-07-23 01:51:27.546216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.543 [2024-07-23 01:51:27.546246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.543 qpair failed and we were unable to recover it.
00:30:14.543 [2024-07-23 01:51:27.546406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.543 [2024-07-23 01:51:27.546567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.543 [2024-07-23 01:51:27.546593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.543 qpair failed and we were unable to recover it.
00:30:14.543 [2024-07-23 01:51:27.546777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.543 [2024-07-23 01:51:27.546911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.543 [2024-07-23 01:51:27.546936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.543 qpair failed and we were unable to recover it.
00:30:14.543 [2024-07-23 01:51:27.547101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.543 [2024-07-23 01:51:27.547269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.543 [2024-07-23 01:51:27.547295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.543 qpair failed and we were unable to recover it.
00:30:14.543 [2024-07-23 01:51:27.547469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.543 [2024-07-23 01:51:27.547631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.543 [2024-07-23 01:51:27.547669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.543 qpair failed and we were unable to recover it.
00:30:14.543 [2024-07-23 01:51:27.547843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.543 [2024-07-23 01:51:27.548005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.543 [2024-07-23 01:51:27.548032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.543 qpair failed and we were unable to recover it.
00:30:14.543 [2024-07-23 01:51:27.548200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.543 [2024-07-23 01:51:27.548348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.543 [2024-07-23 01:51:27.548374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.543 qpair failed and we were unable to recover it.
00:30:14.543 [2024-07-23 01:51:27.548526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.543 [2024-07-23 01:51:27.548699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.543 [2024-07-23 01:51:27.548727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.543 qpair failed and we were unable to recover it.
00:30:14.543 [2024-07-23 01:51:27.548876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.543 [2024-07-23 01:51:27.549047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.543 [2024-07-23 01:51:27.549073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.543 qpair failed and we were unable to recover it.
00:30:14.543 [2024-07-23 01:51:27.549251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.543 01:51:27 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:30:14.543 [2024-07-23 01:51:27.549406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.543 [2024-07-23 01:51:27.549433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.543 qpair failed and we were unable to recover it.
00:30:14.543 01:51:27 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:30:14.543 [2024-07-23 01:51:27.549601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.543 01:51:27 -- common/autotest_common.sh@551 -- # xtrace_disable
00:30:14.543 [2024-07-23 01:51:27.549772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.543 [2024-07-23 01:51:27.549798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.543 qpair failed and we were unable to recover it.
00:30:14.543 01:51:27 -- common/autotest_common.sh@10 -- # set +x
00:30:14.543 [2024-07-23 01:51:27.549947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.543 [2024-07-23 01:51:27.550109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.543 [2024-07-23 01:51:27.550135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.543 qpair failed and we were unable to recover it.
00:30:14.543 [2024-07-23 01:51:27.550291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.543 [2024-07-23 01:51:27.550435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.543 [2024-07-23 01:51:27.550462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.543 qpair failed and we were unable to recover it.
00:30:14.543 [2024-07-23 01:51:27.550624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.543 [2024-07-23 01:51:27.550759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.543 [2024-07-23 01:51:27.550785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.543 qpair failed and we were unable to recover it.
00:30:14.543 [2024-07-23 01:51:27.550976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.543 [2024-07-23 01:51:27.551108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.543 [2024-07-23 01:51:27.551134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.543 qpair failed and we were unable to recover it.
00:30:14.543 [2024-07-23 01:51:27.551267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.543 [2024-07-23 01:51:27.551429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.543 [2024-07-23 01:51:27.551455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.543 qpair failed and we were unable to recover it.
00:30:14.543 [2024-07-23 01:51:27.551600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.543 [2024-07-23 01:51:27.551745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.543 [2024-07-23 01:51:27.551770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.543 qpair failed and we were unable to recover it.
00:30:14.543 [2024-07-23 01:51:27.551922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.543 [2024-07-23 01:51:27.552053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.543 [2024-07-23 01:51:27.552078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.543 qpair failed and we were unable to recover it.
00:30:14.543 [2024-07-23 01:51:27.552208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.543 [2024-07-23 01:51:27.552354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.543 [2024-07-23 01:51:27.552379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.543 qpair failed and we were unable to recover it.
00:30:14.543 [2024-07-23 01:51:27.552558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.543 [2024-07-23 01:51:27.552693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.543 [2024-07-23 01:51:27.552718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.543 qpair failed and we were unable to recover it.
00:30:14.543 [2024-07-23 01:51:27.552881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.543 [2024-07-23 01:51:27.553046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.543 [2024-07-23 01:51:27.553072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.543 qpair failed and we were unable to recover it.
00:30:14.543 [2024-07-23 01:51:27.553232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.543 [2024-07-23 01:51:27.553400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.543 [2024-07-23 01:51:27.553427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.543 qpair failed and we were unable to recover it.
00:30:14.543 [2024-07-23 01:51:27.553627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.543 [2024-07-23 01:51:27.553792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.543 [2024-07-23 01:51:27.553818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.543 qpair failed and we were unable to recover it.
00:30:14.543 [2024-07-23 01:51:27.553960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.543 [2024-07-23 01:51:27.554102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.543 [2024-07-23 01:51:27.554131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.543 qpair failed and we were unable to recover it.
00:30:14.543 [2024-07-23 01:51:27.554301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.543 [2024-07-23 01:51:27.554461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.543 [2024-07-23 01:51:27.554487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.543 qpair failed and we were unable to recover it.
00:30:14.543 [2024-07-23 01:51:27.554731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.543 [2024-07-23 01:51:27.554880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.543 [2024-07-23 01:51:27.554907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.543 qpair failed and we were unable to recover it.
00:30:14.543 [2024-07-23 01:51:27.555069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.543 [2024-07-23 01:51:27.555238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.543 [2024-07-23 01:51:27.555264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.543 qpair failed and we were unable to recover it.
00:30:14.544 [2024-07-23 01:51:27.555402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.544 [2024-07-23 01:51:27.555560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.544 [2024-07-23 01:51:27.555587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.544 qpair failed and we were unable to recover it.
00:30:14.544 [2024-07-23 01:51:27.555731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.544 [2024-07-23 01:51:27.555873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.544 [2024-07-23 01:51:27.555898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.544 qpair failed and we were unable to recover it.
00:30:14.544 [2024-07-23 01:51:27.556037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.544 [2024-07-23 01:51:27.556186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.544 [2024-07-23 01:51:27.556212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.544 qpair failed and we were unable to recover it.
00:30:14.544 [2024-07-23 01:51:27.556391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.544 [2024-07-23 01:51:27.556552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.544 [2024-07-23 01:51:27.556579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.544 qpair failed and we were unable to recover it.
00:30:14.544 [2024-07-23 01:51:27.556721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.544 [2024-07-23 01:51:27.556882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.544 [2024-07-23 01:51:27.556907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.544 qpair failed and we were unable to recover it.
00:30:14.544 [2024-07-23 01:51:27.557072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.544 [2024-07-23 01:51:27.557200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.544 [2024-07-23 01:51:27.557226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.544 qpair failed and we were unable to recover it.
00:30:14.544 [2024-07-23 01:51:27.557417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.544 [2024-07-23 01:51:27.557559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.544 [2024-07-23 01:51:27.557585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.544 qpair failed and we were unable to recover it.
00:30:14.544 [2024-07-23 01:51:27.557756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.544 [2024-07-23 01:51:27.557926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.544 [2024-07-23 01:51:27.557952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.544 qpair failed and we were unable to recover it.
00:30:14.544 [2024-07-23 01:51:27.558113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.544 [2024-07-23 01:51:27.558278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.544 [2024-07-23 01:51:27.558305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.544 qpair failed and we were unable to recover it.
00:30:14.544 [2024-07-23 01:51:27.558476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.544 [2024-07-23 01:51:27.558631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.544 [2024-07-23 01:51:27.558668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.544 qpair failed and we were unable to recover it.
00:30:14.544 [2024-07-23 01:51:27.558811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.544 [2024-07-23 01:51:27.558979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.544 [2024-07-23 01:51:27.559006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.544 qpair failed and we were unable to recover it.
00:30:14.544 [2024-07-23 01:51:27.559133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.544 [2024-07-23 01:51:27.559267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.544 [2024-07-23 01:51:27.559295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.544 qpair failed and we were unable to recover it.
00:30:14.544 [2024-07-23 01:51:27.559430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.544 [2024-07-23 01:51:27.559688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.544 [2024-07-23 01:51:27.559715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.544 qpair failed and we were unable to recover it.
00:30:14.544 [2024-07-23 01:51:27.559920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.544 [2024-07-23 01:51:27.560088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.544 [2024-07-23 01:51:27.560115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.544 qpair failed and we were unable to recover it.
00:30:14.544 [2024-07-23 01:51:27.560282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.544 [2024-07-23 01:51:27.560423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.544 [2024-07-23 01:51:27.560450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.544 qpair failed and we were unable to recover it.
00:30:14.544 [2024-07-23 01:51:27.560580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.544 [2024-07-23 01:51:27.560759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.544 [2024-07-23 01:51:27.560784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.544 qpair failed and we were unable to recover it.
00:30:14.544 [2024-07-23 01:51:27.560949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.544 [2024-07-23 01:51:27.561204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.544 [2024-07-23 01:51:27.561230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.544 qpair failed and we were unable to recover it.
00:30:14.544 [2024-07-23 01:51:27.561412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.544 [2024-07-23 01:51:27.561543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.544 [2024-07-23 01:51:27.561568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.544 qpair failed and we were unable to recover it.
00:30:14.544 [2024-07-23 01:51:27.561709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.544 [2024-07-23 01:51:27.561873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.544 [2024-07-23 01:51:27.561898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.544 qpair failed and we were unable to recover it.
00:30:14.544 [2024-07-23 01:51:27.562109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.544 [2024-07-23 01:51:27.562274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.544 [2024-07-23 01:51:27.562299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.544 qpair failed and we were unable to recover it.
00:30:14.544 [2024-07-23 01:51:27.562465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.544 [2024-07-23 01:51:27.562634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.544 [2024-07-23 01:51:27.562660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.544 qpair failed and we were unable to recover it.
00:30:14.544 [2024-07-23 01:51:27.562797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.544 [2024-07-23 01:51:27.562931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.544 [2024-07-23 01:51:27.562958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.544 qpair failed and we were unable to recover it.
00:30:14.544 [2024-07-23 01:51:27.563103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.544 [2024-07-23 01:51:27.563235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.544 [2024-07-23 01:51:27.563263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.544 qpair failed and we were unable to recover it.
00:30:14.544 [2024-07-23 01:51:27.563406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.544 [2024-07-23 01:51:27.563548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.544 [2024-07-23 01:51:27.563574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.544 qpair failed and we were unable to recover it.
00:30:14.544 [2024-07-23 01:51:27.563714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.544 [2024-07-23 01:51:27.563859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.544 [2024-07-23 01:51:27.563886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.544 qpair failed and we were unable to recover it.
00:30:14.544 [2024-07-23 01:51:27.564040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.544 [2024-07-23 01:51:27.564202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.544 [2024-07-23 01:51:27.564229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.544 qpair failed and we were unable to recover it.
00:30:14.544 [2024-07-23 01:51:27.564380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.544 [2024-07-23 01:51:27.564516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.544 [2024-07-23 01:51:27.564541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.544 qpair failed and we were unable to recover it.
00:30:14.544 [2024-07-23 01:51:27.564715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.544 [2024-07-23 01:51:27.564849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.545 [2024-07-23 01:51:27.564875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.545 qpair failed and we were unable to recover it.
00:30:14.545 [2024-07-23 01:51:27.565061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.545 [2024-07-23 01:51:27.565193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.545 [2024-07-23 01:51:27.565220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.545 qpair failed and we were unable to recover it.
00:30:14.545 [2024-07-23 01:51:27.565359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.545 [2024-07-23 01:51:27.565496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.545 [2024-07-23 01:51:27.565524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.545 qpair failed and we were unable to recover it.
00:30:14.545 [2024-07-23 01:51:27.565681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.545 [2024-07-23 01:51:27.565845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.545 [2024-07-23 01:51:27.565870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.545 qpair failed and we were unable to recover it.
00:30:14.545 [2024-07-23 01:51:27.566021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.545 [2024-07-23 01:51:27.566183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.545 [2024-07-23 01:51:27.566209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.545 qpair failed and we were unable to recover it.
00:30:14.545 [2024-07-23 01:51:27.566377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.545 [2024-07-23 01:51:27.566542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.545 [2024-07-23 01:51:27.566568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.545 qpair failed and we were unable to recover it.
00:30:14.545 [2024-07-23 01:51:27.566712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.545 [2024-07-23 01:51:27.566882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.545 [2024-07-23 01:51:27.566909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.545 qpair failed and we were unable to recover it.
00:30:14.545 [2024-07-23 01:51:27.567085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.545 [2024-07-23 01:51:27.567240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.545 [2024-07-23 01:51:27.567265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.545 qpair failed and we were unable to recover it.
00:30:14.545 [2024-07-23 01:51:27.567402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.545 [2024-07-23 01:51:27.567532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.545 [2024-07-23 01:51:27.567558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.545 qpair failed and we were unable to recover it.
00:30:14.545 [2024-07-23 01:51:27.567706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.545 [2024-07-23 01:51:27.567869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.545 [2024-07-23 01:51:27.567895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.545 qpair failed and we were unable to recover it.
00:30:14.545 [2024-07-23 01:51:27.568093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.545 [2024-07-23 01:51:27.568222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.545 [2024-07-23 01:51:27.568248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.545 qpair failed and we were unable to recover it.
00:30:14.545 [2024-07-23 01:51:27.568435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.545 [2024-07-23 01:51:27.568595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.545 [2024-07-23 01:51:27.568651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.545 qpair failed and we were unable to recover it.
00:30:14.545 [2024-07-23 01:51:27.568836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.545 [2024-07-23 01:51:27.568974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.545 [2024-07-23 01:51:27.569000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.545 qpair failed and we were unable to recover it.
00:30:14.545 [2024-07-23 01:51:27.569193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.545 [2024-07-23 01:51:27.569326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.545 [2024-07-23 01:51:27.569353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.545 qpair failed and we were unable to recover it.
00:30:14.545 [2024-07-23 01:51:27.569553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.545 [2024-07-23 01:51:27.569691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.545 [2024-07-23 01:51:27.569718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.545 qpair failed and we were unable to recover it.
00:30:14.545 [2024-07-23 01:51:27.569908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.545 [2024-07-23 01:51:27.570047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.545 [2024-07-23 01:51:27.570072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.545 qpair failed and we were unable to recover it.
00:30:14.545 [2024-07-23 01:51:27.570208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.545 [2024-07-23 01:51:27.570369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.545 [2024-07-23 01:51:27.570395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.545 qpair failed and we were unable to recover it.
00:30:14.545 [2024-07-23 01:51:27.570534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.545 [2024-07-23 01:51:27.570697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.545 [2024-07-23 01:51:27.570724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.545 qpair failed and we were unable to recover it.
00:30:14.545 [2024-07-23 01:51:27.570866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.545 [2024-07-23 01:51:27.571023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.545 [2024-07-23 01:51:27.571049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.545 qpair failed and we were unable to recover it.
00:30:14.545 [2024-07-23 01:51:27.571179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.545 [2024-07-23 01:51:27.571316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.545 [2024-07-23 01:51:27.571345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.545 qpair failed and we were unable to recover it.
00:30:14.545 [2024-07-23 01:51:27.571513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.545 [2024-07-23 01:51:27.571659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.545 [2024-07-23 01:51:27.571686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.545 qpair failed and we were unable to recover it.
00:30:14.545 [2024-07-23 01:51:27.571831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.545 [2024-07-23 01:51:27.571974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.545 [2024-07-23 01:51:27.571999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.545 qpair failed and we were unable to recover it.
00:30:14.545 [2024-07-23 01:51:27.572146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.545 [2024-07-23 01:51:27.572309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.545 [2024-07-23 01:51:27.572335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.545 qpair failed and we were unable to recover it.
00:30:14.545 [2024-07-23 01:51:27.572473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.545 [2024-07-23 01:51:27.572600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.545 [2024-07-23 01:51:27.572631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.545 qpair failed and we were unable to recover it.
00:30:14.545 [2024-07-23 01:51:27.572790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.545 [2024-07-23 01:51:27.572946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.545 [2024-07-23 01:51:27.572973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.545 qpair failed and we were unable to recover it.
00:30:14.545 [2024-07-23 01:51:27.573142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.545 [2024-07-23 01:51:27.573281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.545 [2024-07-23 01:51:27.573308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.545 qpair failed and we were unable to recover it.
00:30:14.545 [2024-07-23 01:51:27.573476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.545 [2024-07-23 01:51:27.573605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.545 [2024-07-23 01:51:27.573638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.545 qpair failed and we were unable to recover it.
00:30:14.545 [2024-07-23 01:51:27.573817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.545 [2024-07-23 01:51:27.573957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.545 [2024-07-23 01:51:27.573983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.545 qpair failed and we were unable to recover it.
00:30:14.545 [2024-07-23 01:51:27.574125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.546 [2024-07-23 01:51:27.574294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.546 [2024-07-23 01:51:27.574320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.546 qpair failed and we were unable to recover it.
00:30:14.546 [2024-07-23 01:51:27.574483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.546 [2024-07-23 01:51:27.574625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.546 [2024-07-23 01:51:27.574652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.546 qpair failed and we were unable to recover it.
00:30:14.546 [2024-07-23 01:51:27.574780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.546 [2024-07-23 01:51:27.574913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.546 [2024-07-23 01:51:27.574938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.546 qpair failed and we were unable to recover it.
00:30:14.546 [2024-07-23 01:51:27.575119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.546 [2024-07-23 01:51:27.575269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.546 [2024-07-23 01:51:27.575294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.546 qpair failed and we were unable to recover it.
00:30:14.546 [2024-07-23 01:51:27.575425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.546 [2024-07-23 01:51:27.575601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.546 [2024-07-23 01:51:27.575634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.546 qpair failed and we were unable to recover it.
00:30:14.546 [2024-07-23 01:51:27.575793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.546 [2024-07-23 01:51:27.575988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.546 [2024-07-23 01:51:27.576015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.546 qpair failed and we were unable to recover it.
00:30:14.546 [2024-07-23 01:51:27.576180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.546 Malloc0
00:30:14.546 [2024-07-23 01:51:27.576365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.546 [2024-07-23 01:51:27.576391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.546 qpair failed and we were unable to recover it.
00:30:14.546 [2024-07-23 01:51:27.576527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.546 01:51:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:30:14.546 [2024-07-23 01:51:27.576679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.546 [2024-07-23 01:51:27.576707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.546 qpair failed and we were unable to recover it.
00:30:14.546 [2024-07-23 01:51:27.576860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.546 01:51:27 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:30:14.546 [2024-07-23 01:51:27.577011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.546 01:51:27 -- common/autotest_common.sh@551 -- # xtrace_disable
00:30:14.546 [2024-07-23 01:51:27.577037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.546 qpair failed and we were unable to recover it.
00:30:14.546 01:51:27 -- common/autotest_common.sh@10 -- # set +x
00:30:14.546 [2024-07-23 01:51:27.577200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.546 [2024-07-23 01:51:27.577361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.546 [2024-07-23 01:51:27.577387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.546 qpair failed and we were unable to recover it.
00:30:14.546 [2024-07-23 01:51:27.577522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.546 [2024-07-23 01:51:27.577680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.546 [2024-07-23 01:51:27.577706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.546 qpair failed and we were unable to recover it.
00:30:14.546 [2024-07-23 01:51:27.577853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.546 [2024-07-23 01:51:27.578030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.546 [2024-07-23 01:51:27.578057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.546 qpair failed and we were unable to recover it.
00:30:14.546 [2024-07-23 01:51:27.578223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.546 [2024-07-23 01:51:27.578373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.546 [2024-07-23 01:51:27.578400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.546 qpair failed and we were unable to recover it.
00:30:14.546 [2024-07-23 01:51:27.578559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.546 [2024-07-23 01:51:27.578710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.546 [2024-07-23 01:51:27.578736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.546 qpair failed and we were unable to recover it.
00:30:14.546 [2024-07-23 01:51:27.578898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.546 [2024-07-23 01:51:27.579071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.546 [2024-07-23 01:51:27.579097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.546 qpair failed and we were unable to recover it.
00:30:14.546 [2024-07-23 01:51:27.579264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.546 [2024-07-23 01:51:27.579399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.546 [2024-07-23 01:51:27.579425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.546 qpair failed and we were unable to recover it.
00:30:14.546 [2024-07-23 01:51:27.579574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.546 [2024-07-23 01:51:27.579740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.546 [2024-07-23 01:51:27.579766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.546 qpair failed and we were unable to recover it.
00:30:14.546 [2024-07-23 01:51:27.579958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.546 [2024-07-23 01:51:27.579983] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:30:14.546 [2024-07-23 01:51:27.580096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.546 [2024-07-23 01:51:27.580122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.546 qpair failed and we were unable to recover it.
00:30:14.546 [2024-07-23 01:51:27.580260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.546 [2024-07-23 01:51:27.580396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.546 [2024-07-23 01:51:27.580422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.546 qpair failed and we were unable to recover it.
00:30:14.546 [2024-07-23 01:51:27.580624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.546 [2024-07-23 01:51:27.580804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.546 [2024-07-23 01:51:27.580829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.546 qpair failed and we were unable to recover it.
00:30:14.546 [2024-07-23 01:51:27.581000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.546 [2024-07-23 01:51:27.581136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.546 [2024-07-23 01:51:27.581163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.546 qpair failed and we were unable to recover it.
00:30:14.546 [2024-07-23 01:51:27.581325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.546 [2024-07-23 01:51:27.581487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.546 [2024-07-23 01:51:27.581513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.546 qpair failed and we were unable to recover it.
00:30:14.546 [2024-07-23 01:51:27.581672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.546 [2024-07-23 01:51:27.581827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.546 [2024-07-23 01:51:27.581853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.546 qpair failed and we were unable to recover it.
00:30:14.546 [2024-07-23 01:51:27.582009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.546 [2024-07-23 01:51:27.582167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.546 [2024-07-23 01:51:27.582193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.546 qpair failed and we were unable to recover it.
00:30:14.546 [2024-07-23 01:51:27.582388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.546 [2024-07-23 01:51:27.582555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.546 [2024-07-23 01:51:27.582581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.546 qpair failed and we were unable to recover it.
00:30:14.546 [2024-07-23 01:51:27.582761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.546 [2024-07-23 01:51:27.582919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.546 [2024-07-23 01:51:27.582954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.546 qpair failed and we were unable to recover it.
00:30:14.546 [2024-07-23 01:51:27.583115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.546 [2024-07-23 01:51:27.583264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.546 [2024-07-23 01:51:27.583289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.547 qpair failed and we were unable to recover it.
00:30:14.547 [2024-07-23 01:51:27.583449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.547 [2024-07-23 01:51:27.583610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.547 [2024-07-23 01:51:27.583645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.547 qpair failed and we were unable to recover it.
00:30:14.547 [2024-07-23 01:51:27.583826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.547 [2024-07-23 01:51:27.583990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.547 [2024-07-23 01:51:27.584016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.547 qpair failed and we were unable to recover it.
00:30:14.547 [2024-07-23 01:51:27.584146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.547 [2024-07-23 01:51:27.584335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.547 [2024-07-23 01:51:27.584361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.547 qpair failed and we were unable to recover it.
00:30:14.547 [2024-07-23 01:51:27.584525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.547 [2024-07-23 01:51:27.584680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.547 [2024-07-23 01:51:27.584707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.547 qpair failed and we were unable to recover it.
00:30:14.547 [2024-07-23 01:51:27.584872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.547 [2024-07-23 01:51:27.585028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.547 [2024-07-23 01:51:27.585054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.547 qpair failed and we were unable to recover it.
00:30:14.547 [2024-07-23 01:51:27.585222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.547 [2024-07-23 01:51:27.585359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.547 [2024-07-23 01:51:27.585387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.547 qpair failed and we were unable to recover it.
00:30:14.547 [2024-07-23 01:51:27.585582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.547 [2024-07-23 01:51:27.585763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.547 [2024-07-23 01:51:27.585789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.547 qpair failed and we were unable to recover it.
00:30:14.547 [2024-07-23 01:51:27.585956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.547 [2024-07-23 01:51:27.586101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.547 [2024-07-23 01:51:27.586129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.547 qpair failed and we were unable to recover it.
00:30:14.547 [2024-07-23 01:51:27.586268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.547 [2024-07-23 01:51:27.586422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.547 [2024-07-23 01:51:27.586448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.547 qpair failed and we were unable to recover it.
00:30:14.547 [2024-07-23 01:51:27.586587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.547 [2024-07-23 01:51:27.586787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.547 [2024-07-23 01:51:27.586813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.547 qpair failed and we were unable to recover it.
00:30:14.547 [2024-07-23 01:51:27.586973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.547 [2024-07-23 01:51:27.587104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.547 [2024-07-23 01:51:27.587135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.547 qpair failed and we were unable to recover it.
00:30:14.547 [2024-07-23 01:51:27.587299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.547 [2024-07-23 01:51:27.587434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.547 [2024-07-23 01:51:27.587461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.547 qpair failed and we were unable to recover it.
00:30:14.547 [2024-07-23 01:51:27.587595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.547 [2024-07-23 01:51:27.587768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.547 [2024-07-23 01:51:27.587795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.547 qpair failed and we were unable to recover it.
00:30:14.547 [2024-07-23 01:51:27.587946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.547 [2024-07-23 01:51:27.588113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.547 01:51:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:30:14.547 [2024-07-23 01:51:27.588140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.547 qpair failed and we were unable to recover it.
00:30:14.547 01:51:27 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:30:14.547 [2024-07-23 01:51:27.588309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.547 01:51:27 -- common/autotest_common.sh@551 -- # xtrace_disable
00:30:14.547 [2024-07-23 01:51:27.588438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.547 [2024-07-23 01:51:27.588465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.547 qpair failed and we were unable to recover it.
00:30:14.547 01:51:27 -- common/autotest_common.sh@10 -- # set +x
00:30:14.547 [2024-07-23 01:51:27.588602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.547 [2024-07-23 01:51:27.588780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.547 [2024-07-23 01:51:27.588806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.547 qpair failed and we were unable to recover it.
00:30:14.547 [2024-07-23 01:51:27.588938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.547 [2024-07-23 01:51:27.589100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.547 [2024-07-23 01:51:27.589126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.547 qpair failed and we were unable to recover it.
00:30:14.547 [2024-07-23 01:51:27.589260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.547 [2024-07-23 01:51:27.589410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.547 [2024-07-23 01:51:27.589437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.547 qpair failed and we were unable to recover it.
00:30:14.547 [2024-07-23 01:51:27.589598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.547 [2024-07-23 01:51:27.589772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.547 [2024-07-23 01:51:27.589797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.547 qpair failed and we were unable to recover it.
00:30:14.547 [2024-07-23 01:51:27.589969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.547 [2024-07-23 01:51:27.590105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.547 [2024-07-23 01:51:27.590131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.547 qpair failed and we were unable to recover it.
00:30:14.547 [2024-07-23 01:51:27.590282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.547 [2024-07-23 01:51:27.590450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.547 [2024-07-23 01:51:27.590477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.547 qpair failed and we were unable to recover it.
00:30:14.547 [2024-07-23 01:51:27.590629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.547 [2024-07-23 01:51:27.590821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.547 [2024-07-23 01:51:27.590846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.547 qpair failed and we were unable to recover it.
00:30:14.547 [2024-07-23 01:51:27.591012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.548 [2024-07-23 01:51:27.591162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.548 [2024-07-23 01:51:27.591189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.548 qpair failed and we were unable to recover it.
00:30:14.548 [2024-07-23 01:51:27.591356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.548 [2024-07-23 01:51:27.591544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.548 [2024-07-23 01:51:27.591571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.548 qpair failed and we were unable to recover it.
00:30:14.548 [2024-07-23 01:51:27.591740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.548 [2024-07-23 01:51:27.591926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.548 [2024-07-23 01:51:27.591953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.548 qpair failed and we were unable to recover it.
00:30:14.548 [2024-07-23 01:51:27.592122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.548 [2024-07-23 01:51:27.592274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.548 [2024-07-23 01:51:27.592301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.548 qpair failed and we were unable to recover it.
00:30:14.548 [2024-07-23 01:51:27.592473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.548 [2024-07-23 01:51:27.592634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.548 [2024-07-23 01:51:27.592671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.548 qpair failed and we were unable to recover it.
00:30:14.548 [2024-07-23 01:51:27.592826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.548 [2024-07-23 01:51:27.592976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.548 [2024-07-23 01:51:27.593003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.548 qpair failed and we were unable to recover it.
00:30:14.548 [2024-07-23 01:51:27.593193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.548 [2024-07-23 01:51:27.593335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:14.548 [2024-07-23 01:51:27.593360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420
00:30:14.548 qpair failed and we were unable to recover it.
00:30:14.548 [2024-07-23 01:51:27.593519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.548 [2024-07-23 01:51:27.593673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.548 [2024-07-23 01:51:27.593700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.548 qpair failed and we were unable to recover it. 00:30:14.548 [2024-07-23 01:51:27.593868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.548 [2024-07-23 01:51:27.594000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.548 [2024-07-23 01:51:27.594027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.548 qpair failed and we were unable to recover it. 00:30:14.548 [2024-07-23 01:51:27.594187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.548 [2024-07-23 01:51:27.594313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.548 [2024-07-23 01:51:27.594338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.548 qpair failed and we were unable to recover it. 00:30:14.548 [2024-07-23 01:51:27.594495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.548 [2024-07-23 01:51:27.594636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.548 [2024-07-23 01:51:27.594674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.548 qpair failed and we were unable to recover it. 
00:30:14.548 [2024-07-23 01:51:27.594809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.548 [2024-07-23 01:51:27.594942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.548 [2024-07-23 01:51:27.594968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.548 qpair failed and we were unable to recover it. 00:30:14.548 [2024-07-23 01:51:27.595127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.548 [2024-07-23 01:51:27.595282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.548 [2024-07-23 01:51:27.595308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.548 qpair failed and we were unable to recover it. 00:30:14.548 [2024-07-23 01:51:27.595464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.808 [2024-07-23 01:51:27.595602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.808 [2024-07-23 01:51:27.595635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.808 qpair failed and we were unable to recover it. 00:30:14.808 [2024-07-23 01:51:27.595784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.808 [2024-07-23 01:51:27.595920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.808 [2024-07-23 01:51:27.595956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.808 qpair failed and we were unable to recover it. 
00:30:14.808 [2024-07-23 01:51:27.596103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.808 01:51:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:14.808 01:51:27 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:14.808 [2024-07-23 01:51:27.596279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.808 [2024-07-23 01:51:27.596305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.808 qpair failed and we were unable to recover it. 00:30:14.808 01:51:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:14.808 [2024-07-23 01:51:27.596469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.808 [2024-07-23 01:51:27.596607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.808 [2024-07-23 01:51:27.596637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.808 01:51:27 -- common/autotest_common.sh@10 -- # set +x 00:30:14.808 qpair failed and we were unable to recover it. 00:30:14.808 [2024-07-23 01:51:27.596819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.808 [2024-07-23 01:51:27.597009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.808 [2024-07-23 01:51:27.597036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.808 qpair failed and we were unable to recover it. 
00:30:14.808 [2024-07-23 01:51:27.597173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.808 [2024-07-23 01:51:27.597323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.808 [2024-07-23 01:51:27.597349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.808 qpair failed and we were unable to recover it. 00:30:14.808 [2024-07-23 01:51:27.597483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.808 [2024-07-23 01:51:27.597652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.808 [2024-07-23 01:51:27.597678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.808 qpair failed and we were unable to recover it. 00:30:14.808 [2024-07-23 01:51:27.597831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.808 [2024-07-23 01:51:27.597965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.808 [2024-07-23 01:51:27.597991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.808 qpair failed and we were unable to recover it. 00:30:14.808 [2024-07-23 01:51:27.598163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.808 [2024-07-23 01:51:27.598294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.808 [2024-07-23 01:51:27.598320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.808 qpair failed and we were unable to recover it. 
00:30:14.808 [2024-07-23 01:51:27.598480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.808 [2024-07-23 01:51:27.598663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.808 [2024-07-23 01:51:27.598690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.808 qpair failed and we were unable to recover it. 00:30:14.808 [2024-07-23 01:51:27.598826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.808 [2024-07-23 01:51:27.598968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.809 [2024-07-23 01:51:27.598994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.809 qpair failed and we were unable to recover it. 00:30:14.809 [2024-07-23 01:51:27.599176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.809 [2024-07-23 01:51:27.599342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.809 [2024-07-23 01:51:27.599367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.809 qpair failed and we were unable to recover it. 00:30:14.809 [2024-07-23 01:51:27.599533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.809 [2024-07-23 01:51:27.599673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.809 [2024-07-23 01:51:27.599700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.809 qpair failed and we were unable to recover it. 
00:30:14.809 [2024-07-23 01:51:27.599842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.809 [2024-07-23 01:51:27.599974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.809 [2024-07-23 01:51:27.600000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.809 qpair failed and we were unable to recover it. 00:30:14.809 [2024-07-23 01:51:27.600155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.809 [2024-07-23 01:51:27.600305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.809 [2024-07-23 01:51:27.600330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.809 qpair failed and we were unable to recover it. 00:30:14.809 [2024-07-23 01:51:27.600468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.809 [2024-07-23 01:51:27.600611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.809 [2024-07-23 01:51:27.600643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.809 qpair failed and we were unable to recover it. 00:30:14.809 [2024-07-23 01:51:27.600789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.809 [2024-07-23 01:51:27.600927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.809 [2024-07-23 01:51:27.600954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.809 qpair failed and we were unable to recover it. 
00:30:14.809 [2024-07-23 01:51:27.601124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.809 [2024-07-23 01:51:27.601262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.809 [2024-07-23 01:51:27.601290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.809 qpair failed and we were unable to recover it. 00:30:14.809 [2024-07-23 01:51:27.601424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.809 [2024-07-23 01:51:27.601605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.809 [2024-07-23 01:51:27.601651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.809 qpair failed and we were unable to recover it. 00:30:14.809 [2024-07-23 01:51:27.601820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.809 [2024-07-23 01:51:27.601981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.809 [2024-07-23 01:51:27.602006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.809 qpair failed and we were unable to recover it. 00:30:14.809 [2024-07-23 01:51:27.602138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.809 [2024-07-23 01:51:27.602296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.809 [2024-07-23 01:51:27.602321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.809 qpair failed and we were unable to recover it. 
00:30:14.809 [2024-07-23 01:51:27.602480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.809 [2024-07-23 01:51:27.602670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.809 [2024-07-23 01:51:27.602698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.809 qpair failed and we were unable to recover it. 00:30:14.809 [2024-07-23 01:51:27.602850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.809 [2024-07-23 01:51:27.603004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.809 [2024-07-23 01:51:27.603029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.809 qpair failed and we were unable to recover it. 00:30:14.809 [2024-07-23 01:51:27.603164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.809 [2024-07-23 01:51:27.603295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.809 [2024-07-23 01:51:27.603321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.809 qpair failed and we were unable to recover it. 00:30:14.809 [2024-07-23 01:51:27.603480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.809 [2024-07-23 01:51:27.603630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.809 [2024-07-23 01:51:27.603656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.809 qpair failed and we were unable to recover it. 
00:30:14.809 [2024-07-23 01:51:27.603797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.809 [2024-07-23 01:51:27.603960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.809 [2024-07-23 01:51:27.603987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.809 qpair failed and we were unable to recover it. 00:30:14.809 [2024-07-23 01:51:27.604119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.809 01:51:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:14.809 [2024-07-23 01:51:27.604270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.809 [2024-07-23 01:51:27.604298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.809 qpair failed and we were unable to recover it. 00:30:14.809 01:51:27 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:14.809 01:51:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:14.809 [2024-07-23 01:51:27.604492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.809 01:51:27 -- common/autotest_common.sh@10 -- # set +x 00:30:14.809 [2024-07-23 01:51:27.604656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.809 [2024-07-23 01:51:27.604683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.809 qpair failed and we were unable to recover it. 
00:30:14.809 [2024-07-23 01:51:27.604827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.809 [2024-07-23 01:51:27.605015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.809 [2024-07-23 01:51:27.605041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.809 qpair failed and we were unable to recover it. 00:30:14.809 [2024-07-23 01:51:27.605206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.809 [2024-07-23 01:51:27.605356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.809 [2024-07-23 01:51:27.605383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.809 qpair failed and we were unable to recover it. 00:30:14.809 [2024-07-23 01:51:27.605540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.809 [2024-07-23 01:51:27.605675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.809 [2024-07-23 01:51:27.605702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.809 qpair failed and we were unable to recover it. 00:30:14.809 [2024-07-23 01:51:27.605843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.809 [2024-07-23 01:51:27.605980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.809 [2024-07-23 01:51:27.606006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.809 qpair failed and we were unable to recover it. 
00:30:14.809 [2024-07-23 01:51:27.606145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.809 [2024-07-23 01:51:27.606295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.809 [2024-07-23 01:51:27.606321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.809 qpair failed and we were unable to recover it. 00:30:14.809 [2024-07-23 01:51:27.606471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.809 [2024-07-23 01:51:27.606644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.809 [2024-07-23 01:51:27.606675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.809 qpair failed and we were unable to recover it. 00:30:14.809 [2024-07-23 01:51:27.606818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.809 [2024-07-23 01:51:27.606948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.809 [2024-07-23 01:51:27.606973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.809 qpair failed and we were unable to recover it. 00:30:14.809 [2024-07-23 01:51:27.607124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.809 [2024-07-23 01:51:27.607281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.809 [2024-07-23 01:51:27.607307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.809 qpair failed and we were unable to recover it. 
00:30:14.809 [2024-07-23 01:51:27.607443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.809 [2024-07-23 01:51:27.607642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.809 [2024-07-23 01:51:27.607670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.809 qpair failed and we were unable to recover it. 00:30:14.809 [2024-07-23 01:51:27.607803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.810 [2024-07-23 01:51:27.607941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.810 [2024-07-23 01:51:27.607967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fcec8000b90 with addr=10.0.0.2, port=4420 00:30:14.810 qpair failed and we were unable to recover it. 00:30:14.810 [2024-07-23 01:51:27.608107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:14.810 [2024-07-23 01:51:27.608163] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:14.810 [2024-07-23 01:51:27.611265] posix.c: 670:posix_sock_psk_use_session_client_cb: *ERROR*: PSK is not set 00:30:14.810 [2024-07-23 01:51:27.611337] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7fcec8000b90 (107): Transport endpoint is not connected 00:30:14.810 [2024-07-23 01:51:27.611409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:14.810 qpair failed and we were unable to recover it. 
00:30:14.810 01:51:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:14.810 01:51:27 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:14.810 01:51:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:14.810 01:51:27 -- common/autotest_common.sh@10 -- # set +x 00:30:14.810 01:51:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:14.810 01:51:27 -- host/target_disconnect.sh@58 -- # wait 3907926 00:30:14.810 [2024-07-23 01:51:27.620701] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.810 [2024-07-23 01:51:27.620868] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.810 [2024-07-23 01:51:27.620897] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.810 [2024-07-23 01:51:27.620914] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.810 [2024-07-23 01:51:27.620928] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:14.810 [2024-07-23 01:51:27.620974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:14.810 qpair failed and we were unable to recover it. 
00:30:14.810 [2024-07-23 01:51:27.630657] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.810 [2024-07-23 01:51:27.630806] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.810 [2024-07-23 01:51:27.630843] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.810 [2024-07-23 01:51:27.630862] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.810 [2024-07-23 01:51:27.630876] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:14.810 [2024-07-23 01:51:27.630907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:14.810 qpair failed and we were unable to recover it. 
00:30:14.810 [2024-07-23 01:51:27.640542] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.810 [2024-07-23 01:51:27.640689] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.810 [2024-07-23 01:51:27.640717] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.810 [2024-07-23 01:51:27.640733] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.810 [2024-07-23 01:51:27.640746] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:14.810 [2024-07-23 01:51:27.640776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:14.810 qpair failed and we were unable to recover it. 
00:30:14.810 [2024-07-23 01:51:27.650676] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.810 [2024-07-23 01:51:27.650856] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.810 [2024-07-23 01:51:27.650884] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.810 [2024-07-23 01:51:27.650899] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.810 [2024-07-23 01:51:27.650929] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:14.810 [2024-07-23 01:51:27.650959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:14.810 qpair failed and we were unable to recover it. 
00:30:14.810 [2024-07-23 01:51:27.660628] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.810 [2024-07-23 01:51:27.660774] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.810 [2024-07-23 01:51:27.660802] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.810 [2024-07-23 01:51:27.660817] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.810 [2024-07-23 01:51:27.660831] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:14.810 [2024-07-23 01:51:27.660861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:14.810 qpair failed and we were unable to recover it. 
00:30:14.810 [2024-07-23 01:51:27.670668] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.810 [2024-07-23 01:51:27.670812] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.810 [2024-07-23 01:51:27.670838] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.810 [2024-07-23 01:51:27.670854] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.810 [2024-07-23 01:51:27.670868] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:14.810 [2024-07-23 01:51:27.670903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:14.810 qpair failed and we were unable to recover it. 
00:30:14.810 [2024-07-23 01:51:27.680678] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.810 [2024-07-23 01:51:27.680834] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.810 [2024-07-23 01:51:27.680862] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.810 [2024-07-23 01:51:27.680877] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.810 [2024-07-23 01:51:27.680891] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:14.810 [2024-07-23 01:51:27.680933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:14.810 qpair failed and we were unable to recover it. 
00:30:14.810 [2024-07-23 01:51:27.690721] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.810 [2024-07-23 01:51:27.690903] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.810 [2024-07-23 01:51:27.690931] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.810 [2024-07-23 01:51:27.690946] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.810 [2024-07-23 01:51:27.690960] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:14.810 [2024-07-23 01:51:27.690990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:14.810 qpair failed and we were unable to recover it. 
00:30:14.810 [2024-07-23 01:51:27.700713] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.810 [2024-07-23 01:51:27.700861] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.810 [2024-07-23 01:51:27.700889] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.810 [2024-07-23 01:51:27.700905] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.810 [2024-07-23 01:51:27.700919] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:14.810 [2024-07-23 01:51:27.700948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:14.810 qpair failed and we were unable to recover it. 
00:30:14.810 [2024-07-23 01:51:27.710779] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.810 [2024-07-23 01:51:27.710947] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.810 [2024-07-23 01:51:27.710975] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.810 [2024-07-23 01:51:27.710991] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.810 [2024-07-23 01:51:27.711004] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:14.810 [2024-07-23 01:51:27.711034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:14.810 qpair failed and we were unable to recover it. 
00:30:14.810 [2024-07-23 01:51:27.720773] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:14.810 [2024-07-23 01:51:27.720918] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:14.810 [2024-07-23 01:51:27.720950] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:14.810 [2024-07-23 01:51:27.720966] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:14.810 [2024-07-23 01:51:27.720980] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90
00:30:14.810 [2024-07-23 01:51:27.721024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:14.810 qpair failed and we were unable to recover it.
00:30:14.810 [2024-07-23 01:51:27.730836] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:14.811 [2024-07-23 01:51:27.731031] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:14.811 [2024-07-23 01:51:27.731058] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:14.811 [2024-07-23 01:51:27.731073] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:14.811 [2024-07-23 01:51:27.731087] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90
00:30:14.811 [2024-07-23 01:51:27.731117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:14.811 qpair failed and we were unable to recover it.
00:30:14.811 [2024-07-23 01:51:27.740877] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:14.811 [2024-07-23 01:51:27.741018] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:14.811 [2024-07-23 01:51:27.741045] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:14.811 [2024-07-23 01:51:27.741060] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:14.811 [2024-07-23 01:51:27.741073] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90
00:30:14.811 [2024-07-23 01:51:27.741102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:14.811 qpair failed and we were unable to recover it.
00:30:14.811 [2024-07-23 01:51:27.750878] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:14.811 [2024-07-23 01:51:27.751067] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:14.811 [2024-07-23 01:51:27.751094] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:14.811 [2024-07-23 01:51:27.751109] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:14.811 [2024-07-23 01:51:27.751123] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90
00:30:14.811 [2024-07-23 01:51:27.751165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:14.811 qpair failed and we were unable to recover it.
00:30:14.811 [2024-07-23 01:51:27.760902] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:14.811 [2024-07-23 01:51:27.761054] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:14.811 [2024-07-23 01:51:27.761081] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:14.811 [2024-07-23 01:51:27.761099] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:14.811 [2024-07-23 01:51:27.761119] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90
00:30:14.811 [2024-07-23 01:51:27.761162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:14.811 qpair failed and we were unable to recover it.
00:30:14.811 [2024-07-23 01:51:27.771802] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:14.811 [2024-07-23 01:51:27.771960] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:14.811 [2024-07-23 01:51:27.771987] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:14.811 [2024-07-23 01:51:27.772002] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:14.811 [2024-07-23 01:51:27.772015] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90
00:30:14.811 [2024-07-23 01:51:27.772045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:14.811 qpair failed and we were unable to recover it.
00:30:14.811 [2024-07-23 01:51:27.781039] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:14.811 [2024-07-23 01:51:27.781220] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:14.811 [2024-07-23 01:51:27.781247] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:14.811 [2024-07-23 01:51:27.781263] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:14.811 [2024-07-23 01:51:27.781276] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90
00:30:14.811 [2024-07-23 01:51:27.781317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:14.811 qpair failed and we were unable to recover it.
00:30:14.811 [2024-07-23 01:51:27.791030] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:14.811 [2024-07-23 01:51:27.791180] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:14.811 [2024-07-23 01:51:27.791207] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:14.811 [2024-07-23 01:51:27.791223] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:14.811 [2024-07-23 01:51:27.791236] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90
00:30:14.811 [2024-07-23 01:51:27.791266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:14.811 qpair failed and we were unable to recover it.
00:30:14.811 [2024-07-23 01:51:27.801051] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:14.811 [2024-07-23 01:51:27.801191] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:14.811 [2024-07-23 01:51:27.801216] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:14.811 [2024-07-23 01:51:27.801231] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:14.811 [2024-07-23 01:51:27.801245] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90
00:30:14.811 [2024-07-23 01:51:27.801274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:14.811 qpair failed and we were unable to recover it.
00:30:14.811 [2024-07-23 01:51:27.811098] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:14.811 [2024-07-23 01:51:27.811253] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:14.811 [2024-07-23 01:51:27.811280] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:14.811 [2024-07-23 01:51:27.811295] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:14.811 [2024-07-23 01:51:27.811308] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90
00:30:14.811 [2024-07-23 01:51:27.811337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:14.811 qpair failed and we were unable to recover it.
00:30:14.811 [2024-07-23 01:51:27.821103] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:14.811 [2024-07-23 01:51:27.821247] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:14.811 [2024-07-23 01:51:27.821274] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:14.811 [2024-07-23 01:51:27.821288] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:14.811 [2024-07-23 01:51:27.821301] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90
00:30:14.811 [2024-07-23 01:51:27.821330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:14.811 qpair failed and we were unable to recover it.
00:30:14.811 [2024-07-23 01:51:27.831146] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:14.811 [2024-07-23 01:51:27.831284] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:14.811 [2024-07-23 01:51:27.831308] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:14.811 [2024-07-23 01:51:27.831323] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:14.811 [2024-07-23 01:51:27.831335] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90
00:30:14.811 [2024-07-23 01:51:27.831365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:14.811 qpair failed and we were unable to recover it.
00:30:14.811 [2024-07-23 01:51:27.841139] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:14.811 [2024-07-23 01:51:27.841316] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:14.811 [2024-07-23 01:51:27.841343] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:14.811 [2024-07-23 01:51:27.841359] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:14.811 [2024-07-23 01:51:27.841372] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90
00:30:14.811 [2024-07-23 01:51:27.841403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:14.811 qpair failed and we were unable to recover it.
00:30:14.811 [2024-07-23 01:51:27.851156] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:14.811 [2024-07-23 01:51:27.851301] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:14.811 [2024-07-23 01:51:27.851328] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:14.811 [2024-07-23 01:51:27.851342] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:14.811 [2024-07-23 01:51:27.851361] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90
00:30:14.811 [2024-07-23 01:51:27.851392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:14.811 qpair failed and we were unable to recover it.
00:30:14.811 [2024-07-23 01:51:27.861205] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:14.812 [2024-07-23 01:51:27.861384] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:14.812 [2024-07-23 01:51:27.861410] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:14.812 [2024-07-23 01:51:27.861426] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:14.812 [2024-07-23 01:51:27.861439] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90
00:30:14.812 [2024-07-23 01:51:27.861468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:14.812 qpair failed and we were unable to recover it.
00:30:14.812 [2024-07-23 01:51:27.871195] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:14.812 [2024-07-23 01:51:27.871332] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:14.812 [2024-07-23 01:51:27.871357] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:14.812 [2024-07-23 01:51:27.871372] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:14.812 [2024-07-23 01:51:27.871386] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90
00:30:14.812 [2024-07-23 01:51:27.871416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:14.812 qpair failed and we were unable to recover it.
00:30:14.812 [2024-07-23 01:51:27.881253] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:14.812 [2024-07-23 01:51:27.881404] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:14.812 [2024-07-23 01:51:27.881430] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:14.812 [2024-07-23 01:51:27.881445] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:14.812 [2024-07-23 01:51:27.881459] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90
00:30:14.812 [2024-07-23 01:51:27.881504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:14.812 qpair failed and we were unable to recover it.
00:30:14.812 [2024-07-23 01:51:27.891281] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:14.812 [2024-07-23 01:51:27.891436] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:14.812 [2024-07-23 01:51:27.891464] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:14.812 [2024-07-23 01:51:27.891479] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:14.812 [2024-07-23 01:51:27.891492] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90
00:30:14.812 [2024-07-23 01:51:27.891534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:14.812 qpair failed and we were unable to recover it.
00:30:14.812 [2024-07-23 01:51:27.901310] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:14.812 [2024-07-23 01:51:27.901463] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:14.812 [2024-07-23 01:51:27.901491] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:14.812 [2024-07-23 01:51:27.901507] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:14.812 [2024-07-23 01:51:27.901520] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90
00:30:14.812 [2024-07-23 01:51:27.901550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:14.812 qpair failed and we were unable to recover it.
00:30:15.072 [2024-07-23 01:51:27.911364] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.072 [2024-07-23 01:51:27.911543] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.072 [2024-07-23 01:51:27.911569] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.072 [2024-07-23 01:51:27.911585] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.072 [2024-07-23 01:51:27.911598] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90
00:30:15.072 [2024-07-23 01:51:27.911634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:15.072 qpair failed and we were unable to recover it.
00:30:15.072 [2024-07-23 01:51:27.921355] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.072 [2024-07-23 01:51:27.921502] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.072 [2024-07-23 01:51:27.921531] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.072 [2024-07-23 01:51:27.921548] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.072 [2024-07-23 01:51:27.921577] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90
00:30:15.072 [2024-07-23 01:51:27.921608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:15.072 qpair failed and we were unable to recover it.
00:30:15.072 [2024-07-23 01:51:27.931402] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.072 [2024-07-23 01:51:27.931583] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.072 [2024-07-23 01:51:27.931610] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.072 [2024-07-23 01:51:27.931633] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.072 [2024-07-23 01:51:27.931646] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90
00:30:15.072 [2024-07-23 01:51:27.931676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:15.072 qpair failed and we were unable to recover it.
00:30:15.072 [2024-07-23 01:51:27.941420] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.072 [2024-07-23 01:51:27.941558] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.072 [2024-07-23 01:51:27.941585] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.072 [2024-07-23 01:51:27.941605] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.072 [2024-07-23 01:51:27.941627] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90
00:30:15.072 [2024-07-23 01:51:27.941658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:15.072 qpair failed and we were unable to recover it.
00:30:15.072 [2024-07-23 01:51:27.951498] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.073 [2024-07-23 01:51:27.951649] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.073 [2024-07-23 01:51:27.951676] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.073 [2024-07-23 01:51:27.951691] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.073 [2024-07-23 01:51:27.951705] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90
00:30:15.073 [2024-07-23 01:51:27.951736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:15.073 qpair failed and we were unable to recover it.
00:30:15.073 [2024-07-23 01:51:27.961485] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.073 [2024-07-23 01:51:27.961642] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.073 [2024-07-23 01:51:27.961668] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.073 [2024-07-23 01:51:27.961683] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.073 [2024-07-23 01:51:27.961697] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90
00:30:15.073 [2024-07-23 01:51:27.961727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:15.073 qpair failed and we were unable to recover it.
00:30:15.073 [2024-07-23 01:51:27.971549] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.073 [2024-07-23 01:51:27.971708] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.073 [2024-07-23 01:51:27.971736] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.073 [2024-07-23 01:51:27.971752] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.073 [2024-07-23 01:51:27.971765] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90
00:30:15.073 [2024-07-23 01:51:27.971795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:15.073 qpair failed and we were unable to recover it.
00:30:15.073 [2024-07-23 01:51:27.981549] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.073 [2024-07-23 01:51:27.981698] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.073 [2024-07-23 01:51:27.981726] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.073 [2024-07-23 01:51:27.981741] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.073 [2024-07-23 01:51:27.981755] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90
00:30:15.073 [2024-07-23 01:51:27.981785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:15.073 qpair failed and we were unable to recover it.
00:30:15.073 [2024-07-23 01:51:27.991581] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.073 [2024-07-23 01:51:27.991737] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.073 [2024-07-23 01:51:27.991764] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.073 [2024-07-23 01:51:27.991779] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.073 [2024-07-23 01:51:27.991792] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90
00:30:15.073 [2024-07-23 01:51:27.991822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:15.073 qpair failed and we were unable to recover it.
00:30:15.073 [2024-07-23 01:51:28.001631] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.073 [2024-07-23 01:51:28.001779] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.073 [2024-07-23 01:51:28.001810] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.073 [2024-07-23 01:51:28.001827] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.073 [2024-07-23 01:51:28.001841] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90
00:30:15.073 [2024-07-23 01:51:28.001872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:15.073 qpair failed and we were unable to recover it.
00:30:15.073 [2024-07-23 01:51:28.011636] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.073 [2024-07-23 01:51:28.011789] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.073 [2024-07-23 01:51:28.011816] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.073 [2024-07-23 01:51:28.011836] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.073 [2024-07-23 01:51:28.011851] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90
00:30:15.073 [2024-07-23 01:51:28.011893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:15.073 qpair failed and we were unable to recover it.
00:30:15.073 [2024-07-23 01:51:28.021695] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.073 [2024-07-23 01:51:28.021839] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.073 [2024-07-23 01:51:28.021876] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.073 [2024-07-23 01:51:28.021891] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.073 [2024-07-23 01:51:28.021904] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90
00:30:15.073 [2024-07-23 01:51:28.021948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:15.073 qpair failed and we were unable to recover it.
00:30:15.073 [2024-07-23 01:51:28.031670] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.073 [2024-07-23 01:51:28.031855] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.073 [2024-07-23 01:51:28.031882] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.073 [2024-07-23 01:51:28.031902] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.073 [2024-07-23 01:51:28.031917] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90
00:30:15.073 [2024-07-23 01:51:28.031948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:15.073 qpair failed and we were unable to recover it.
00:30:15.073 [2024-07-23 01:51:28.041745] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.073 [2024-07-23 01:51:28.041888] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.073 [2024-07-23 01:51:28.041914] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.073 [2024-07-23 01:51:28.041929] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.073 [2024-07-23 01:51:28.041941] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:15.073 [2024-07-23 01:51:28.041972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:15.073 qpair failed and we were unable to recover it. 
00:30:15.073 [2024-07-23 01:51:28.051727] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.073 [2024-07-23 01:51:28.051865] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.073 [2024-07-23 01:51:28.051891] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.073 [2024-07-23 01:51:28.051905] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.073 [2024-07-23 01:51:28.051919] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:15.073 [2024-07-23 01:51:28.051949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:15.073 qpair failed and we were unable to recover it. 
00:30:15.073 [2024-07-23 01:51:28.061774] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.073 [2024-07-23 01:51:28.061918] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.073 [2024-07-23 01:51:28.061944] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.073 [2024-07-23 01:51:28.061959] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.073 [2024-07-23 01:51:28.061972] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:15.073 [2024-07-23 01:51:28.062003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:15.073 qpair failed and we were unable to recover it. 
00:30:15.073 [2024-07-23 01:51:28.071787] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.073 [2024-07-23 01:51:28.071937] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.073 [2024-07-23 01:51:28.071962] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.073 [2024-07-23 01:51:28.071977] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.073 [2024-07-23 01:51:28.071990] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:15.073 [2024-07-23 01:51:28.072021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:15.073 qpair failed and we were unable to recover it. 
00:30:15.074 [2024-07-23 01:51:28.081859] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.074 [2024-07-23 01:51:28.082033] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.074 [2024-07-23 01:51:28.082059] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.074 [2024-07-23 01:51:28.082074] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.074 [2024-07-23 01:51:28.082087] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:15.074 [2024-07-23 01:51:28.082117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:15.074 qpair failed and we were unable to recover it. 
00:30:15.074 [2024-07-23 01:51:28.091896] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.074 [2024-07-23 01:51:28.092081] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.074 [2024-07-23 01:51:28.092107] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.074 [2024-07-23 01:51:28.092121] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.074 [2024-07-23 01:51:28.092136] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:15.074 [2024-07-23 01:51:28.092166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:15.074 qpair failed and we were unable to recover it. 
00:30:15.074 [2024-07-23 01:51:28.101885] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.074 [2024-07-23 01:51:28.102046] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.074 [2024-07-23 01:51:28.102073] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.074 [2024-07-23 01:51:28.102088] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.074 [2024-07-23 01:51:28.102103] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:15.074 [2024-07-23 01:51:28.102133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:15.074 qpair failed and we were unable to recover it. 
00:30:15.074 [2024-07-23 01:51:28.111883] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.074 [2024-07-23 01:51:28.112025] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.074 [2024-07-23 01:51:28.112051] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.074 [2024-07-23 01:51:28.112066] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.074 [2024-07-23 01:51:28.112081] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:15.074 [2024-07-23 01:51:28.112111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:15.074 qpair failed and we were unable to recover it. 
00:30:15.074 [2024-07-23 01:51:28.121926] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.074 [2024-07-23 01:51:28.122121] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.074 [2024-07-23 01:51:28.122153] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.074 [2024-07-23 01:51:28.122169] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.074 [2024-07-23 01:51:28.122183] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:15.074 [2024-07-23 01:51:28.122213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:15.074 qpair failed and we were unable to recover it. 
00:30:15.074 [2024-07-23 01:51:28.131941] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.074 [2024-07-23 01:51:28.132088] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.074 [2024-07-23 01:51:28.132114] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.074 [2024-07-23 01:51:28.132128] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.074 [2024-07-23 01:51:28.132143] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:15.074 [2024-07-23 01:51:28.132172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:15.074 qpair failed and we were unable to recover it. 
00:30:15.074 [2024-07-23 01:51:28.141984] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.074 [2024-07-23 01:51:28.142134] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.074 [2024-07-23 01:51:28.142160] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.074 [2024-07-23 01:51:28.142174] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.074 [2024-07-23 01:51:28.142188] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:15.074 [2024-07-23 01:51:28.142218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:15.074 qpair failed and we were unable to recover it. 
00:30:15.074 [2024-07-23 01:51:28.152083] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.074 [2024-07-23 01:51:28.152240] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.074 [2024-07-23 01:51:28.152266] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.074 [2024-07-23 01:51:28.152280] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.074 [2024-07-23 01:51:28.152295] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:15.074 [2024-07-23 01:51:28.152324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:15.074 qpair failed and we were unable to recover it. 
00:30:15.074 [2024-07-23 01:51:28.162059] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.074 [2024-07-23 01:51:28.162209] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.074 [2024-07-23 01:51:28.162235] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.074 [2024-07-23 01:51:28.162249] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.074 [2024-07-23 01:51:28.162263] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:15.074 [2024-07-23 01:51:28.162299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:15.074 qpair failed and we were unable to recover it. 
00:30:15.333 [2024-07-23 01:51:28.172089] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.333 [2024-07-23 01:51:28.172257] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.333 [2024-07-23 01:51:28.172283] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.333 [2024-07-23 01:51:28.172298] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.333 [2024-07-23 01:51:28.172311] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:15.333 [2024-07-23 01:51:28.172340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:15.333 qpair failed and we were unable to recover it. 
00:30:15.333 [2024-07-23 01:51:28.182094] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.333 [2024-07-23 01:51:28.182253] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.333 [2024-07-23 01:51:28.182279] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.333 [2024-07-23 01:51:28.182294] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.333 [2024-07-23 01:51:28.182308] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:15.333 [2024-07-23 01:51:28.182340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:15.333 qpair failed and we were unable to recover it. 
00:30:15.333 [2024-07-23 01:51:28.192183] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.333 [2024-07-23 01:51:28.192350] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.333 [2024-07-23 01:51:28.192376] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.333 [2024-07-23 01:51:28.192391] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.333 [2024-07-23 01:51:28.192405] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:15.333 [2024-07-23 01:51:28.192435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:15.333 qpair failed and we were unable to recover it. 
00:30:15.333 [2024-07-23 01:51:28.202180] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.333 [2024-07-23 01:51:28.202330] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.333 [2024-07-23 01:51:28.202356] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.333 [2024-07-23 01:51:28.202371] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.333 [2024-07-23 01:51:28.202385] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:15.333 [2024-07-23 01:51:28.202415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:15.333 qpair failed and we were unable to recover it. 
00:30:15.333 [2024-07-23 01:51:28.212229] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.333 [2024-07-23 01:51:28.212405] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.333 [2024-07-23 01:51:28.212437] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.333 [2024-07-23 01:51:28.212454] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.334 [2024-07-23 01:51:28.212485] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:15.334 [2024-07-23 01:51:28.212516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:15.334 qpair failed and we were unable to recover it. 
00:30:15.334 [2024-07-23 01:51:28.222243] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.334 [2024-07-23 01:51:28.222391] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.334 [2024-07-23 01:51:28.222418] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.334 [2024-07-23 01:51:28.222432] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.334 [2024-07-23 01:51:28.222446] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:15.334 [2024-07-23 01:51:28.222476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:15.334 qpair failed and we were unable to recover it. 
00:30:15.334 [2024-07-23 01:51:28.232241] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.334 [2024-07-23 01:51:28.232381] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.334 [2024-07-23 01:51:28.232407] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.334 [2024-07-23 01:51:28.232421] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.334 [2024-07-23 01:51:28.232435] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:15.334 [2024-07-23 01:51:28.232465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:15.334 qpair failed and we were unable to recover it. 
00:30:15.334 [2024-07-23 01:51:28.242310] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.334 [2024-07-23 01:51:28.242463] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.334 [2024-07-23 01:51:28.242489] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.334 [2024-07-23 01:51:28.242503] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.334 [2024-07-23 01:51:28.242517] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:15.334 [2024-07-23 01:51:28.242560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:15.334 qpair failed and we were unable to recover it. 
00:30:15.334 [2024-07-23 01:51:28.252310] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.334 [2024-07-23 01:51:28.252463] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.334 [2024-07-23 01:51:28.252490] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.334 [2024-07-23 01:51:28.252504] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.334 [2024-07-23 01:51:28.252522] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:15.334 [2024-07-23 01:51:28.252553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:15.334 qpair failed and we were unable to recover it. 
00:30:15.334 [2024-07-23 01:51:28.262320] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.334 [2024-07-23 01:51:28.262466] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.334 [2024-07-23 01:51:28.262492] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.334 [2024-07-23 01:51:28.262507] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.334 [2024-07-23 01:51:28.262521] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:15.334 [2024-07-23 01:51:28.262551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:15.334 qpair failed and we were unable to recover it. 
00:30:15.334 [2024-07-23 01:51:28.272370] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.334 [2024-07-23 01:51:28.272521] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.334 [2024-07-23 01:51:28.272547] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.334 [2024-07-23 01:51:28.272562] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.334 [2024-07-23 01:51:28.272576] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:15.334 [2024-07-23 01:51:28.272605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:15.334 qpair failed and we were unable to recover it. 
00:30:15.334 [2024-07-23 01:51:28.282396] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.334 [2024-07-23 01:51:28.282570] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.334 [2024-07-23 01:51:28.282598] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.334 [2024-07-23 01:51:28.282621] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.334 [2024-07-23 01:51:28.282638] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:15.334 [2024-07-23 01:51:28.282669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:15.334 qpair failed and we were unable to recover it. 
00:30:15.334 [2024-07-23 01:51:28.292407] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.334 [2024-07-23 01:51:28.292556] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.334 [2024-07-23 01:51:28.292583] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.334 [2024-07-23 01:51:28.292597] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.334 [2024-07-23 01:51:28.292611] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:15.334 [2024-07-23 01:51:28.292652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:15.334 qpair failed and we were unable to recover it. 
00:30:15.334 [2024-07-23 01:51:28.302433] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.334 [2024-07-23 01:51:28.302593] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.334 [2024-07-23 01:51:28.302627] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.334 [2024-07-23 01:51:28.302644] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.334 [2024-07-23 01:51:28.302658] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:15.334 [2024-07-23 01:51:28.302687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:15.334 qpair failed and we were unable to recover it. 
00:30:15.334 [2024-07-23 01:51:28.312465] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.334 [2024-07-23 01:51:28.312611] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.334 [2024-07-23 01:51:28.312644] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.334 [2024-07-23 01:51:28.312659] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.334 [2024-07-23 01:51:28.312673] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:15.334 [2024-07-23 01:51:28.312702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:15.334 qpair failed and we were unable to recover it. 
00:30:15.334 [2024-07-23 01:51:28.322521] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.334 [2024-07-23 01:51:28.322701] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.334 [2024-07-23 01:51:28.322730] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.334 [2024-07-23 01:51:28.322745] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.334 [2024-07-23 01:51:28.322758] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:15.334 [2024-07-23 01:51:28.322789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:15.334 qpair failed and we were unable to recover it. 
00:30:15.334 [2024-07-23 01:51:28.332541] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.334 [2024-07-23 01:51:28.332695] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.334 [2024-07-23 01:51:28.332723] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.334 [2024-07-23 01:51:28.332738] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.334 [2024-07-23 01:51:28.332751] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:15.334 [2024-07-23 01:51:28.332794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:15.334 qpair failed and we were unable to recover it. 
00:30:15.334 [2024-07-23 01:51:28.342557] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.334 [2024-07-23 01:51:28.342711] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.334 [2024-07-23 01:51:28.342737] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.335 [2024-07-23 01:51:28.342752] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.335 [2024-07-23 01:51:28.342771] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:15.335 [2024-07-23 01:51:28.342802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:15.335 qpair failed and we were unable to recover it. 
00:30:15.335 [2024-07-23 01:51:28.352584] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.335 [2024-07-23 01:51:28.352761] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.335 [2024-07-23 01:51:28.352787] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.335 [2024-07-23 01:51:28.352802] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.335 [2024-07-23 01:51:28.352816] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:15.335 [2024-07-23 01:51:28.352846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:15.335 qpair failed and we were unable to recover it. 
00:30:15.335 [2024-07-23 01:51:28.362648] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.335 [2024-07-23 01:51:28.362807] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.335 [2024-07-23 01:51:28.362833] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.335 [2024-07-23 01:51:28.362848] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.335 [2024-07-23 01:51:28.362862] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:15.335 [2024-07-23 01:51:28.362891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:15.335 qpair failed and we were unable to recover it. 
00:30:15.335 [2024-07-23 01:51:28.372658] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.335 [2024-07-23 01:51:28.372810] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.335 [2024-07-23 01:51:28.372836] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.335 [2024-07-23 01:51:28.372851] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.335 [2024-07-23 01:51:28.372865] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:15.335 [2024-07-23 01:51:28.372909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:15.335 qpair failed and we were unable to recover it. 
00:30:15.335 [2024-07-23 01:51:28.382685] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.335 [2024-07-23 01:51:28.382846] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.335 [2024-07-23 01:51:28.382871] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.335 [2024-07-23 01:51:28.382886] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.335 [2024-07-23 01:51:28.382900] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:15.335 [2024-07-23 01:51:28.382945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:15.335 qpair failed and we were unable to recover it. 
00:30:15.335 [2024-07-23 01:51:28.392699] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.335 [2024-07-23 01:51:28.392848] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.335 [2024-07-23 01:51:28.392875] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.335 [2024-07-23 01:51:28.392889] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.335 [2024-07-23 01:51:28.392903] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:15.335 [2024-07-23 01:51:28.392933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:15.335 qpair failed and we were unable to recover it. 
00:30:15.335 [2024-07-23 01:51:28.402742] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.335 [2024-07-23 01:51:28.402888] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.335 [2024-07-23 01:51:28.402918] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.335 [2024-07-23 01:51:28.402933] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.335 [2024-07-23 01:51:28.402947] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:15.335 [2024-07-23 01:51:28.402981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:15.335 qpair failed and we were unable to recover it. 
00:30:15.335 [2024-07-23 01:51:28.412786] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.335 [2024-07-23 01:51:28.412939] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.335 [2024-07-23 01:51:28.412965] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.335 [2024-07-23 01:51:28.412980] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.335 [2024-07-23 01:51:28.412993] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:15.335 [2024-07-23 01:51:28.413039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:15.335 qpair failed and we were unable to recover it. 
00:30:15.335 [2024-07-23 01:51:28.422783] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.335 [2024-07-23 01:51:28.422930] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.335 [2024-07-23 01:51:28.422956] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.335 [2024-07-23 01:51:28.422971] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.335 [2024-07-23 01:51:28.422985] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:15.335 [2024-07-23 01:51:28.423014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:15.335 qpair failed and we were unable to recover it. 
00:30:15.594 [2024-07-23 01:51:28.432869] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.594 [2024-07-23 01:51:28.433018] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.594 [2024-07-23 01:51:28.433044] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.594 [2024-07-23 01:51:28.433065] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.594 [2024-07-23 01:51:28.433080] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:15.594 [2024-07-23 01:51:28.433111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:15.594 qpair failed and we were unable to recover it. 
00:30:15.594 [2024-07-23 01:51:28.442853] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.594 [2024-07-23 01:51:28.443041] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.594 [2024-07-23 01:51:28.443067] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.594 [2024-07-23 01:51:28.443081] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.594 [2024-07-23 01:51:28.443096] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:15.594 [2024-07-23 01:51:28.443125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:15.594 qpair failed and we were unable to recover it. 
00:30:15.594 [2024-07-23 01:51:28.452863] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.594 [2024-07-23 01:51:28.453017] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.594 [2024-07-23 01:51:28.453043] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.594 [2024-07-23 01:51:28.453057] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.594 [2024-07-23 01:51:28.453071] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:15.594 [2024-07-23 01:51:28.453100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:15.594 qpair failed and we were unable to recover it. 
00:30:15.594 [2024-07-23 01:51:28.462880] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.594 [2024-07-23 01:51:28.463050] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.594 [2024-07-23 01:51:28.463076] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.594 [2024-07-23 01:51:28.463091] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.594 [2024-07-23 01:51:28.463105] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:15.594 [2024-07-23 01:51:28.463134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:15.594 qpair failed and we were unable to recover it. 
00:30:15.594 [2024-07-23 01:51:28.472958] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.594 [2024-07-23 01:51:28.473101] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.594 [2024-07-23 01:51:28.473126] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.594 [2024-07-23 01:51:28.473141] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.594 [2024-07-23 01:51:28.473155] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:15.594 [2024-07-23 01:51:28.473185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:15.594 qpair failed and we were unable to recover it. 
00:30:15.594 [2024-07-23 01:51:28.482978] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.594 [2024-07-23 01:51:28.483138] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.594 [2024-07-23 01:51:28.483163] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.594 [2024-07-23 01:51:28.483178] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.594 [2024-07-23 01:51:28.483192] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:15.594 [2024-07-23 01:51:28.483221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:15.594 qpair failed and we were unable to recover it. 
00:30:15.594 [2024-07-23 01:51:28.492987] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.594 [2024-07-23 01:51:28.493129] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.594 [2024-07-23 01:51:28.493154] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.594 [2024-07-23 01:51:28.493169] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.594 [2024-07-23 01:51:28.493182] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:15.594 [2024-07-23 01:51:28.493211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:15.594 qpair failed and we were unable to recover it. 
00:30:15.594 [2024-07-23 01:51:28.502989] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.594 [2024-07-23 01:51:28.503139] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.594 [2024-07-23 01:51:28.503164] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.594 [2024-07-23 01:51:28.503179] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.594 [2024-07-23 01:51:28.503193] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:15.594 [2024-07-23 01:51:28.503237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:15.594 qpair failed and we were unable to recover it. 
00:30:15.594 [2024-07-23 01:51:28.513036] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.595 [2024-07-23 01:51:28.513179] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.595 [2024-07-23 01:51:28.513204] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.595 [2024-07-23 01:51:28.513219] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.595 [2024-07-23 01:51:28.513232] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:15.595 [2024-07-23 01:51:28.513263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:15.595 qpair failed and we were unable to recover it. 
00:30:15.595 [2024-07-23 01:51:28.523116] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.595 [2024-07-23 01:51:28.523304] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.595 [2024-07-23 01:51:28.523331] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.595 [2024-07-23 01:51:28.523352] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.595 [2024-07-23 01:51:28.523382] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:15.595 [2024-07-23 01:51:28.523413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:15.595 qpair failed and we were unable to recover it. 
00:30:15.595 [2024-07-23 01:51:28.533116] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.595 [2024-07-23 01:51:28.533288] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.595 [2024-07-23 01:51:28.533314] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.595 [2024-07-23 01:51:28.533328] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.595 [2024-07-23 01:51:28.533342] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:15.595 [2024-07-23 01:51:28.533371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:15.595 qpair failed and we were unable to recover it. 
00:30:15.595 [2024-07-23 01:51:28.543165] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.595 [2024-07-23 01:51:28.543313] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.595 [2024-07-23 01:51:28.543340] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.595 [2024-07-23 01:51:28.543354] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.595 [2024-07-23 01:51:28.543366] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:15.595 [2024-07-23 01:51:28.543395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:15.595 qpair failed and we were unable to recover it. 
00:30:15.595 [2024-07-23 01:51:28.553169] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.595 [2024-07-23 01:51:28.553316] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.595 [2024-07-23 01:51:28.553342] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.595 [2024-07-23 01:51:28.553356] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.595 [2024-07-23 01:51:28.553370] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:15.595 [2024-07-23 01:51:28.553411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:15.595 qpair failed and we were unable to recover it. 
00:30:15.595 [2024-07-23 01:51:28.563208] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.595 [2024-07-23 01:51:28.563360] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.595 [2024-07-23 01:51:28.563392] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.595 [2024-07-23 01:51:28.563407] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.595 [2024-07-23 01:51:28.563420] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:15.595 [2024-07-23 01:51:28.563449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:15.595 qpair failed and we were unable to recover it. 
00:30:15.595 [2024-07-23 01:51:28.573200] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.595 [2024-07-23 01:51:28.573351] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.595 [2024-07-23 01:51:28.573376] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.595 [2024-07-23 01:51:28.573391] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.595 [2024-07-23 01:51:28.573405] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:15.595 [2024-07-23 01:51:28.573434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:15.595 qpair failed and we were unable to recover it. 
00:30:15.595 [2024-07-23 01:51:28.583255] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.595 [2024-07-23 01:51:28.583398] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.595 [2024-07-23 01:51:28.583434] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.595 [2024-07-23 01:51:28.583449] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.595 [2024-07-23 01:51:28.583463] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:15.595 [2024-07-23 01:51:28.583493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:15.595 qpair failed and we were unable to recover it. 
00:30:15.595 [2024-07-23 01:51:28.593257] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.595 [2024-07-23 01:51:28.593399] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.595 [2024-07-23 01:51:28.593425] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.595 [2024-07-23 01:51:28.593440] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.595 [2024-07-23 01:51:28.593453] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:15.595 [2024-07-23 01:51:28.593483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:15.595 qpair failed and we were unable to recover it. 
00:30:15.595 [2024-07-23 01:51:28.603363] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.595 [2024-07-23 01:51:28.603515] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.595 [2024-07-23 01:51:28.603541] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.595 [2024-07-23 01:51:28.603556] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.595 [2024-07-23 01:51:28.603570] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:15.595 [2024-07-23 01:51:28.603600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:15.595 qpair failed and we were unable to recover it. 
00:30:15.595 [2024-07-23 01:51:28.613329] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.595 [2024-07-23 01:51:28.613488] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.595 [2024-07-23 01:51:28.613517] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.595 [2024-07-23 01:51:28.613533] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.595 [2024-07-23 01:51:28.613545] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:15.595 [2024-07-23 01:51:28.613574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:15.595 qpair failed and we were unable to recover it. 
00:30:15.595 [2024-07-23 01:51:28.623345] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.595 [2024-07-23 01:51:28.623539] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.595 [2024-07-23 01:51:28.623565] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.595 [2024-07-23 01:51:28.623579] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.595 [2024-07-23 01:51:28.623593] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:15.595 [2024-07-23 01:51:28.623633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:15.595 qpair failed and we were unable to recover it. 
00:30:15.595 [2024-07-23 01:51:28.633411] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.595 [2024-07-23 01:51:28.633588] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.595 [2024-07-23 01:51:28.633625] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.595 [2024-07-23 01:51:28.633643] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.595 [2024-07-23 01:51:28.633657] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:15.595 [2024-07-23 01:51:28.633687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:15.596 qpair failed and we were unable to recover it. 
00:30:15.596 [2024-07-23 01:51:28.643413] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.596 [2024-07-23 01:51:28.643563] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.596 [2024-07-23 01:51:28.643589] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.596 [2024-07-23 01:51:28.643610] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.596 [2024-07-23 01:51:28.643631] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:15.596 [2024-07-23 01:51:28.643662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:15.596 qpair failed and we were unable to recover it. 
00:30:15.596 [2024-07-23 01:51:28.653447] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.596 [2024-07-23 01:51:28.653610] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.596 [2024-07-23 01:51:28.653643] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.596 [2024-07-23 01:51:28.653658] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.596 [2024-07-23 01:51:28.653672] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:15.596 [2024-07-23 01:51:28.653708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:15.596 qpair failed and we were unable to recover it. 
00:30:15.596 [2024-07-23 01:51:28.663479] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.596 [2024-07-23 01:51:28.663685] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.596 [2024-07-23 01:51:28.663711] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.596 [2024-07-23 01:51:28.663727] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.596 [2024-07-23 01:51:28.663740] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90
00:30:15.596 [2024-07-23 01:51:28.663770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:15.596 qpair failed and we were unable to recover it.
00:30:15.596 [2024-07-23 01:51:28.673514] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.596 [2024-07-23 01:51:28.673717] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.596 [2024-07-23 01:51:28.673743] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.596 [2024-07-23 01:51:28.673759] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.596 [2024-07-23 01:51:28.673773] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90
00:30:15.596 [2024-07-23 01:51:28.673802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:15.596 qpair failed and we were unable to recover it.
00:30:15.596 [2024-07-23 01:51:28.683547] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.596 [2024-07-23 01:51:28.683747] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.596 [2024-07-23 01:51:28.683772] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.596 [2024-07-23 01:51:28.683787] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.596 [2024-07-23 01:51:28.683801] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90
00:30:15.596 [2024-07-23 01:51:28.683831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:15.596 qpair failed and we were unable to recover it.
00:30:15.854 [2024-07-23 01:51:28.693588] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.854 [2024-07-23 01:51:28.693769] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.854 [2024-07-23 01:51:28.693796] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.854 [2024-07-23 01:51:28.693811] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.854 [2024-07-23 01:51:28.693825] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90
00:30:15.854 [2024-07-23 01:51:28.693856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:15.854 qpair failed and we were unable to recover it.
00:30:15.854 [2024-07-23 01:51:28.703635] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.854 [2024-07-23 01:51:28.703783] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.854 [2024-07-23 01:51:28.703814] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.854 [2024-07-23 01:51:28.703829] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.854 [2024-07-23 01:51:28.703843] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90
00:30:15.854 [2024-07-23 01:51:28.703873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:15.854 qpair failed and we were unable to recover it.
00:30:15.854 [2024-07-23 01:51:28.713690] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.854 [2024-07-23 01:51:28.713829] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.854 [2024-07-23 01:51:28.713855] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.854 [2024-07-23 01:51:28.713870] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.854 [2024-07-23 01:51:28.713884] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90
00:30:15.854 [2024-07-23 01:51:28.713913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:15.854 qpair failed and we were unable to recover it.
00:30:15.854 [2024-07-23 01:51:28.723692] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.854 [2024-07-23 01:51:28.723844] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.854 [2024-07-23 01:51:28.723870] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.854 [2024-07-23 01:51:28.723885] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.854 [2024-07-23 01:51:28.723898] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90
00:30:15.854 [2024-07-23 01:51:28.723927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:15.854 qpair failed and we were unable to recover it.
00:30:15.854 [2024-07-23 01:51:28.733735] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.854 [2024-07-23 01:51:28.733886] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.854 [2024-07-23 01:51:28.733911] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.854 [2024-07-23 01:51:28.733926] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.854 [2024-07-23 01:51:28.733940] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90
00:30:15.854 [2024-07-23 01:51:28.733969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:15.854 qpair failed and we were unable to recover it.
00:30:15.854 [2024-07-23 01:51:28.743743] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.854 [2024-07-23 01:51:28.743916] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.854 [2024-07-23 01:51:28.743942] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.854 [2024-07-23 01:51:28.743956] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.854 [2024-07-23 01:51:28.743970] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90
00:30:15.854 [2024-07-23 01:51:28.744006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:15.854 qpair failed and we were unable to recover it.
00:30:15.854 [2024-07-23 01:51:28.753753] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.854 [2024-07-23 01:51:28.753910] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.854 [2024-07-23 01:51:28.753936] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.854 [2024-07-23 01:51:28.753951] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.854 [2024-07-23 01:51:28.753964] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90
00:30:15.854 [2024-07-23 01:51:28.753995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:15.854 qpair failed and we were unable to recover it.
00:30:15.854 [2024-07-23 01:51:28.763778] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.854 [2024-07-23 01:51:28.763931] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.854 [2024-07-23 01:51:28.763956] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.854 [2024-07-23 01:51:28.763971] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.854 [2024-07-23 01:51:28.763984] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90
00:30:15.854 [2024-07-23 01:51:28.764015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:15.854 qpair failed and we were unable to recover it.
00:30:15.854 [2024-07-23 01:51:28.773816] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.854 [2024-07-23 01:51:28.774017] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.854 [2024-07-23 01:51:28.774044] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.854 [2024-07-23 01:51:28.774059] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.854 [2024-07-23 01:51:28.774073] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90
00:30:15.854 [2024-07-23 01:51:28.774114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:15.854 qpair failed and we were unable to recover it.
00:30:15.854 [2024-07-23 01:51:28.783819] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.854 [2024-07-23 01:51:28.783972] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.854 [2024-07-23 01:51:28.783998] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.854 [2024-07-23 01:51:28.784013] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.854 [2024-07-23 01:51:28.784027] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90
00:30:15.854 [2024-07-23 01:51:28.784056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:15.854 qpair failed and we were unable to recover it.
00:30:15.854 [2024-07-23 01:51:28.793859] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.854 [2024-07-23 01:51:28.794012] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.854 [2024-07-23 01:51:28.794043] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.854 [2024-07-23 01:51:28.794058] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.854 [2024-07-23 01:51:28.794072] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90
00:30:15.854 [2024-07-23 01:51:28.794102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:15.854 qpair failed and we were unable to recover it.
00:30:15.854 [2024-07-23 01:51:28.803941] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.854 [2024-07-23 01:51:28.804118] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.854 [2024-07-23 01:51:28.804143] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.854 [2024-07-23 01:51:28.804158] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.854 [2024-07-23 01:51:28.804172] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90
00:30:15.854 [2024-07-23 01:51:28.804201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:15.854 qpair failed and we were unable to recover it.
00:30:15.854 [2024-07-23 01:51:28.813930] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.854 [2024-07-23 01:51:28.814083] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.854 [2024-07-23 01:51:28.814110] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.854 [2024-07-23 01:51:28.814124] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.854 [2024-07-23 01:51:28.814141] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90
00:30:15.854 [2024-07-23 01:51:28.814182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:15.854 qpair failed and we were unable to recover it.
00:30:15.854 [2024-07-23 01:51:28.823925] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.854 [2024-07-23 01:51:28.824083] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.854 [2024-07-23 01:51:28.824109] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.854 [2024-07-23 01:51:28.824124] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.854 [2024-07-23 01:51:28.824138] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90
00:30:15.855 [2024-07-23 01:51:28.824168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:15.855 qpair failed and we were unable to recover it.
00:30:15.855 [2024-07-23 01:51:28.834011] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.855 [2024-07-23 01:51:28.834205] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.855 [2024-07-23 01:51:28.834231] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.855 [2024-07-23 01:51:28.834246] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.855 [2024-07-23 01:51:28.834265] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90
00:30:15.855 [2024-07-23 01:51:28.834307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:15.855 qpair failed and we were unable to recover it.
00:30:15.855 [2024-07-23 01:51:28.844018] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.855 [2024-07-23 01:51:28.844160] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.855 [2024-07-23 01:51:28.844186] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.855 [2024-07-23 01:51:28.844201] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.855 [2024-07-23 01:51:28.844214] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90
00:30:15.855 [2024-07-23 01:51:28.844244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:15.855 qpair failed and we were unable to recover it.
00:30:15.855 [2024-07-23 01:51:28.854079] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.855 [2024-07-23 01:51:28.854222] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.855 [2024-07-23 01:51:28.854249] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.855 [2024-07-23 01:51:28.854264] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.855 [2024-07-23 01:51:28.854277] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90
00:30:15.855 [2024-07-23 01:51:28.854306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:15.855 qpair failed and we were unable to recover it.
00:30:15.855 [2024-07-23 01:51:28.864056] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.855 [2024-07-23 01:51:28.864197] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.855 [2024-07-23 01:51:28.864223] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.855 [2024-07-23 01:51:28.864238] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.855 [2024-07-23 01:51:28.864251] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90
00:30:15.855 [2024-07-23 01:51:28.864280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:15.855 qpair failed and we were unable to recover it.
00:30:15.855 [2024-07-23 01:51:28.874088] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.855 [2024-07-23 01:51:28.874238] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.855 [2024-07-23 01:51:28.874265] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.855 [2024-07-23 01:51:28.874280] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.855 [2024-07-23 01:51:28.874293] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90
00:30:15.855 [2024-07-23 01:51:28.874322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:15.855 qpair failed and we were unable to recover it.
00:30:15.855 [2024-07-23 01:51:28.884193] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.855 [2024-07-23 01:51:28.884360] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.855 [2024-07-23 01:51:28.884387] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.855 [2024-07-23 01:51:28.884402] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.855 [2024-07-23 01:51:28.884416] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90
00:30:15.855 [2024-07-23 01:51:28.884446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:15.855 qpair failed and we were unable to recover it.
00:30:15.855 [2024-07-23 01:51:28.894150] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.855 [2024-07-23 01:51:28.894291] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.855 [2024-07-23 01:51:28.894318] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.855 [2024-07-23 01:51:28.894334] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.855 [2024-07-23 01:51:28.894347] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90
00:30:15.855 [2024-07-23 01:51:28.894388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:15.855 qpair failed and we were unable to recover it.
00:30:15.855 [2024-07-23 01:51:28.904204] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.855 [2024-07-23 01:51:28.904386] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.855 [2024-07-23 01:51:28.904412] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.855 [2024-07-23 01:51:28.904428] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.855 [2024-07-23 01:51:28.904442] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90
00:30:15.855 [2024-07-23 01:51:28.904472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:15.855 qpair failed and we were unable to recover it.
00:30:15.855 [2024-07-23 01:51:28.914241] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.855 [2024-07-23 01:51:28.914380] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.855 [2024-07-23 01:51:28.914406] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.855 [2024-07-23 01:51:28.914421] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.855 [2024-07-23 01:51:28.914435] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90
00:30:15.855 [2024-07-23 01:51:28.914464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:15.855 qpair failed and we were unable to recover it.
00:30:15.855 [2024-07-23 01:51:28.924319] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.855 [2024-07-23 01:51:28.924466] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.855 [2024-07-23 01:51:28.924493] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.855 [2024-07-23 01:51:28.924513] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.855 [2024-07-23 01:51:28.924527] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90
00:30:15.855 [2024-07-23 01:51:28.924557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:15.855 qpair failed and we were unable to recover it.
00:30:15.855 [2024-07-23 01:51:28.934266] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.855 [2024-07-23 01:51:28.934407] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.855 [2024-07-23 01:51:28.934434] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.855 [2024-07-23 01:51:28.934450] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.855 [2024-07-23 01:51:28.934463] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90
00:30:15.855 [2024-07-23 01:51:28.934493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:15.855 qpair failed and we were unable to recover it.
00:30:15.855 [2024-07-23 01:51:28.944306] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:15.855 [2024-07-23 01:51:28.944454] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:15.855 [2024-07-23 01:51:28.944481] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:15.855 [2024-07-23 01:51:28.944496] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:15.855 [2024-07-23 01:51:28.944509] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90
00:30:15.855 [2024-07-23 01:51:28.944551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:15.855 qpair failed and we were unable to recover it.
00:30:16.113 [2024-07-23 01:51:28.954353] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:16.113 [2024-07-23 01:51:28.954544] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:16.113 [2024-07-23 01:51:28.954570] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:16.113 [2024-07-23 01:51:28.954586] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:16.113 [2024-07-23 01:51:28.954599] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90
00:30:16.113 [2024-07-23 01:51:28.954635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:16.113 qpair failed and we were unable to recover it.
00:30:16.113 [2024-07-23 01:51:28.964397] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:16.113 [2024-07-23 01:51:28.964542] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:16.113 [2024-07-23 01:51:28.964569] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:16.113 [2024-07-23 01:51:28.964584] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:16.114 [2024-07-23 01:51:28.964597] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90
00:30:16.114 [2024-07-23 01:51:28.964634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:16.114 qpair failed and we were unable to recover it.
00:30:16.114 [2024-07-23 01:51:28.974418] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:16.114 [2024-07-23 01:51:28.974561] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:16.114 [2024-07-23 01:51:28.974588] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:16.114 [2024-07-23 01:51:28.974604] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:16.114 [2024-07-23 01:51:28.974624] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90
00:30:16.114 [2024-07-23 01:51:28.974656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:16.114 qpair failed and we were unable to recover it.
00:30:16.114 [2024-07-23 01:51:28.984443] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:16.114 [2024-07-23 01:51:28.984591] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:16.114 [2024-07-23 01:51:28.984623] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:16.114 [2024-07-23 01:51:28.984641] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:16.114 [2024-07-23 01:51:28.984655] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90
00:30:16.114 [2024-07-23 01:51:28.984685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:16.114 qpair failed and we were unable to recover it.
00:30:16.114 [2024-07-23 01:51:28.994500] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:16.114 [2024-07-23 01:51:28.994652] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:16.114 [2024-07-23 01:51:28.994680] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:16.114 [2024-07-23 01:51:28.994695] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:16.114 [2024-07-23 01:51:28.994708] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90
00:30:16.114 [2024-07-23 01:51:28.994738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:16.114 qpair failed and we were unable to recover it.
00:30:16.114 [2024-07-23 01:51:29.004502] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:16.114 [2024-07-23 01:51:29.004653] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:16.114 [2024-07-23 01:51:29.004679] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:16.114 [2024-07-23 01:51:29.004694] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:16.114 [2024-07-23 01:51:29.004708] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90
00:30:16.114 [2024-07-23 01:51:29.004737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:16.114 qpair failed and we were unable to recover it.
00:30:16.114 [2024-07-23 01:51:29.014531] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:16.114 [2024-07-23 01:51:29.014687] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:16.114 [2024-07-23 01:51:29.014713] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:16.114 [2024-07-23 01:51:29.014733] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:16.114 [2024-07-23 01:51:29.014748] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90
00:30:16.114 [2024-07-23 01:51:29.014777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:16.114 qpair failed and we were unable to recover it.
00:30:16.114 [2024-07-23 01:51:29.024534] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.114 [2024-07-23 01:51:29.024678] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.114 [2024-07-23 01:51:29.024704] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.114 [2024-07-23 01:51:29.024719] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.114 [2024-07-23 01:51:29.024733] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:16.114 [2024-07-23 01:51:29.024762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:16.114 qpair failed and we were unable to recover it. 
00:30:16.114 [2024-07-23 01:51:29.034598] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.114 [2024-07-23 01:51:29.034752] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.114 [2024-07-23 01:51:29.034779] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.114 [2024-07-23 01:51:29.034795] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.114 [2024-07-23 01:51:29.034808] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:16.114 [2024-07-23 01:51:29.034838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:16.114 qpair failed and we were unable to recover it. 
00:30:16.114 [2024-07-23 01:51:29.044619] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.114 [2024-07-23 01:51:29.044761] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.114 [2024-07-23 01:51:29.044788] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.114 [2024-07-23 01:51:29.044803] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.114 [2024-07-23 01:51:29.044816] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:16.114 [2024-07-23 01:51:29.044846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:16.114 qpair failed and we were unable to recover it. 
00:30:16.114 [2024-07-23 01:51:29.054633] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.114 [2024-07-23 01:51:29.054772] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.114 [2024-07-23 01:51:29.054798] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.114 [2024-07-23 01:51:29.054813] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.114 [2024-07-23 01:51:29.054826] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:16.114 [2024-07-23 01:51:29.054856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:16.114 qpair failed and we were unable to recover it. 
00:30:16.114 [2024-07-23 01:51:29.064657] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.114 [2024-07-23 01:51:29.064801] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.114 [2024-07-23 01:51:29.064828] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.114 [2024-07-23 01:51:29.064843] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.114 [2024-07-23 01:51:29.064857] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:16.114 [2024-07-23 01:51:29.064887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:16.114 qpair failed and we were unable to recover it. 
00:30:16.114 [2024-07-23 01:51:29.074683] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.114 [2024-07-23 01:51:29.074826] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.114 [2024-07-23 01:51:29.074854] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.114 [2024-07-23 01:51:29.074869] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.114 [2024-07-23 01:51:29.074882] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:16.114 [2024-07-23 01:51:29.074912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:16.114 qpair failed and we were unable to recover it. 
00:30:16.114 [2024-07-23 01:51:29.084747] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.114 [2024-07-23 01:51:29.084892] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.114 [2024-07-23 01:51:29.084920] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.114 [2024-07-23 01:51:29.084935] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.114 [2024-07-23 01:51:29.084952] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:16.114 [2024-07-23 01:51:29.084997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:16.115 qpair failed and we were unable to recover it. 
00:30:16.115 [2024-07-23 01:51:29.094794] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.115 [2024-07-23 01:51:29.094985] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.115 [2024-07-23 01:51:29.095012] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.115 [2024-07-23 01:51:29.095027] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.115 [2024-07-23 01:51:29.095041] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:16.115 [2024-07-23 01:51:29.095082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:16.115 qpair failed and we were unable to recover it. 
00:30:16.115 [2024-07-23 01:51:29.104814] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.115 [2024-07-23 01:51:29.105009] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.115 [2024-07-23 01:51:29.105055] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.115 [2024-07-23 01:51:29.105071] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.115 [2024-07-23 01:51:29.105083] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:16.115 [2024-07-23 01:51:29.105126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:16.115 qpair failed and we were unable to recover it. 
00:30:16.115 [2024-07-23 01:51:29.114796] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.115 [2024-07-23 01:51:29.114938] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.115 [2024-07-23 01:51:29.114966] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.115 [2024-07-23 01:51:29.114981] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.115 [2024-07-23 01:51:29.114995] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:16.115 [2024-07-23 01:51:29.115024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:16.115 qpair failed and we were unable to recover it. 
00:30:16.115 [2024-07-23 01:51:29.124853] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.115 [2024-07-23 01:51:29.124994] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.115 [2024-07-23 01:51:29.125021] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.115 [2024-07-23 01:51:29.125037] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.115 [2024-07-23 01:51:29.125050] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:16.115 [2024-07-23 01:51:29.125091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:16.115 qpair failed and we were unable to recover it. 
00:30:16.115 [2024-07-23 01:51:29.134949] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.115 [2024-07-23 01:51:29.135090] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.115 [2024-07-23 01:51:29.135117] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.115 [2024-07-23 01:51:29.135132] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.115 [2024-07-23 01:51:29.135145] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:16.115 [2024-07-23 01:51:29.135175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:16.115 qpair failed and we were unable to recover it. 
00:30:16.115 [2024-07-23 01:51:29.144911] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.115 [2024-07-23 01:51:29.145057] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.115 [2024-07-23 01:51:29.145083] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.115 [2024-07-23 01:51:29.145098] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.115 [2024-07-23 01:51:29.145111] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:16.115 [2024-07-23 01:51:29.145146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:16.115 qpair failed and we were unable to recover it. 
00:30:16.115 [2024-07-23 01:51:29.154929] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.115 [2024-07-23 01:51:29.155067] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.115 [2024-07-23 01:51:29.155094] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.115 [2024-07-23 01:51:29.155110] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.115 [2024-07-23 01:51:29.155123] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:16.115 [2024-07-23 01:51:29.155153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:16.115 qpair failed and we were unable to recover it. 
00:30:16.115 [2024-07-23 01:51:29.164952] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.115 [2024-07-23 01:51:29.165110] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.115 [2024-07-23 01:51:29.165137] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.115 [2024-07-23 01:51:29.165156] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.115 [2024-07-23 01:51:29.165184] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:16.115 [2024-07-23 01:51:29.165214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:16.115 qpair failed and we were unable to recover it. 
00:30:16.115 [2024-07-23 01:51:29.174986] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.115 [2024-07-23 01:51:29.175143] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.115 [2024-07-23 01:51:29.175170] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.115 [2024-07-23 01:51:29.175185] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.115 [2024-07-23 01:51:29.175198] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:16.115 [2024-07-23 01:51:29.175228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:16.115 qpair failed and we were unable to recover it. 
00:30:16.115 [2024-07-23 01:51:29.185059] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.115 [2024-07-23 01:51:29.185207] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.115 [2024-07-23 01:51:29.185236] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.115 [2024-07-23 01:51:29.185255] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.115 [2024-07-23 01:51:29.185269] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:16.115 [2024-07-23 01:51:29.185315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:16.115 qpair failed and we were unable to recover it. 
00:30:16.115 [2024-07-23 01:51:29.195050] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.115 [2024-07-23 01:51:29.195184] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.115 [2024-07-23 01:51:29.195216] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.115 [2024-07-23 01:51:29.195232] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.115 [2024-07-23 01:51:29.195246] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:16.115 [2024-07-23 01:51:29.195276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:16.115 qpair failed and we were unable to recover it. 
00:30:16.115 [2024-07-23 01:51:29.205093] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.115 [2024-07-23 01:51:29.205248] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.115 [2024-07-23 01:51:29.205275] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.115 [2024-07-23 01:51:29.205290] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.115 [2024-07-23 01:51:29.205303] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:16.115 [2024-07-23 01:51:29.205332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:16.115 qpair failed and we were unable to recover it. 
00:30:16.374 [2024-07-23 01:51:29.215184] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.374 [2024-07-23 01:51:29.215331] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.374 [2024-07-23 01:51:29.215359] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.374 [2024-07-23 01:51:29.215375] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.374 [2024-07-23 01:51:29.215404] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:16.374 [2024-07-23 01:51:29.215434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:16.374 qpair failed and we were unable to recover it. 
00:30:16.374 [2024-07-23 01:51:29.225177] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.374 [2024-07-23 01:51:29.225313] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.374 [2024-07-23 01:51:29.225341] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.374 [2024-07-23 01:51:29.225357] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.374 [2024-07-23 01:51:29.225370] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:16.374 [2024-07-23 01:51:29.225400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:16.374 qpair failed and we were unable to recover it. 
00:30:16.374 [2024-07-23 01:51:29.235178] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.374 [2024-07-23 01:51:29.235317] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.374 [2024-07-23 01:51:29.235346] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.374 [2024-07-23 01:51:29.235361] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.374 [2024-07-23 01:51:29.235375] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:16.374 [2024-07-23 01:51:29.235410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:16.374 qpair failed and we were unable to recover it. 
00:30:16.374 [2024-07-23 01:51:29.245279] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.374 [2024-07-23 01:51:29.245444] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.374 [2024-07-23 01:51:29.245470] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.374 [2024-07-23 01:51:29.245485] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.374 [2024-07-23 01:51:29.245514] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:16.374 [2024-07-23 01:51:29.245544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:16.374 qpair failed and we were unable to recover it. 
00:30:16.374 [2024-07-23 01:51:29.255224] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.374 [2024-07-23 01:51:29.255367] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.374 [2024-07-23 01:51:29.255394] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.374 [2024-07-23 01:51:29.255410] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.374 [2024-07-23 01:51:29.255423] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:16.374 [2024-07-23 01:51:29.255453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:16.374 qpair failed and we were unable to recover it. 
00:30:16.374 [2024-07-23 01:51:29.265284] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.374 [2024-07-23 01:51:29.265424] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.375 [2024-07-23 01:51:29.265452] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.375 [2024-07-23 01:51:29.265467] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.375 [2024-07-23 01:51:29.265480] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:16.375 [2024-07-23 01:51:29.265526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:16.375 qpair failed and we were unable to recover it. 
00:30:16.375 [2024-07-23 01:51:29.275303] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.375 [2024-07-23 01:51:29.275463] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.375 [2024-07-23 01:51:29.275491] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.375 [2024-07-23 01:51:29.275510] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.375 [2024-07-23 01:51:29.275524] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:16.375 [2024-07-23 01:51:29.275556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:16.375 qpair failed and we were unable to recover it. 
00:30:16.375 [2024-07-23 01:51:29.285314] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.375 [2024-07-23 01:51:29.285459] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.375 [2024-07-23 01:51:29.285490] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.375 [2024-07-23 01:51:29.285506] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.375 [2024-07-23 01:51:29.285520] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:16.375 [2024-07-23 01:51:29.285550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:16.375 qpair failed and we were unable to recover it. 
00:30:16.375 [2024-07-23 01:51:29.295352] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.375 [2024-07-23 01:51:29.295494] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.375 [2024-07-23 01:51:29.295521] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.375 [2024-07-23 01:51:29.295537] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.375 [2024-07-23 01:51:29.295550] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:16.375 [2024-07-23 01:51:29.295580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:16.375 qpair failed and we were unable to recover it. 
00:30:16.375 [2024-07-23 01:51:29.305400] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:16.375 [2024-07-23 01:51:29.305578] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:16.375 [2024-07-23 01:51:29.305604] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:16.375 [2024-07-23 01:51:29.305627] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:16.375 [2024-07-23 01:51:29.305642] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90
00:30:16.375 [2024-07-23 01:51:29.305674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:16.375 qpair failed and we were unable to recover it.
00:30:16.375 [2024-07-23 01:51:29.315408] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:16.375 [2024-07-23 01:51:29.315547] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:16.375 [2024-07-23 01:51:29.315575] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:16.375 [2024-07-23 01:51:29.315590] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:16.375 [2024-07-23 01:51:29.315604] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90
00:30:16.375 [2024-07-23 01:51:29.315654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:16.375 qpair failed and we were unable to recover it.
00:30:16.375 [2024-07-23 01:51:29.325441] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:16.375 [2024-07-23 01:51:29.325585] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:16.375 [2024-07-23 01:51:29.325611] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:16.375 [2024-07-23 01:51:29.325635] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:16.375 [2024-07-23 01:51:29.325661] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90
00:30:16.375 [2024-07-23 01:51:29.325691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:16.375 qpair failed and we were unable to recover it.
00:30:16.375 [2024-07-23 01:51:29.335468] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:16.375 [2024-07-23 01:51:29.335608] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:16.375 [2024-07-23 01:51:29.335639] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:16.375 [2024-07-23 01:51:29.335654] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:16.375 [2024-07-23 01:51:29.335668] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90
00:30:16.375 [2024-07-23 01:51:29.335697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:16.375 qpair failed and we were unable to recover it.
00:30:16.375 [2024-07-23 01:51:29.345501] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:16.375 [2024-07-23 01:51:29.345645] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:16.375 [2024-07-23 01:51:29.345670] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:16.375 [2024-07-23 01:51:29.345685] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:16.375 [2024-07-23 01:51:29.345698] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90
00:30:16.375 [2024-07-23 01:51:29.345729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:16.375 qpair failed and we were unable to recover it.
00:30:16.375 [2024-07-23 01:51:29.355526] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:16.375 [2024-07-23 01:51:29.355671] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:16.375 [2024-07-23 01:51:29.355697] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:16.375 [2024-07-23 01:51:29.355712] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:16.375 [2024-07-23 01:51:29.355725] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90
00:30:16.375 [2024-07-23 01:51:29.355756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:16.375 qpair failed and we were unable to recover it.
00:30:16.375 [2024-07-23 01:51:29.365602] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:16.375 [2024-07-23 01:51:29.365758] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:16.375 [2024-07-23 01:51:29.365787] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:16.375 [2024-07-23 01:51:29.365803] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:16.375 [2024-07-23 01:51:29.365817] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90
00:30:16.375 [2024-07-23 01:51:29.365847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:16.375 qpair failed and we were unable to recover it.
00:30:16.375 [2024-07-23 01:51:29.375600] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:16.375 [2024-07-23 01:51:29.375772] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:16.375 [2024-07-23 01:51:29.375801] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:16.375 [2024-07-23 01:51:29.375816] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:16.375 [2024-07-23 01:51:29.375831] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90
00:30:16.375 [2024-07-23 01:51:29.375861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:16.375 qpair failed and we were unable to recover it.
00:30:16.375 [2024-07-23 01:51:29.385665] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:16.375 [2024-07-23 01:51:29.385818] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:16.375 [2024-07-23 01:51:29.385845] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:16.375 [2024-07-23 01:51:29.385861] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:16.375 [2024-07-23 01:51:29.385875] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90
00:30:16.375 [2024-07-23 01:51:29.385905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:16.375 qpair failed and we were unable to recover it.
00:30:16.375 [2024-07-23 01:51:29.395692] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:16.376 [2024-07-23 01:51:29.395829] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:16.376 [2024-07-23 01:51:29.395855] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:16.376 [2024-07-23 01:51:29.395870] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:16.376 [2024-07-23 01:51:29.395883] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90
00:30:16.376 [2024-07-23 01:51:29.395913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:16.376 qpair failed and we were unable to recover it.
00:30:16.376 [2024-07-23 01:51:29.405694] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:16.376 [2024-07-23 01:51:29.405835] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:16.376 [2024-07-23 01:51:29.405873] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:16.376 [2024-07-23 01:51:29.405887] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:16.376 [2024-07-23 01:51:29.405900] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90
00:30:16.376 [2024-07-23 01:51:29.405944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:16.376 qpair failed and we were unable to recover it.
00:30:16.376 [2024-07-23 01:51:29.415734] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:16.376 [2024-07-23 01:51:29.415922] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:16.376 [2024-07-23 01:51:29.415949] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:16.376 [2024-07-23 01:51:29.415964] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:16.376 [2024-07-23 01:51:29.415984] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90
00:30:16.376 [2024-07-23 01:51:29.416017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:16.376 qpair failed and we were unable to recover it.
00:30:16.376 [2024-07-23 01:51:29.425815] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:16.376 [2024-07-23 01:51:29.425995] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:16.376 [2024-07-23 01:51:29.426021] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:16.376 [2024-07-23 01:51:29.426036] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:16.376 [2024-07-23 01:51:29.426050] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90
00:30:16.376 [2024-07-23 01:51:29.426080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:16.376 qpair failed and we were unable to recover it.
00:30:16.376 [2024-07-23 01:51:29.435778] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:16.376 [2024-07-23 01:51:29.435939] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:16.376 [2024-07-23 01:51:29.435966] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:16.376 [2024-07-23 01:51:29.435981] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:16.376 [2024-07-23 01:51:29.435994] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90
00:30:16.376 [2024-07-23 01:51:29.436025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:16.376 qpair failed and we were unable to recover it.
00:30:16.376 [2024-07-23 01:51:29.445806] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:16.376 [2024-07-23 01:51:29.445948] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:16.376 [2024-07-23 01:51:29.445975] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:16.376 [2024-07-23 01:51:29.445990] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:16.376 [2024-07-23 01:51:29.446004] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90
00:30:16.376 [2024-07-23 01:51:29.446046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:16.376 qpair failed and we were unable to recover it.
00:30:16.376 [2024-07-23 01:51:29.455821] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:16.376 [2024-07-23 01:51:29.455955] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:16.376 [2024-07-23 01:51:29.455982] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:16.376 [2024-07-23 01:51:29.455997] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:16.376 [2024-07-23 01:51:29.456010] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90
00:30:16.376 [2024-07-23 01:51:29.456040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:16.376 qpair failed and we were unable to recover it.
00:30:16.376 [2024-07-23 01:51:29.465865] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:16.376 [2024-07-23 01:51:29.466003] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:16.376 [2024-07-23 01:51:29.466030] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:16.376 [2024-07-23 01:51:29.466045] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:16.376 [2024-07-23 01:51:29.466058] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90
00:30:16.376 [2024-07-23 01:51:29.466088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:16.376 qpair failed and we were unable to recover it.
00:30:16.635 [2024-07-23 01:51:29.475956] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:16.635 [2024-07-23 01:51:29.476137] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:16.635 [2024-07-23 01:51:29.476166] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:16.635 [2024-07-23 01:51:29.476183] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:16.635 [2024-07-23 01:51:29.476200] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90
00:30:16.635 [2024-07-23 01:51:29.476242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:16.635 qpair failed and we were unable to recover it.
00:30:16.635 [2024-07-23 01:51:29.485910] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:16.635 [2024-07-23 01:51:29.486053] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:16.635 [2024-07-23 01:51:29.486079] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:16.635 [2024-07-23 01:51:29.486095] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:16.635 [2024-07-23 01:51:29.486108] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90
00:30:16.635 [2024-07-23 01:51:29.486139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:16.635 qpair failed and we were unable to recover it.
00:30:16.635 [2024-07-23 01:51:29.495954] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:16.635 [2024-07-23 01:51:29.496095] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:16.635 [2024-07-23 01:51:29.496121] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:16.635 [2024-07-23 01:51:29.496136] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:16.635 [2024-07-23 01:51:29.496150] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90
00:30:16.635 [2024-07-23 01:51:29.496179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:16.635 qpair failed and we were unable to recover it.
00:30:16.635 [2024-07-23 01:51:29.506053] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:16.635 [2024-07-23 01:51:29.506205] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:16.635 [2024-07-23 01:51:29.506231] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:16.635 [2024-07-23 01:51:29.506254] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:16.635 [2024-07-23 01:51:29.506268] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90
00:30:16.635 [2024-07-23 01:51:29.506314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:16.635 qpair failed and we were unable to recover it.
00:30:16.635 [2024-07-23 01:51:29.516064] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:16.635 [2024-07-23 01:51:29.516239] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:16.635 [2024-07-23 01:51:29.516266] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:16.635 [2024-07-23 01:51:29.516281] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:16.635 [2024-07-23 01:51:29.516294] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90
00:30:16.635 [2024-07-23 01:51:29.516323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:16.635 qpair failed and we were unable to recover it.
00:30:16.635 [2024-07-23 01:51:29.526099] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:16.635 [2024-07-23 01:51:29.526249] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:16.635 [2024-07-23 01:51:29.526276] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:16.635 [2024-07-23 01:51:29.526291] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:16.635 [2024-07-23 01:51:29.526320] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90
00:30:16.635 [2024-07-23 01:51:29.526350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:16.635 qpair failed and we were unable to recover it.
00:30:16.635 [2024-07-23 01:51:29.536089] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:16.635 [2024-07-23 01:51:29.536241] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:16.635 [2024-07-23 01:51:29.536268] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:16.635 [2024-07-23 01:51:29.536283] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:16.635 [2024-07-23 01:51:29.536297] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90
00:30:16.635 [2024-07-23 01:51:29.536326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:16.635 qpair failed and we were unable to recover it.
00:30:16.635 [2024-07-23 01:51:29.546088] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:16.635 [2024-07-23 01:51:29.546229] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:16.635 [2024-07-23 01:51:29.546256] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:16.635 [2024-07-23 01:51:29.546271] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:16.635 [2024-07-23 01:51:29.546284] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90
00:30:16.635 [2024-07-23 01:51:29.546313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:16.635 qpair failed and we were unable to recover it.
00:30:16.635 [2024-07-23 01:51:29.556134] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:16.635 [2024-07-23 01:51:29.556270] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:16.635 [2024-07-23 01:51:29.556297] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:16.636 [2024-07-23 01:51:29.556313] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:16.636 [2024-07-23 01:51:29.556326] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90
00:30:16.636 [2024-07-23 01:51:29.556355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:16.636 qpair failed and we were unable to recover it.
00:30:16.636 [2024-07-23 01:51:29.566180] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:16.636 [2024-07-23 01:51:29.566324] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:16.636 [2024-07-23 01:51:29.566350] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:16.636 [2024-07-23 01:51:29.566366] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:16.636 [2024-07-23 01:51:29.566379] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90
00:30:16.636 [2024-07-23 01:51:29.566408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:16.636 qpair failed and we were unable to recover it.
00:30:16.636 [2024-07-23 01:51:29.576229] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:16.636 [2024-07-23 01:51:29.576405] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:16.636 [2024-07-23 01:51:29.576432] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:16.636 [2024-07-23 01:51:29.576462] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:16.636 [2024-07-23 01:51:29.576476] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90
00:30:16.636 [2024-07-23 01:51:29.576505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:16.636 qpair failed and we were unable to recover it.
00:30:16.636 [2024-07-23 01:51:29.586275] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:16.636 [2024-07-23 01:51:29.586464] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:16.636 [2024-07-23 01:51:29.586505] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:16.636 [2024-07-23 01:51:29.586520] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:16.636 [2024-07-23 01:51:29.586533] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90
00:30:16.636 [2024-07-23 01:51:29.586588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:16.636 qpair failed and we were unable to recover it.
00:30:16.636 [2024-07-23 01:51:29.596278] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:16.636 [2024-07-23 01:51:29.596418] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:16.636 [2024-07-23 01:51:29.596445] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:16.636 [2024-07-23 01:51:29.596465] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:16.636 [2024-07-23 01:51:29.596479] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90
00:30:16.636 [2024-07-23 01:51:29.596509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:16.636 qpair failed and we were unable to recover it.
00:30:16.636 [2024-07-23 01:51:29.606269] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:16.636 [2024-07-23 01:51:29.606424] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:16.636 [2024-07-23 01:51:29.606450] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:16.636 [2024-07-23 01:51:29.606465] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:16.636 [2024-07-23 01:51:29.606478] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90
00:30:16.636 [2024-07-23 01:51:29.606508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:16.636 qpair failed and we were unable to recover it.
00:30:16.636 [2024-07-23 01:51:29.616318] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:16.636 [2024-07-23 01:51:29.616474] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:16.636 [2024-07-23 01:51:29.616500] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:16.636 [2024-07-23 01:51:29.616514] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:16.636 [2024-07-23 01:51:29.616526] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90
00:30:16.636 [2024-07-23 01:51:29.616570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:16.636 qpair failed and we were unable to recover it.
00:30:16.636 [2024-07-23 01:51:29.626381] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:16.636 [2024-07-23 01:51:29.626553] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:16.636 [2024-07-23 01:51:29.626580] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:16.636 [2024-07-23 01:51:29.626596] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:16.636 [2024-07-23 01:51:29.626610] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90
00:30:16.636 [2024-07-23 01:51:29.626647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:16.636 qpair failed and we were unable to recover it.
00:30:16.636 [2024-07-23 01:51:29.636374] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:16.636 [2024-07-23 01:51:29.636542] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:16.636 [2024-07-23 01:51:29.636569] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:16.636 [2024-07-23 01:51:29.636585] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:16.636 [2024-07-23 01:51:29.636598] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90
00:30:16.636 [2024-07-23 01:51:29.636633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:16.636 qpair failed and we were unable to recover it.
00:30:16.636 [2024-07-23 01:51:29.646384] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:16.636 [2024-07-23 01:51:29.646525] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:16.636 [2024-07-23 01:51:29.646551] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:16.636 [2024-07-23 01:51:29.646566] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:16.636 [2024-07-23 01:51:29.646579] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90
00:30:16.636 [2024-07-23 01:51:29.646609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:16.636 qpair failed and we were unable to recover it.
00:30:16.636 [2024-07-23 01:51:29.656396] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:16.636 [2024-07-23 01:51:29.656539] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:16.636 [2024-07-23 01:51:29.656566] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:16.636 [2024-07-23 01:51:29.656581] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:16.636 [2024-07-23 01:51:29.656594] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90
00:30:16.636 [2024-07-23 01:51:29.656644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:16.636 qpair failed and we were unable to recover it.
00:30:16.636 [2024-07-23 01:51:29.666484] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.636 [2024-07-23 01:51:29.666640] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.636 [2024-07-23 01:51:29.666669] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.636 [2024-07-23 01:51:29.666684] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.636 [2024-07-23 01:51:29.666697] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:16.636 [2024-07-23 01:51:29.666727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:16.636 qpair failed and we were unable to recover it. 
00:30:16.636 [2024-07-23 01:51:29.676610] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.636 [2024-07-23 01:51:29.676759] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.636 [2024-07-23 01:51:29.676785] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.636 [2024-07-23 01:51:29.676800] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.636 [2024-07-23 01:51:29.676813] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:16.636 [2024-07-23 01:51:29.676843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:16.636 qpair failed and we were unable to recover it. 
00:30:16.636 [2024-07-23 01:51:29.686506] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.636 [2024-07-23 01:51:29.686667] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.637 [2024-07-23 01:51:29.686699] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.637 [2024-07-23 01:51:29.686714] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.637 [2024-07-23 01:51:29.686727] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:16.637 [2024-07-23 01:51:29.686756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:16.637 qpair failed and we were unable to recover it. 
00:30:16.637 [2024-07-23 01:51:29.696535] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.637 [2024-07-23 01:51:29.696684] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.637 [2024-07-23 01:51:29.696710] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.637 [2024-07-23 01:51:29.696725] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.637 [2024-07-23 01:51:29.696738] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:16.637 [2024-07-23 01:51:29.696767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:16.637 qpair failed and we were unable to recover it. 
00:30:16.637 [2024-07-23 01:51:29.706550] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.637 [2024-07-23 01:51:29.706721] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.637 [2024-07-23 01:51:29.706748] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.637 [2024-07-23 01:51:29.706763] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.637 [2024-07-23 01:51:29.706776] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:16.637 [2024-07-23 01:51:29.706806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:16.637 qpair failed and we were unable to recover it. 
00:30:16.637 [2024-07-23 01:51:29.716594] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.637 [2024-07-23 01:51:29.716805] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.637 [2024-07-23 01:51:29.716836] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.637 [2024-07-23 01:51:29.716853] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.637 [2024-07-23 01:51:29.716871] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:16.637 [2024-07-23 01:51:29.716902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:16.637 qpair failed and we were unable to recover it. 
00:30:16.637 [2024-07-23 01:51:29.726661] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.637 [2024-07-23 01:51:29.726802] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.637 [2024-07-23 01:51:29.726832] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.637 [2024-07-23 01:51:29.726846] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.637 [2024-07-23 01:51:29.726859] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:16.637 [2024-07-23 01:51:29.726899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:16.637 qpair failed and we were unable to recover it. 
00:30:16.895 [2024-07-23 01:51:29.736652] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.895 [2024-07-23 01:51:29.736813] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.895 [2024-07-23 01:51:29.736839] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.895 [2024-07-23 01:51:29.736854] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.895 [2024-07-23 01:51:29.736867] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:16.895 [2024-07-23 01:51:29.736897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:16.895 qpair failed and we were unable to recover it. 
00:30:16.895 [2024-07-23 01:51:29.746653] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.895 [2024-07-23 01:51:29.746804] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.895 [2024-07-23 01:51:29.746829] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.895 [2024-07-23 01:51:29.746844] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.895 [2024-07-23 01:51:29.746857] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:16.895 [2024-07-23 01:51:29.746898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:16.895 qpair failed and we were unable to recover it. 
00:30:16.895 [2024-07-23 01:51:29.756727] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.895 [2024-07-23 01:51:29.756886] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.895 [2024-07-23 01:51:29.756913] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.895 [2024-07-23 01:51:29.756927] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.895 [2024-07-23 01:51:29.756940] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:16.895 [2024-07-23 01:51:29.756970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:16.895 qpair failed and we were unable to recover it. 
00:30:16.895 [2024-07-23 01:51:29.766772] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.895 [2024-07-23 01:51:29.766923] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.895 [2024-07-23 01:51:29.766949] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.895 [2024-07-23 01:51:29.766964] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.895 [2024-07-23 01:51:29.766976] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:16.895 [2024-07-23 01:51:29.767006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:16.895 qpair failed and we were unable to recover it. 
00:30:16.895 [2024-07-23 01:51:29.776791] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.895 [2024-07-23 01:51:29.776938] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.895 [2024-07-23 01:51:29.776971] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.895 [2024-07-23 01:51:29.776986] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.896 [2024-07-23 01:51:29.776999] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:16.896 [2024-07-23 01:51:29.777028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:16.896 qpair failed and we were unable to recover it. 
00:30:16.896 [2024-07-23 01:51:29.786881] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.896 [2024-07-23 01:51:29.787030] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.896 [2024-07-23 01:51:29.787057] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.896 [2024-07-23 01:51:29.787072] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.896 [2024-07-23 01:51:29.787085] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:16.896 [2024-07-23 01:51:29.787116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:16.896 qpair failed and we were unable to recover it. 
00:30:16.896 [2024-07-23 01:51:29.796902] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.896 [2024-07-23 01:51:29.797045] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.896 [2024-07-23 01:51:29.797072] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.896 [2024-07-23 01:51:29.797088] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.896 [2024-07-23 01:51:29.797101] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:16.896 [2024-07-23 01:51:29.797131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:16.896 qpair failed and we were unable to recover it. 
00:30:16.896 [2024-07-23 01:51:29.806941] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.896 [2024-07-23 01:51:29.807090] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.896 [2024-07-23 01:51:29.807118] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.896 [2024-07-23 01:51:29.807134] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.896 [2024-07-23 01:51:29.807152] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:16.896 [2024-07-23 01:51:29.807197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:16.896 qpair failed and we were unable to recover it. 
00:30:16.896 [2024-07-23 01:51:29.816953] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.896 [2024-07-23 01:51:29.817095] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.896 [2024-07-23 01:51:29.817122] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.896 [2024-07-23 01:51:29.817138] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.896 [2024-07-23 01:51:29.817157] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:16.896 [2024-07-23 01:51:29.817200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:16.896 qpair failed and we were unable to recover it. 
00:30:16.896 [2024-07-23 01:51:29.826972] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.896 [2024-07-23 01:51:29.827119] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.896 [2024-07-23 01:51:29.827146] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.896 [2024-07-23 01:51:29.827162] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.896 [2024-07-23 01:51:29.827175] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:16.896 [2024-07-23 01:51:29.827220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:16.896 qpair failed and we were unable to recover it. 
00:30:16.896 [2024-07-23 01:51:29.836927] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.896 [2024-07-23 01:51:29.837075] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.896 [2024-07-23 01:51:29.837101] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.896 [2024-07-23 01:51:29.837116] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.896 [2024-07-23 01:51:29.837130] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:16.896 [2024-07-23 01:51:29.837160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:16.896 qpair failed and we were unable to recover it. 
00:30:16.896 [2024-07-23 01:51:29.846986] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.896 [2024-07-23 01:51:29.847128] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.896 [2024-07-23 01:51:29.847154] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.896 [2024-07-23 01:51:29.847169] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.896 [2024-07-23 01:51:29.847182] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:16.896 [2024-07-23 01:51:29.847212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:16.896 qpair failed and we were unable to recover it. 
00:30:16.896 [2024-07-23 01:51:29.856985] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.896 [2024-07-23 01:51:29.857123] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.896 [2024-07-23 01:51:29.857149] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.896 [2024-07-23 01:51:29.857163] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.896 [2024-07-23 01:51:29.857176] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:16.896 [2024-07-23 01:51:29.857205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:16.896 qpair failed and we were unable to recover it. 
00:30:16.896 [2024-07-23 01:51:29.867000] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.896 [2024-07-23 01:51:29.867146] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.896 [2024-07-23 01:51:29.867172] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.896 [2024-07-23 01:51:29.867187] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.896 [2024-07-23 01:51:29.867200] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:16.896 [2024-07-23 01:51:29.867230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:16.896 qpair failed and we were unable to recover it. 
00:30:16.896 [2024-07-23 01:51:29.877069] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.896 [2024-07-23 01:51:29.877208] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.896 [2024-07-23 01:51:29.877234] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.896 [2024-07-23 01:51:29.877248] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.896 [2024-07-23 01:51:29.877261] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:16.896 [2024-07-23 01:51:29.877290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:16.896 qpair failed and we were unable to recover it. 
00:30:16.896 [2024-07-23 01:51:29.887058] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.896 [2024-07-23 01:51:29.887199] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.896 [2024-07-23 01:51:29.887225] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.896 [2024-07-23 01:51:29.887239] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.896 [2024-07-23 01:51:29.887251] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:16.896 [2024-07-23 01:51:29.887281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:16.896 qpair failed and we were unable to recover it. 
00:30:16.896 [2024-07-23 01:51:29.897104] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.896 [2024-07-23 01:51:29.897244] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.896 [2024-07-23 01:51:29.897269] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.896 [2024-07-23 01:51:29.897283] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.896 [2024-07-23 01:51:29.897296] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:16.896 [2024-07-23 01:51:29.897326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:16.896 qpair failed and we were unable to recover it. 
00:30:16.896 [2024-07-23 01:51:29.907180] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.896 [2024-07-23 01:51:29.907333] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.896 [2024-07-23 01:51:29.907359] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.896 [2024-07-23 01:51:29.907374] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.896 [2024-07-23 01:51:29.907407] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:16.896 [2024-07-23 01:51:29.907448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:16.896 qpair failed and we were unable to recover it. 
00:30:16.896 [2024-07-23 01:51:29.917169] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.896 [2024-07-23 01:51:29.917331] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.897 [2024-07-23 01:51:29.917357] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.897 [2024-07-23 01:51:29.917372] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.897 [2024-07-23 01:51:29.917385] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:16.897 [2024-07-23 01:51:29.917426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:16.897 qpair failed and we were unable to recover it. 
00:30:16.897 [2024-07-23 01:51:29.927230] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.897 [2024-07-23 01:51:29.927371] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.897 [2024-07-23 01:51:29.927396] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.897 [2024-07-23 01:51:29.927411] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.897 [2024-07-23 01:51:29.927424] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:16.897 [2024-07-23 01:51:29.927453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:16.897 qpair failed and we were unable to recover it. 
00:30:16.897 [2024-07-23 01:51:29.937211] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.897 [2024-07-23 01:51:29.937352] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.897 [2024-07-23 01:51:29.937379] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.897 [2024-07-23 01:51:29.937393] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.897 [2024-07-23 01:51:29.937406] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:16.897 [2024-07-23 01:51:29.937436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:16.897 qpair failed and we were unable to recover it. 
00:30:16.897 [2024-07-23 01:51:29.947226] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.897 [2024-07-23 01:51:29.947367] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.897 [2024-07-23 01:51:29.947392] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.897 [2024-07-23 01:51:29.947407] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.897 [2024-07-23 01:51:29.947420] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:16.897 [2024-07-23 01:51:29.947449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:16.897 qpair failed and we were unable to recover it. 
00:30:16.897 [2024-07-23 01:51:29.957248] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.897 [2024-07-23 01:51:29.957384] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.897 [2024-07-23 01:51:29.957411] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.897 [2024-07-23 01:51:29.957425] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.897 [2024-07-23 01:51:29.957438] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:16.897 [2024-07-23 01:51:29.957468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:16.897 qpair failed and we were unable to recover it. 
00:30:16.897 [2024-07-23 01:51:29.967300] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.897 [2024-07-23 01:51:29.967491] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.897 [2024-07-23 01:51:29.967516] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.897 [2024-07-23 01:51:29.967531] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.897 [2024-07-23 01:51:29.967543] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:16.897 [2024-07-23 01:51:29.967573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:16.897 qpair failed and we were unable to recover it. 
00:30:16.897 [2024-07-23 01:51:29.977322] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.897 [2024-07-23 01:51:29.977463] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.897 [2024-07-23 01:51:29.977489] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.897 [2024-07-23 01:51:29.977503] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.897 [2024-07-23 01:51:29.977516] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:16.897 [2024-07-23 01:51:29.977561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:16.897 qpair failed and we were unable to recover it. 
00:30:16.897 [2024-07-23 01:51:29.987342] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.897 [2024-07-23 01:51:29.987494] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.897 [2024-07-23 01:51:29.987520] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.897 [2024-07-23 01:51:29.987535] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.897 [2024-07-23 01:51:29.987548] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:16.897 [2024-07-23 01:51:29.987593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:16.897 qpair failed and we were unable to recover it. 
00:30:17.155 [2024-07-23 01:51:29.997391] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.156 [2024-07-23 01:51:29.997585] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.156 [2024-07-23 01:51:29.997611] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.156 [2024-07-23 01:51:29.997641] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.156 [2024-07-23 01:51:29.997655] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:17.156 [2024-07-23 01:51:29.997685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:17.156 qpair failed and we were unable to recover it. 
00:30:17.156 [2024-07-23 01:51:30.007566] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.156 [2024-07-23 01:51:30.007753] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.156 [2024-07-23 01:51:30.007783] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.156 [2024-07-23 01:51:30.007799] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.156 [2024-07-23 01:51:30.007812] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:17.156 [2024-07-23 01:51:30.007847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:17.156 qpair failed and we were unable to recover it. 
00:30:17.156 [2024-07-23 01:51:30.017452] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.156 [2024-07-23 01:51:30.017592] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.156 [2024-07-23 01:51:30.017628] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.156 [2024-07-23 01:51:30.017644] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.156 [2024-07-23 01:51:30.017658] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:17.156 [2024-07-23 01:51:30.017688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:17.156 qpair failed and we were unable to recover it. 
00:30:17.156 [2024-07-23 01:51:30.027464] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.156 [2024-07-23 01:51:30.027602] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.156 [2024-07-23 01:51:30.027636] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.156 [2024-07-23 01:51:30.027652] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.156 [2024-07-23 01:51:30.027665] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:17.156 [2024-07-23 01:51:30.027695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:17.156 qpair failed and we were unable to recover it. 
00:30:17.156 [2024-07-23 01:51:30.037569] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.156 [2024-07-23 01:51:30.037724] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.156 [2024-07-23 01:51:30.037750] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.156 [2024-07-23 01:51:30.037764] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.156 [2024-07-23 01:51:30.037777] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:17.156 [2024-07-23 01:51:30.037807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:17.156 qpair failed and we were unable to recover it. 
00:30:17.156 [2024-07-23 01:51:30.047678] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.156 [2024-07-23 01:51:30.047828] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.156 [2024-07-23 01:51:30.047858] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.156 [2024-07-23 01:51:30.047874] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.156 [2024-07-23 01:51:30.047902] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:17.156 [2024-07-23 01:51:30.047932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:17.156 qpair failed and we were unable to recover it. 
00:30:17.156 [2024-07-23 01:51:30.057593] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.156 [2024-07-23 01:51:30.057777] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.156 [2024-07-23 01:51:30.057804] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.156 [2024-07-23 01:51:30.057818] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.156 [2024-07-23 01:51:30.057831] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:17.156 [2024-07-23 01:51:30.057860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:17.156 qpair failed and we were unable to recover it. 
00:30:17.156 [2024-07-23 01:51:30.067625] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.156 [2024-07-23 01:51:30.067800] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.156 [2024-07-23 01:51:30.067825] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.156 [2024-07-23 01:51:30.067840] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.156 [2024-07-23 01:51:30.067852] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:17.156 [2024-07-23 01:51:30.067881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:17.156 qpair failed and we were unable to recover it. 
00:30:17.156 [2024-07-23 01:51:30.077656] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.156 [2024-07-23 01:51:30.077797] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.156 [2024-07-23 01:51:30.077822] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.156 [2024-07-23 01:51:30.077836] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.156 [2024-07-23 01:51:30.077849] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:17.156 [2024-07-23 01:51:30.077879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:17.156 qpair failed and we were unable to recover it. 
00:30:17.156 [2024-07-23 01:51:30.087680] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.156 [2024-07-23 01:51:30.087822] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.156 [2024-07-23 01:51:30.087848] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.156 [2024-07-23 01:51:30.087868] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.156 [2024-07-23 01:51:30.087882] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:17.156 [2024-07-23 01:51:30.087923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:17.156 qpair failed and we were unable to recover it. 
00:30:17.156 [2024-07-23 01:51:30.097683] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.156 [2024-07-23 01:51:30.097826] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.156 [2024-07-23 01:51:30.097852] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.156 [2024-07-23 01:51:30.097866] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.156 [2024-07-23 01:51:30.097879] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:17.156 [2024-07-23 01:51:30.097909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:17.156 qpair failed and we were unable to recover it. 
00:30:17.156 [2024-07-23 01:51:30.107752] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.156 [2024-07-23 01:51:30.107912] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.156 [2024-07-23 01:51:30.107937] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.156 [2024-07-23 01:51:30.107952] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.156 [2024-07-23 01:51:30.107965] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:17.156 [2024-07-23 01:51:30.107994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:17.156 qpair failed and we were unable to recover it. 
00:30:17.156 [2024-07-23 01:51:30.117737] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.156 [2024-07-23 01:51:30.117882] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.156 [2024-07-23 01:51:30.117908] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.156 [2024-07-23 01:51:30.117923] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.156 [2024-07-23 01:51:30.117935] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:17.156 [2024-07-23 01:51:30.117964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:17.156 qpair failed and we were unable to recover it. 
00:30:17.157 [2024-07-23 01:51:30.127803] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.157 [2024-07-23 01:51:30.127981] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.157 [2024-07-23 01:51:30.128007] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.157 [2024-07-23 01:51:30.128023] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.157 [2024-07-23 01:51:30.128039] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:17.157 [2024-07-23 01:51:30.128071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:17.157 qpair failed and we were unable to recover it. 
00:30:17.157 [2024-07-23 01:51:30.137827] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.157 [2024-07-23 01:51:30.137969] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.157 [2024-07-23 01:51:30.137995] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.157 [2024-07-23 01:51:30.138010] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.157 [2024-07-23 01:51:30.138023] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:17.157 [2024-07-23 01:51:30.138053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:17.157 qpair failed and we were unable to recover it. 
00:30:17.157 [2024-07-23 01:51:30.147983] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.157 [2024-07-23 01:51:30.148147] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.157 [2024-07-23 01:51:30.148172] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.157 [2024-07-23 01:51:30.148187] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.157 [2024-07-23 01:51:30.148215] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:17.157 [2024-07-23 01:51:30.148245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:17.157 qpair failed and we were unable to recover it. 
00:30:17.157 [2024-07-23 01:51:30.157923] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.157 [2024-07-23 01:51:30.158058] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.157 [2024-07-23 01:51:30.158085] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.157 [2024-07-23 01:51:30.158099] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.157 [2024-07-23 01:51:30.158112] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:17.157 [2024-07-23 01:51:30.158142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:17.157 qpair failed and we were unable to recover it. 
00:30:17.157 [2024-07-23 01:51:30.167933] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.157 [2024-07-23 01:51:30.168085] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.157 [2024-07-23 01:51:30.168111] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.157 [2024-07-23 01:51:30.168126] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.157 [2024-07-23 01:51:30.168138] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:17.157 [2024-07-23 01:51:30.168168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:17.157 qpair failed and we were unable to recover it. 
00:30:17.157 [2024-07-23 01:51:30.177939] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.157 [2024-07-23 01:51:30.178079] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.157 [2024-07-23 01:51:30.178110] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.157 [2024-07-23 01:51:30.178126] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.157 [2024-07-23 01:51:30.178139] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:17.157 [2024-07-23 01:51:30.178181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:17.157 qpair failed and we were unable to recover it. 
00:30:17.157 [2024-07-23 01:51:30.188074] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.157 [2024-07-23 01:51:30.188234] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.157 [2024-07-23 01:51:30.188259] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.157 [2024-07-23 01:51:30.188274] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.157 [2024-07-23 01:51:30.188301] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:17.157 [2024-07-23 01:51:30.188330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:17.157 qpair failed and we were unable to recover it. 
00:30:17.157 [2024-07-23 01:51:30.198006] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.157 [2024-07-23 01:51:30.198182] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.157 [2024-07-23 01:51:30.198208] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.157 [2024-07-23 01:51:30.198222] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.157 [2024-07-23 01:51:30.198235] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:17.157 [2024-07-23 01:51:30.198275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:17.157 qpair failed and we were unable to recover it. 
00:30:17.157 [2024-07-23 01:51:30.208087] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.157 [2024-07-23 01:51:30.208247] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.157 [2024-07-23 01:51:30.208273] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.157 [2024-07-23 01:51:30.208288] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.157 [2024-07-23 01:51:30.208317] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:17.157 [2024-07-23 01:51:30.208371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:17.157 qpair failed and we were unable to recover it. 
00:30:17.157 [2024-07-23 01:51:30.218060] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.157 [2024-07-23 01:51:30.218201] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.157 [2024-07-23 01:51:30.218226] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.157 [2024-07-23 01:51:30.218241] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.157 [2024-07-23 01:51:30.218254] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:17.157 [2024-07-23 01:51:30.218291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:17.157 qpair failed and we were unable to recover it. 
00:30:17.157 [2024-07-23 01:51:30.228103] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.157 [2024-07-23 01:51:30.228257] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.157 [2024-07-23 01:51:30.228283] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.157 [2024-07-23 01:51:30.228298] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.157 [2024-07-23 01:51:30.228310] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:17.157 [2024-07-23 01:51:30.228340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:17.157 qpair failed and we were unable to recover it. 
00:30:17.157 [2024-07-23 01:51:30.238113] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.157 [2024-07-23 01:51:30.238255] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.157 [2024-07-23 01:51:30.238281] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.157 [2024-07-23 01:51:30.238295] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.157 [2024-07-23 01:51:30.238309] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:17.157 [2024-07-23 01:51:30.238350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:17.157 qpair failed and we were unable to recover it. 
00:30:17.157 [2024-07-23 01:51:30.248204] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.157 [2024-07-23 01:51:30.248364] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.157 [2024-07-23 01:51:30.248389] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.157 [2024-07-23 01:51:30.248419] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.157 [2024-07-23 01:51:30.248431] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:17.158 [2024-07-23 01:51:30.248475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:17.158 qpair failed and we were unable to recover it. 
00:30:17.416 [2024-07-23 01:51:30.258178] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.416 [2024-07-23 01:51:30.258335] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.416 [2024-07-23 01:51:30.258361] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.416 [2024-07-23 01:51:30.258375] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.416 [2024-07-23 01:51:30.258388] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:17.416 [2024-07-23 01:51:30.258417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:17.416 qpair failed and we were unable to recover it. 
00:30:17.416 [2024-07-23 01:51:30.268212] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.416 [2024-07-23 01:51:30.268351] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.416 [2024-07-23 01:51:30.268382] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.416 [2024-07-23 01:51:30.268398] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.416 [2024-07-23 01:51:30.268411] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:17.416 [2024-07-23 01:51:30.268440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:17.416 qpair failed and we were unable to recover it. 
00:30:17.416 [2024-07-23 01:51:30.278274] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.416 [2024-07-23 01:51:30.278474] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.416 [2024-07-23 01:51:30.278499] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.416 [2024-07-23 01:51:30.278514] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.416 [2024-07-23 01:51:30.278528] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:17.416 [2024-07-23 01:51:30.278557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:17.416 qpair failed and we were unable to recover it. 
00:30:17.416 [2024-07-23 01:51:30.288287] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.416 [2024-07-23 01:51:30.288431] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.416 [2024-07-23 01:51:30.288457] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.416 [2024-07-23 01:51:30.288471] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.416 [2024-07-23 01:51:30.288484] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:17.416 [2024-07-23 01:51:30.288527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:17.416 qpair failed and we were unable to recover it. 
00:30:17.416 [2024-07-23 01:51:30.298293] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.416 [2024-07-23 01:51:30.298441] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.416 [2024-07-23 01:51:30.298468] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.416 [2024-07-23 01:51:30.298483] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.416 [2024-07-23 01:51:30.298495] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:17.416 [2024-07-23 01:51:30.298540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:17.416 qpair failed and we were unable to recover it. 
00:30:17.416 [2024-07-23 01:51:30.308353] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.417 [2024-07-23 01:51:30.308490] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.417 [2024-07-23 01:51:30.308515] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.417 [2024-07-23 01:51:30.308529] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.417 [2024-07-23 01:51:30.308542] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:17.417 [2024-07-23 01:51:30.308577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:17.417 qpair failed and we were unable to recover it. 
00:30:17.417 [2024-07-23 01:51:30.318391] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.417 [2024-07-23 01:51:30.318531] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.417 [2024-07-23 01:51:30.318558] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.417 [2024-07-23 01:51:30.318573] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.417 [2024-07-23 01:51:30.318586] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:17.417 [2024-07-23 01:51:30.318624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:17.417 qpair failed and we were unable to recover it. 
00:30:17.417 [2024-07-23 01:51:30.328375] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.417 [2024-07-23 01:51:30.328520] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.417 [2024-07-23 01:51:30.328546] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.417 [2024-07-23 01:51:30.328561] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.417 [2024-07-23 01:51:30.328575] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:17.417 [2024-07-23 01:51:30.328605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:17.417 qpair failed and we were unable to recover it. 
00:30:17.417 [2024-07-23 01:51:30.338425] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.417 [2024-07-23 01:51:30.338571] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.417 [2024-07-23 01:51:30.338597] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.417 [2024-07-23 01:51:30.338611] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.417 [2024-07-23 01:51:30.338633] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:17.417 [2024-07-23 01:51:30.338675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:17.417 qpair failed and we were unable to recover it. 
00:30:17.417 [2024-07-23 01:51:30.348477] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.417 [2024-07-23 01:51:30.348650] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.417 [2024-07-23 01:51:30.348677] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.417 [2024-07-23 01:51:30.348695] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.417 [2024-07-23 01:51:30.348710] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:17.417 [2024-07-23 01:51:30.348741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:17.417 qpair failed and we were unable to recover it. 
00:30:17.417 [2024-07-23 01:51:30.358521] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.417 [2024-07-23 01:51:30.358701] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.417 [2024-07-23 01:51:30.358733] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.417 [2024-07-23 01:51:30.358748] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.417 [2024-07-23 01:51:30.358762] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:17.417 [2024-07-23 01:51:30.358793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:17.417 qpair failed and we were unable to recover it. 
00:30:17.417 [2024-07-23 01:51:30.368584] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.417 [2024-07-23 01:51:30.368734] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.417 [2024-07-23 01:51:30.368760] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.417 [2024-07-23 01:51:30.368775] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.417 [2024-07-23 01:51:30.368790] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:17.417 [2024-07-23 01:51:30.368820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:17.417 qpair failed and we were unable to recover it. 
00:30:17.417 [2024-07-23 01:51:30.378531] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.417 [2024-07-23 01:51:30.378686] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.417 [2024-07-23 01:51:30.378713] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.417 [2024-07-23 01:51:30.378728] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.417 [2024-07-23 01:51:30.378742] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:17.417 [2024-07-23 01:51:30.378772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:17.417 qpair failed and we were unable to recover it. 
00:30:17.417 [2024-07-23 01:51:30.388555] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.417 [2024-07-23 01:51:30.388702] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.417 [2024-07-23 01:51:30.388729] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.417 [2024-07-23 01:51:30.388744] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.417 [2024-07-23 01:51:30.388757] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:17.417 [2024-07-23 01:51:30.388787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:17.417 qpair failed and we were unable to recover it. 
00:30:17.417 [2024-07-23 01:51:30.398595] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.417 [2024-07-23 01:51:30.398739] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.417 [2024-07-23 01:51:30.398765] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.417 [2024-07-23 01:51:30.398781] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.417 [2024-07-23 01:51:30.398815] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:17.417 [2024-07-23 01:51:30.398845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:17.417 qpair failed and we were unable to recover it. 
00:30:17.417 [2024-07-23 01:51:30.408626] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.417 [2024-07-23 01:51:30.408774] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.417 [2024-07-23 01:51:30.408800] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.417 [2024-07-23 01:51:30.408815] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.417 [2024-07-23 01:51:30.408829] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:17.417 [2024-07-23 01:51:30.408871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:17.417 qpair failed and we were unable to recover it. 
00:30:17.417 [2024-07-23 01:51:30.418735] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.417 [2024-07-23 01:51:30.418879] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.417 [2024-07-23 01:51:30.418905] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.417 [2024-07-23 01:51:30.418919] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.417 [2024-07-23 01:51:30.418932] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:17.417 [2024-07-23 01:51:30.418977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:17.417 qpair failed and we were unable to recover it. 
00:30:17.417 [2024-07-23 01:51:30.428697] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.417 [2024-07-23 01:51:30.428881] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.417 [2024-07-23 01:51:30.428907] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.417 [2024-07-23 01:51:30.428922] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.417 [2024-07-23 01:51:30.428937] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:17.417 [2024-07-23 01:51:30.428966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:17.417 qpair failed and we were unable to recover it. 
00:30:17.417 [2024-07-23 01:51:30.438728] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.418 [2024-07-23 01:51:30.438866] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.418 [2024-07-23 01:51:30.438892] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.418 [2024-07-23 01:51:30.438907] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.418 [2024-07-23 01:51:30.438921] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:17.418 [2024-07-23 01:51:30.438951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:17.418 qpair failed and we were unable to recover it. 
00:30:17.418 [2024-07-23 01:51:30.448781] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.418 [2024-07-23 01:51:30.448959] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.418 [2024-07-23 01:51:30.448984] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.418 [2024-07-23 01:51:30.448999] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.418 [2024-07-23 01:51:30.449013] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:17.418 [2024-07-23 01:51:30.449044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:17.418 qpair failed and we were unable to recover it. 
00:30:17.418 [2024-07-23 01:51:30.458791] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.418 [2024-07-23 01:51:30.458943] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.418 [2024-07-23 01:51:30.458968] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.418 [2024-07-23 01:51:30.458983] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.418 [2024-07-23 01:51:30.458997] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:17.418 [2024-07-23 01:51:30.459041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:17.418 qpair failed and we were unable to recover it. 
00:30:17.418 [2024-07-23 01:51:30.468883] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.418 [2024-07-23 01:51:30.469021] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.418 [2024-07-23 01:51:30.469046] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.418 [2024-07-23 01:51:30.469061] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.418 [2024-07-23 01:51:30.469075] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:17.418 [2024-07-23 01:51:30.469104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:17.418 qpair failed and we were unable to recover it. 
00:30:17.418 [2024-07-23 01:51:30.478833] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.418 [2024-07-23 01:51:30.478976] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.418 [2024-07-23 01:51:30.479002] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.418 [2024-07-23 01:51:30.479016] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.418 [2024-07-23 01:51:30.479033] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:17.418 [2024-07-23 01:51:30.479063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:17.418 qpair failed and we were unable to recover it. 
00:30:17.418 [2024-07-23 01:51:30.488854] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.418 [2024-07-23 01:51:30.489002] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.418 [2024-07-23 01:51:30.489028] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.418 [2024-07-23 01:51:30.489049] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.418 [2024-07-23 01:51:30.489064] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:17.418 [2024-07-23 01:51:30.489107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:17.418 qpair failed and we were unable to recover it. 
00:30:17.418 [2024-07-23 01:51:30.498865] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.418 [2024-07-23 01:51:30.499063] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.418 [2024-07-23 01:51:30.499090] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.418 [2024-07-23 01:51:30.499105] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.418 [2024-07-23 01:51:30.499119] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:17.418 [2024-07-23 01:51:30.499149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:17.418 qpair failed and we were unable to recover it. 
00:30:17.418 [2024-07-23 01:51:30.508911] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.418 [2024-07-23 01:51:30.509077] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.418 [2024-07-23 01:51:30.509104] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.418 [2024-07-23 01:51:30.509119] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.418 [2024-07-23 01:51:30.509147] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:17.418 [2024-07-23 01:51:30.509177] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:17.418 qpair failed and we were unable to recover it. 
00:30:17.676 [2024-07-23 01:51:30.518951] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.676 [2024-07-23 01:51:30.519096] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.676 [2024-07-23 01:51:30.519123] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.677 [2024-07-23 01:51:30.519137] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.677 [2024-07-23 01:51:30.519152] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:17.677 [2024-07-23 01:51:30.519181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:17.677 qpair failed and we were unable to recover it. 
00:30:17.677 [2024-07-23 01:51:30.528972] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.677 [2024-07-23 01:51:30.529121] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.677 [2024-07-23 01:51:30.529146] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.677 [2024-07-23 01:51:30.529161] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.677 [2024-07-23 01:51:30.529174] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:17.677 [2024-07-23 01:51:30.529219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:17.677 qpair failed and we were unable to recover it. 
00:30:17.677 [2024-07-23 01:51:30.538996] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.677 [2024-07-23 01:51:30.539182] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.677 [2024-07-23 01:51:30.539208] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.677 [2024-07-23 01:51:30.539223] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.677 [2024-07-23 01:51:30.539237] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:17.677 [2024-07-23 01:51:30.539266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:17.677 qpair failed and we were unable to recover it. 
00:30:17.677 [2024-07-23 01:51:30.549025] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.677 [2024-07-23 01:51:30.549169] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.677 [2024-07-23 01:51:30.549195] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.677 [2024-07-23 01:51:30.549210] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.677 [2024-07-23 01:51:30.549222] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:17.677 [2024-07-23 01:51:30.549265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:17.677 qpair failed and we were unable to recover it. 
00:30:17.677 [2024-07-23 01:51:30.559079] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.677 [2024-07-23 01:51:30.559228] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.677 [2024-07-23 01:51:30.559255] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.677 [2024-07-23 01:51:30.559269] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.677 [2024-07-23 01:51:30.559283] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:17.677 [2024-07-23 01:51:30.559314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:17.677 qpair failed and we were unable to recover it. 
00:30:17.677 [2024-07-23 01:51:30.569157] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.677 [2024-07-23 01:51:30.569351] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.677 [2024-07-23 01:51:30.569378] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.677 [2024-07-23 01:51:30.569408] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.677 [2024-07-23 01:51:30.569422] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:17.677 [2024-07-23 01:51:30.569466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:17.677 qpair failed and we were unable to recover it. 
00:30:17.677 [2024-07-23 01:51:30.579137] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.677 [2024-07-23 01:51:30.579321] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.677 [2024-07-23 01:51:30.579348] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.677 [2024-07-23 01:51:30.579385] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.677 [2024-07-23 01:51:30.579399] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:17.677 [2024-07-23 01:51:30.579444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:17.677 qpair failed and we were unable to recover it. 
00:30:17.677 [2024-07-23 01:51:30.589118] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.677 [2024-07-23 01:51:30.589260] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.677 [2024-07-23 01:51:30.589285] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.677 [2024-07-23 01:51:30.589300] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.677 [2024-07-23 01:51:30.589314] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:17.677 [2024-07-23 01:51:30.589344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:17.677 qpair failed and we were unable to recover it. 
00:30:17.677 [2024-07-23 01:51:30.599161] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.677 [2024-07-23 01:51:30.599306] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.677 [2024-07-23 01:51:30.599332] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.677 [2024-07-23 01:51:30.599347] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.677 [2024-07-23 01:51:30.599361] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:17.677 [2024-07-23 01:51:30.599391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:17.677 qpair failed and we were unable to recover it. 
00:30:17.677 [2024-07-23 01:51:30.609221] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.677 [2024-07-23 01:51:30.609392] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.677 [2024-07-23 01:51:30.609417] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.677 [2024-07-23 01:51:30.609433] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.677 [2024-07-23 01:51:30.609447] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:17.677 [2024-07-23 01:51:30.609477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:17.677 qpair failed and we were unable to recover it. 
00:30:17.677 [2024-07-23 01:51:30.619210] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.677 [2024-07-23 01:51:30.619358] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.677 [2024-07-23 01:51:30.619383] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.677 [2024-07-23 01:51:30.619397] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.677 [2024-07-23 01:51:30.619410] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:17.677 [2024-07-23 01:51:30.619438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:17.677 qpair failed and we were unable to recover it. 
00:30:17.677 [2024-07-23 01:51:30.629264] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.677 [2024-07-23 01:51:30.629404] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.677 [2024-07-23 01:51:30.629430] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.677 [2024-07-23 01:51:30.629445] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.677 [2024-07-23 01:51:30.629458] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:17.677 [2024-07-23 01:51:30.629488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:17.677 qpair failed and we were unable to recover it. 
00:30:17.677 [2024-07-23 01:51:30.639322] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.677 [2024-07-23 01:51:30.639514] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.677 [2024-07-23 01:51:30.639539] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.677 [2024-07-23 01:51:30.639554] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.677 [2024-07-23 01:51:30.639568] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:17.677 [2024-07-23 01:51:30.639597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:17.677 qpair failed and we were unable to recover it. 
00:30:17.677 [2024-07-23 01:51:30.649326] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.678 [2024-07-23 01:51:30.649484] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.678 [2024-07-23 01:51:30.649509] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.678 [2024-07-23 01:51:30.649524] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.678 [2024-07-23 01:51:30.649538] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:17.678 [2024-07-23 01:51:30.649567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:17.678 qpair failed and we were unable to recover it. 
00:30:17.678 [2024-07-23 01:51:30.659348] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.678 [2024-07-23 01:51:30.659499] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.678 [2024-07-23 01:51:30.659524] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.678 [2024-07-23 01:51:30.659539] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.678 [2024-07-23 01:51:30.659553] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:17.678 [2024-07-23 01:51:30.659597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:17.678 qpair failed and we were unable to recover it. 
00:30:17.678 [2024-07-23 01:51:30.669394] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.678 [2024-07-23 01:51:30.669541] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.678 [2024-07-23 01:51:30.669571] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.678 [2024-07-23 01:51:30.669587] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.678 [2024-07-23 01:51:30.669601] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:17.678 [2024-07-23 01:51:30.669650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:17.678 qpair failed and we were unable to recover it. 
00:30:17.678 [2024-07-23 01:51:30.679398] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.678 [2024-07-23 01:51:30.679557] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.678 [2024-07-23 01:51:30.679582] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.678 [2024-07-23 01:51:30.679597] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.678 [2024-07-23 01:51:30.679611] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:17.678 [2024-07-23 01:51:30.679650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:17.678 qpair failed and we were unable to recover it. 
00:30:17.678 [2024-07-23 01:51:30.689451] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.678 [2024-07-23 01:51:30.689652] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.678 [2024-07-23 01:51:30.689678] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.678 [2024-07-23 01:51:30.689693] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.678 [2024-07-23 01:51:30.689707] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:17.678 [2024-07-23 01:51:30.689737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:17.678 qpair failed and we were unable to recover it. 
00:30:17.678 [2024-07-23 01:51:30.699476] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.678 [2024-07-23 01:51:30.699650] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.678 [2024-07-23 01:51:30.699679] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.678 [2024-07-23 01:51:30.699695] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.678 [2024-07-23 01:51:30.699709] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:17.678 [2024-07-23 01:51:30.699740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:17.678 qpair failed and we were unable to recover it. 
00:30:17.678 [2024-07-23 01:51:30.709526] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.678 [2024-07-23 01:51:30.709723] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.678 [2024-07-23 01:51:30.709749] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.678 [2024-07-23 01:51:30.709764] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.678 [2024-07-23 01:51:30.709778] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:17.678 [2024-07-23 01:51:30.709814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:17.678 qpair failed and we were unable to recover it. 
00:30:17.678 [2024-07-23 01:51:30.719504] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.678 [2024-07-23 01:51:30.719650] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.678 [2024-07-23 01:51:30.719677] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.678 [2024-07-23 01:51:30.719691] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.678 [2024-07-23 01:51:30.719704] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:17.678 [2024-07-23 01:51:30.719735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:17.678 qpair failed and we were unable to recover it. 
00:30:17.678 [2024-07-23 01:51:30.729542] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.678 [2024-07-23 01:51:30.729697] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.678 [2024-07-23 01:51:30.729723] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.678 [2024-07-23 01:51:30.729738] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.678 [2024-07-23 01:51:30.729752] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:17.678 [2024-07-23 01:51:30.729781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:17.678 qpair failed and we were unable to recover it. 
00:30:17.678 [2024-07-23 01:51:30.739611] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.678 [2024-07-23 01:51:30.739766] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.678 [2024-07-23 01:51:30.739791] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.678 [2024-07-23 01:51:30.739805] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.678 [2024-07-23 01:51:30.739819] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:17.678 [2024-07-23 01:51:30.739849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:17.678 qpair failed and we were unable to recover it. 
00:30:17.678 [2024-07-23 01:51:30.749603] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.678 [2024-07-23 01:51:30.749755] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.678 [2024-07-23 01:51:30.749781] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.678 [2024-07-23 01:51:30.749796] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.678 [2024-07-23 01:51:30.749810] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:17.678 [2024-07-23 01:51:30.749840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:17.678 qpair failed and we were unable to recover it. 
00:30:17.678 [2024-07-23 01:51:30.759671] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.678 [2024-07-23 01:51:30.759822] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.678 [2024-07-23 01:51:30.759852] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.678 [2024-07-23 01:51:30.759868] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.678 [2024-07-23 01:51:30.759882] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:17.678 [2024-07-23 01:51:30.759912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:17.678 qpair failed and we were unable to recover it. 
00:30:17.678 [2024-07-23 01:51:30.769769] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.678 [2024-07-23 01:51:30.769917] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.678 [2024-07-23 01:51:30.769943] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.678 [2024-07-23 01:51:30.769958] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.679 [2024-07-23 01:51:30.769988] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:17.679 [2024-07-23 01:51:30.770017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:17.679 qpair failed and we were unable to recover it. 
00:30:17.937 [2024-07-23 01:51:30.779702] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.937 [2024-07-23 01:51:30.779887] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.937 [2024-07-23 01:51:30.779924] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.937 [2024-07-23 01:51:30.779940] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.937 [2024-07-23 01:51:30.779953] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:17.937 [2024-07-23 01:51:30.779984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:17.937 qpair failed and we were unable to recover it. 
00:30:17.937 [2024-07-23 01:51:30.789726] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.937 [2024-07-23 01:51:30.789867] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.937 [2024-07-23 01:51:30.789894] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.937 [2024-07-23 01:51:30.789908] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.937 [2024-07-23 01:51:30.789922] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:17.937 [2024-07-23 01:51:30.789951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:17.937 qpair failed and we were unable to recover it. 
00:30:17.937 [2024-07-23 01:51:30.799789] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.937 [2024-07-23 01:51:30.799933] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.937 [2024-07-23 01:51:30.799964] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.937 [2024-07-23 01:51:30.799981] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.937 [2024-07-23 01:51:30.799995] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:17.937 [2024-07-23 01:51:30.800032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:17.937 qpair failed and we were unable to recover it. 
00:30:17.937 [2024-07-23 01:51:30.809808] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.937 [2024-07-23 01:51:30.809952] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.937 [2024-07-23 01:51:30.809979] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.937 [2024-07-23 01:51:30.809994] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.937 [2024-07-23 01:51:30.810007] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:17.937 [2024-07-23 01:51:30.810037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:17.937 qpair failed and we were unable to recover it. 
00:30:17.937 [2024-07-23 01:51:30.819876] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.937 [2024-07-23 01:51:30.820018] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.937 [2024-07-23 01:51:30.820045] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.937 [2024-07-23 01:51:30.820059] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.937 [2024-07-23 01:51:30.820072] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:17.937 [2024-07-23 01:51:30.820101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:17.937 qpair failed and we were unable to recover it. 
00:30:17.937 [2024-07-23 01:51:30.829867] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.937 [2024-07-23 01:51:30.830013] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.937 [2024-07-23 01:51:30.830039] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.938 [2024-07-23 01:51:30.830054] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.938 [2024-07-23 01:51:30.830068] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:17.938 [2024-07-23 01:51:30.830097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:17.938 qpair failed and we were unable to recover it. 
00:30:17.938 [2024-07-23 01:51:30.839909] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.938 [2024-07-23 01:51:30.840056] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.938 [2024-07-23 01:51:30.840085] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.938 [2024-07-23 01:51:30.840101] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.938 [2024-07-23 01:51:30.840114] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:17.938 [2024-07-23 01:51:30.840146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:17.938 qpair failed and we were unable to recover it. 
00:30:17.938 [2024-07-23 01:51:30.849912] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.938 [2024-07-23 01:51:30.850055] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.938 [2024-07-23 01:51:30.850086] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.938 [2024-07-23 01:51:30.850101] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.938 [2024-07-23 01:51:30.850114] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:17.938 [2024-07-23 01:51:30.850147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:17.938 qpair failed and we were unable to recover it. 
00:30:17.938 [2024-07-23 01:51:30.859945] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.938 [2024-07-23 01:51:30.860093] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.938 [2024-07-23 01:51:30.860118] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.938 [2024-07-23 01:51:30.860133] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.938 [2024-07-23 01:51:30.860146] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:17.938 [2024-07-23 01:51:30.860176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:17.938 qpair failed and we were unable to recover it. 
00:30:17.938 [2024-07-23 01:51:30.869950] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.938 [2024-07-23 01:51:30.870088] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.938 [2024-07-23 01:51:30.870114] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.938 [2024-07-23 01:51:30.870131] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.938 [2024-07-23 01:51:30.870144] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:17.938 [2024-07-23 01:51:30.870174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:17.938 qpair failed and we were unable to recover it. 
00:30:17.938 [2024-07-23 01:51:30.879998] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.938 [2024-07-23 01:51:30.880175] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.938 [2024-07-23 01:51:30.880200] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.938 [2024-07-23 01:51:30.880214] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.938 [2024-07-23 01:51:30.880227] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:17.938 [2024-07-23 01:51:30.880257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:17.938 qpair failed and we were unable to recover it. 
00:30:17.938 [2024-07-23 01:51:30.890066] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.938 [2024-07-23 01:51:30.890227] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.938 [2024-07-23 01:51:30.890252] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.938 [2024-07-23 01:51:30.890266] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.938 [2024-07-23 01:51:30.890287] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:17.938 [2024-07-23 01:51:30.890318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:17.938 qpair failed and we were unable to recover it. 
00:30:17.938 [2024-07-23 01:51:30.900117] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.938 [2024-07-23 01:51:30.900301] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.938 [2024-07-23 01:51:30.900326] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.938 [2024-07-23 01:51:30.900340] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.938 [2024-07-23 01:51:30.900353] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:17.938 [2024-07-23 01:51:30.900382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:17.938 qpair failed and we were unable to recover it. 
00:30:17.938 [2024-07-23 01:51:30.910078] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.938 [2024-07-23 01:51:30.910214] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.938 [2024-07-23 01:51:30.910240] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.938 [2024-07-23 01:51:30.910254] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.938 [2024-07-23 01:51:30.910267] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:17.938 [2024-07-23 01:51:30.910307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:17.938 qpair failed and we were unable to recover it. 
00:30:17.938 [2024-07-23 01:51:30.920149] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.938 [2024-07-23 01:51:30.920294] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.938 [2024-07-23 01:51:30.920320] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.938 [2024-07-23 01:51:30.920333] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.938 [2024-07-23 01:51:30.920346] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:17.938 [2024-07-23 01:51:30.920375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:17.938 qpair failed and we were unable to recover it. 
00:30:17.938 [2024-07-23 01:51:30.930179] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.938 [2024-07-23 01:51:30.930320] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.938 [2024-07-23 01:51:30.930345] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.938 [2024-07-23 01:51:30.930358] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.938 [2024-07-23 01:51:30.930372] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:17.938 [2024-07-23 01:51:30.930402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:17.938 qpair failed and we were unable to recover it. 
00:30:17.938 [2024-07-23 01:51:30.940167] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.938 [2024-07-23 01:51:30.940311] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.938 [2024-07-23 01:51:30.940336] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.938 [2024-07-23 01:51:30.940350] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.938 [2024-07-23 01:51:30.940363] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:17.938 [2024-07-23 01:51:30.940392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:17.938 qpair failed and we were unable to recover it. 
00:30:17.938 [2024-07-23 01:51:30.950183] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.938 [2024-07-23 01:51:30.950323] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.938 [2024-07-23 01:51:30.950349] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.938 [2024-07-23 01:51:30.950363] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.938 [2024-07-23 01:51:30.950375] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:17.938 [2024-07-23 01:51:30.950405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:17.938 qpair failed and we were unable to recover it. 
00:30:17.938 [2024-07-23 01:51:30.960251] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.939 [2024-07-23 01:51:30.960399] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.939 [2024-07-23 01:51:30.960427] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.939 [2024-07-23 01:51:30.960441] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.939 [2024-07-23 01:51:30.960454] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:17.939 [2024-07-23 01:51:30.960483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:17.939 qpair failed and we were unable to recover it. 
00:30:17.939 [2024-07-23 01:51:30.970272] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.939 [2024-07-23 01:51:30.970411] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.939 [2024-07-23 01:51:30.970437] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.939 [2024-07-23 01:51:30.970451] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.939 [2024-07-23 01:51:30.970464] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:17.939 [2024-07-23 01:51:30.970494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:17.939 qpair failed and we were unable to recover it. 
00:30:17.939 [2024-07-23 01:51:30.980293] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.939 [2024-07-23 01:51:30.980440] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.939 [2024-07-23 01:51:30.980466] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.939 [2024-07-23 01:51:30.980481] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.939 [2024-07-23 01:51:30.980499] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:17.939 [2024-07-23 01:51:30.980528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:17.939 qpair failed and we were unable to recover it. 
00:30:17.939 [2024-07-23 01:51:30.990327] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.939 [2024-07-23 01:51:30.990487] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.939 [2024-07-23 01:51:30.990512] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.939 [2024-07-23 01:51:30.990526] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.939 [2024-07-23 01:51:30.990539] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:17.939 [2024-07-23 01:51:30.990567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:17.939 qpair failed and we were unable to recover it. 
00:30:17.939 [2024-07-23 01:51:31.000363] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.939 [2024-07-23 01:51:31.000525] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.939 [2024-07-23 01:51:31.000551] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.939 [2024-07-23 01:51:31.000565] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.939 [2024-07-23 01:51:31.000578] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:17.939 [2024-07-23 01:51:31.000606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:17.939 qpair failed and we were unable to recover it. 
00:30:17.939 [2024-07-23 01:51:31.010384] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.939 [2024-07-23 01:51:31.010538] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.939 [2024-07-23 01:51:31.010564] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.939 [2024-07-23 01:51:31.010578] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.939 [2024-07-23 01:51:31.010590] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:17.939 [2024-07-23 01:51:31.010626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:17.939 qpair failed and we were unable to recover it. 
00:30:17.939 [2024-07-23 01:51:31.020441] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.939 [2024-07-23 01:51:31.020634] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.939 [2024-07-23 01:51:31.020660] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.939 [2024-07-23 01:51:31.020674] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.939 [2024-07-23 01:51:31.020687] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:17.939 [2024-07-23 01:51:31.020715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:17.939 qpair failed and we were unable to recover it. 
00:30:17.939 [2024-07-23 01:51:31.030471] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:17.939 [2024-07-23 01:51:31.030655] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:17.939 [2024-07-23 01:51:31.030681] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:17.939 [2024-07-23 01:51:31.030695] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:17.939 [2024-07-23 01:51:31.030708] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:17.939 [2024-07-23 01:51:31.030737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:17.939 qpair failed and we were unable to recover it. 
00:30:18.197 [2024-07-23 01:51:31.040512] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.197 [2024-07-23 01:51:31.040667] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.197 [2024-07-23 01:51:31.040693] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.197 [2024-07-23 01:51:31.040712] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.197 [2024-07-23 01:51:31.040726] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:18.197 [2024-07-23 01:51:31.040757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:18.197 qpair failed and we were unable to recover it. 
00:30:18.197 [2024-07-23 01:51:31.050513] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.198 [2024-07-23 01:51:31.050660] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.198 [2024-07-23 01:51:31.050687] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.198 [2024-07-23 01:51:31.050701] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.198 [2024-07-23 01:51:31.050714] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:18.198 [2024-07-23 01:51:31.050755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:18.198 qpair failed and we were unable to recover it. 
00:30:18.198 [2024-07-23 01:51:31.060533] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.198 [2024-07-23 01:51:31.060682] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.198 [2024-07-23 01:51:31.060708] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.198 [2024-07-23 01:51:31.060722] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.198 [2024-07-23 01:51:31.060735] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:18.198 [2024-07-23 01:51:31.060764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:18.198 qpair failed and we were unable to recover it. 
00:30:18.198 [2024-07-23 01:51:31.070579] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.198 [2024-07-23 01:51:31.070724] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.198 [2024-07-23 01:51:31.070750] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.198 [2024-07-23 01:51:31.070770] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.198 [2024-07-23 01:51:31.070784] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:18.198 [2024-07-23 01:51:31.070813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:18.198 qpair failed and we were unable to recover it. 
00:30:18.198 [2024-07-23 01:51:31.080646] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.198 [2024-07-23 01:51:31.080785] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.198 [2024-07-23 01:51:31.080811] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.198 [2024-07-23 01:51:31.080825] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.198 [2024-07-23 01:51:31.080838] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:18.198 [2024-07-23 01:51:31.080879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:18.198 qpair failed and we were unable to recover it. 
00:30:18.198 [2024-07-23 01:51:31.090635] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.198 [2024-07-23 01:51:31.090815] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.198 [2024-07-23 01:51:31.090840] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.198 [2024-07-23 01:51:31.090855] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.198 [2024-07-23 01:51:31.090867] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:18.198 [2024-07-23 01:51:31.090897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:18.198 qpair failed and we were unable to recover it. 
00:30:18.198 [2024-07-23 01:51:31.100648] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.198 [2024-07-23 01:51:31.100786] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.198 [2024-07-23 01:51:31.100811] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.198 [2024-07-23 01:51:31.100825] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.198 [2024-07-23 01:51:31.100838] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:18.198 [2024-07-23 01:51:31.100868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:18.198 qpair failed and we were unable to recover it. 
00:30:18.198 [2024-07-23 01:51:31.110705] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.198 [2024-07-23 01:51:31.110848] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.198 [2024-07-23 01:51:31.110874] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.198 [2024-07-23 01:51:31.110888] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.198 [2024-07-23 01:51:31.110901] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:18.198 [2024-07-23 01:51:31.110941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:18.198 qpair failed and we were unable to recover it. 
00:30:18.198 [2024-07-23 01:51:31.120705] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.198 [2024-07-23 01:51:31.120844] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.198 [2024-07-23 01:51:31.120869] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.198 [2024-07-23 01:51:31.120883] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.198 [2024-07-23 01:51:31.120896] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:18.198 [2024-07-23 01:51:31.120924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:18.198 qpair failed and we were unable to recover it. 
00:30:18.198 [2024-07-23 01:51:31.130776] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.198 [2024-07-23 01:51:31.130916] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.198 [2024-07-23 01:51:31.130941] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.198 [2024-07-23 01:51:31.130954] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.198 [2024-07-23 01:51:31.130967] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:18.198 [2024-07-23 01:51:31.130997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:18.198 qpair failed and we were unable to recover it. 
00:30:18.198 [2024-07-23 01:51:31.140770] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.198 [2024-07-23 01:51:31.140910] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.198 [2024-07-23 01:51:31.140935] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.198 [2024-07-23 01:51:31.140950] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.198 [2024-07-23 01:51:31.140962] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:18.198 [2024-07-23 01:51:31.140993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:18.198 qpair failed and we were unable to recover it. 
00:30:18.198 [2024-07-23 01:51:31.150805] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.198 [2024-07-23 01:51:31.150945] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.198 [2024-07-23 01:51:31.150970] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.198 [2024-07-23 01:51:31.150984] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.198 [2024-07-23 01:51:31.150997] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:18.198 [2024-07-23 01:51:31.151038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:18.198 qpair failed and we were unable to recover it. 
00:30:18.198 [2024-07-23 01:51:31.160826] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.198 [2024-07-23 01:51:31.161012] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.198 [2024-07-23 01:51:31.161037] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.198 [2024-07-23 01:51:31.161057] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.198 [2024-07-23 01:51:31.161071] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:18.198 [2024-07-23 01:51:31.161100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:18.198 qpair failed and we were unable to recover it. 
00:30:18.198 [2024-07-23 01:51:31.170877] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.198 [2024-07-23 01:51:31.171020] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.198 [2024-07-23 01:51:31.171045] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.198 [2024-07-23 01:51:31.171060] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.198 [2024-07-23 01:51:31.171072] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:18.199 [2024-07-23 01:51:31.171101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:18.199 qpair failed and we were unable to recover it. 
00:30:18.199 [2024-07-23 01:51:31.180904] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.199 [2024-07-23 01:51:31.181045] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.199 [2024-07-23 01:51:31.181070] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.199 [2024-07-23 01:51:31.181084] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.199 [2024-07-23 01:51:31.181098] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:18.199 [2024-07-23 01:51:31.181127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:18.199 qpair failed and we were unable to recover it. 
00:30:18.199 [2024-07-23 01:51:31.190909] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.199 [2024-07-23 01:51:31.191045] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.199 [2024-07-23 01:51:31.191070] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.199 [2024-07-23 01:51:31.191084] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.199 [2024-07-23 01:51:31.191097] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:18.199 [2024-07-23 01:51:31.191125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:18.199 qpair failed and we were unable to recover it. 
00:30:18.199 [2024-07-23 01:51:31.200947] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.199 [2024-07-23 01:51:31.201085] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.199 [2024-07-23 01:51:31.201110] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.199 [2024-07-23 01:51:31.201124] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.199 [2024-07-23 01:51:31.201137] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:18.199 [2024-07-23 01:51:31.201167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:18.199 qpair failed and we were unable to recover it. 
00:30:18.199 [2024-07-23 01:51:31.211078] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.199 [2024-07-23 01:51:31.211221] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.199 [2024-07-23 01:51:31.211246] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.199 [2024-07-23 01:51:31.211260] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.199 [2024-07-23 01:51:31.211273] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:18.199 [2024-07-23 01:51:31.211302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:18.199 qpair failed and we were unable to recover it. 
00:30:18.199 [2024-07-23 01:51:31.221049] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.199 [2024-07-23 01:51:31.221191] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.199 [2024-07-23 01:51:31.221216] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.199 [2024-07-23 01:51:31.221230] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.199 [2024-07-23 01:51:31.221243] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:18.199 [2024-07-23 01:51:31.221272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:18.199 qpair failed and we were unable to recover it. 
00:30:18.199 [2024-07-23 01:51:31.231071] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.199 [2024-07-23 01:51:31.231234] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.199 [2024-07-23 01:51:31.231259] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.199 [2024-07-23 01:51:31.231273] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.199 [2024-07-23 01:51:31.231286] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:18.199 [2024-07-23 01:51:31.231313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:18.199 qpair failed and we were unable to recover it. 
00:30:18.199 [2024-07-23 01:51:31.241061] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.199 [2024-07-23 01:51:31.241211] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.199 [2024-07-23 01:51:31.241236] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.199 [2024-07-23 01:51:31.241250] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.199 [2024-07-23 01:51:31.241263] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:18.199 [2024-07-23 01:51:31.241292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:18.199 qpair failed and we were unable to recover it. 
00:30:18.199 [2024-07-23 01:51:31.251149] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.199 [2024-07-23 01:51:31.251295] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.199 [2024-07-23 01:51:31.251324] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.199 [2024-07-23 01:51:31.251339] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.199 [2024-07-23 01:51:31.251352] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:18.199 [2024-07-23 01:51:31.251380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:18.199 qpair failed and we were unable to recover it. 
00:30:18.199 [2024-07-23 01:51:31.261133] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.199 [2024-07-23 01:51:31.261273] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.199 [2024-07-23 01:51:31.261298] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.199 [2024-07-23 01:51:31.261312] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.199 [2024-07-23 01:51:31.261324] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:18.199 [2024-07-23 01:51:31.261354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:18.199 qpair failed and we were unable to recover it. 
00:30:18.199 [2024-07-23 01:51:31.271190] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.199 [2024-07-23 01:51:31.271335] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.199 [2024-07-23 01:51:31.271361] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.199 [2024-07-23 01:51:31.271375] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.199 [2024-07-23 01:51:31.271388] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:18.199 [2024-07-23 01:51:31.271417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:18.199 qpair failed and we were unable to recover it. 
00:30:18.199 [2024-07-23 01:51:31.281181] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.199 [2024-07-23 01:51:31.281322] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.199 [2024-07-23 01:51:31.281347] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.199 [2024-07-23 01:51:31.281361] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.199 [2024-07-23 01:51:31.281373] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:18.199 [2024-07-23 01:51:31.281402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:18.199 qpair failed and we were unable to recover it. 
00:30:18.199 [2024-07-23 01:51:31.291215] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.199 [2024-07-23 01:51:31.291372] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.199 [2024-07-23 01:51:31.291398] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.199 [2024-07-23 01:51:31.291411] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.199 [2024-07-23 01:51:31.291424] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:18.199 [2024-07-23 01:51:31.291459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:18.199 qpair failed and we were unable to recover it. 
00:30:18.458 [2024-07-23 01:51:31.301229] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.458 [2024-07-23 01:51:31.301400] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.458 [2024-07-23 01:51:31.301425] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.458 [2024-07-23 01:51:31.301439] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.458 [2024-07-23 01:51:31.301452] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:18.458 [2024-07-23 01:51:31.301480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:18.458 qpair failed and we were unable to recover it. 
00:30:18.458 [2024-07-23 01:51:31.311275] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.458 [2024-07-23 01:51:31.311417] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.458 [2024-07-23 01:51:31.311442] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.458 [2024-07-23 01:51:31.311456] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.458 [2024-07-23 01:51:31.311472] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:18.458 [2024-07-23 01:51:31.311502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:18.458 qpair failed and we were unable to recover it. 
00:30:18.458 [2024-07-23 01:51:31.321295] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.458 [2024-07-23 01:51:31.321431] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.458 [2024-07-23 01:51:31.321457] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.458 [2024-07-23 01:51:31.321471] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.458 [2024-07-23 01:51:31.321483] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:18.458 [2024-07-23 01:51:31.321512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:18.458 qpair failed and we were unable to recover it. 
00:30:18.458 [2024-07-23 01:51:31.331338] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.458 [2024-07-23 01:51:31.331506] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.458 [2024-07-23 01:51:31.331533] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.458 [2024-07-23 01:51:31.331547] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.458 [2024-07-23 01:51:31.331563] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:18.458 [2024-07-23 01:51:31.331595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:18.458 qpair failed and we were unable to recover it. 
00:30:18.458 [2024-07-23 01:51:31.341351] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.458 [2024-07-23 01:51:31.341490] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.458 [2024-07-23 01:51:31.341521] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.458 [2024-07-23 01:51:31.341536] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.458 [2024-07-23 01:51:31.341549] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:18.458 [2024-07-23 01:51:31.341577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:18.458 qpair failed and we were unable to recover it. 
00:30:18.458 [2024-07-23 01:51:31.351397] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.458 [2024-07-23 01:51:31.351563] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.458 [2024-07-23 01:51:31.351589] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.458 [2024-07-23 01:51:31.351603] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.458 [2024-07-23 01:51:31.351625] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:18.458 [2024-07-23 01:51:31.351660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:18.458 qpair failed and we were unable to recover it. 
00:30:18.458 [2024-07-23 01:51:31.361436] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.458 [2024-07-23 01:51:31.361578] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.458 [2024-07-23 01:51:31.361603] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.458 [2024-07-23 01:51:31.361627] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.458 [2024-07-23 01:51:31.361642] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:18.458 [2024-07-23 01:51:31.361670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:18.458 qpair failed and we were unable to recover it. 
00:30:18.458 [2024-07-23 01:51:31.371458] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.458 [2024-07-23 01:51:31.371602] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.458 [2024-07-23 01:51:31.371634] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.458 [2024-07-23 01:51:31.371649] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.458 [2024-07-23 01:51:31.371662] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:18.458 [2024-07-23 01:51:31.371702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:18.458 qpair failed and we were unable to recover it. 
00:30:18.458 [2024-07-23 01:51:31.381464] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.458 [2024-07-23 01:51:31.381606] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.458 [2024-07-23 01:51:31.381637] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.458 [2024-07-23 01:51:31.381651] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.458 [2024-07-23 01:51:31.381670] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:18.458 [2024-07-23 01:51:31.381702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:18.458 qpair failed and we were unable to recover it. 
00:30:18.459 [2024-07-23 01:51:31.391525] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.459 [2024-07-23 01:51:31.391676] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.459 [2024-07-23 01:51:31.391702] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.459 [2024-07-23 01:51:31.391715] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.459 [2024-07-23 01:51:31.391728] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:18.459 [2024-07-23 01:51:31.391759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:18.459 qpair failed and we were unable to recover it. 
00:30:18.459 [2024-07-23 01:51:31.401512] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.459 [2024-07-23 01:51:31.401651] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.459 [2024-07-23 01:51:31.401676] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.459 [2024-07-23 01:51:31.401690] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.459 [2024-07-23 01:51:31.401703] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:18.459 [2024-07-23 01:51:31.401732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:18.459 qpair failed and we were unable to recover it. 
00:30:18.459 [2024-07-23 01:51:31.411570] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.459 [2024-07-23 01:51:31.411721] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.459 [2024-07-23 01:51:31.411747] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.459 [2024-07-23 01:51:31.411762] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.459 [2024-07-23 01:51:31.411774] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:18.459 [2024-07-23 01:51:31.411804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:18.459 qpair failed and we were unable to recover it. 
00:30:18.459 [2024-07-23 01:51:31.421594] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.459 [2024-07-23 01:51:31.421744] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.459 [2024-07-23 01:51:31.421771] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.459 [2024-07-23 01:51:31.421786] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.459 [2024-07-23 01:51:31.421798] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:18.459 [2024-07-23 01:51:31.421840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:18.459 qpair failed and we were unable to recover it. 
00:30:18.459 [2024-07-23 01:51:31.431660] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.459 [2024-07-23 01:51:31.431860] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.459 [2024-07-23 01:51:31.431889] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.459 [2024-07-23 01:51:31.431904] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.459 [2024-07-23 01:51:31.431917] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:18.459 [2024-07-23 01:51:31.431947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:18.459 qpair failed and we were unable to recover it. 
00:30:18.459 [2024-07-23 01:51:31.441655] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.459 [2024-07-23 01:51:31.441841] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.459 [2024-07-23 01:51:31.441868] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.459 [2024-07-23 01:51:31.441886] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.459 [2024-07-23 01:51:31.441899] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:18.459 [2024-07-23 01:51:31.441928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:18.459 qpair failed and we were unable to recover it. 
00:30:18.459 [2024-07-23 01:51:31.451696] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.459 [2024-07-23 01:51:31.451849] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.459 [2024-07-23 01:51:31.451876] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.459 [2024-07-23 01:51:31.451891] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.459 [2024-07-23 01:51:31.451903] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:18.459 [2024-07-23 01:51:31.451932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:18.459 qpair failed and we were unable to recover it. 
00:30:18.459 [2024-07-23 01:51:31.461722] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.459 [2024-07-23 01:51:31.461866] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.459 [2024-07-23 01:51:31.461893] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.459 [2024-07-23 01:51:31.461907] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.459 [2024-07-23 01:51:31.461935] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:18.459 [2024-07-23 01:51:31.461964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:18.459 qpair failed and we were unable to recover it. 
00:30:18.459 [2024-07-23 01:51:31.471718] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.459 [2024-07-23 01:51:31.471855] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.459 [2024-07-23 01:51:31.471882] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.459 [2024-07-23 01:51:31.471897] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.459 [2024-07-23 01:51:31.471916] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:18.459 [2024-07-23 01:51:31.471948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:18.459 qpair failed and we were unable to recover it. 
00:30:18.459 [2024-07-23 01:51:31.481892] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.459 [2024-07-23 01:51:31.482075] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.459 [2024-07-23 01:51:31.482104] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.459 [2024-07-23 01:51:31.482122] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.459 [2024-07-23 01:51:31.482136] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:18.459 [2024-07-23 01:51:31.482166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:18.459 qpair failed and we were unable to recover it. 
00:30:18.459 [2024-07-23 01:51:31.491830] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.459 [2024-07-23 01:51:31.491975] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.459 [2024-07-23 01:51:31.492001] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.459 [2024-07-23 01:51:31.492016] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.459 [2024-07-23 01:51:31.492046] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:18.459 [2024-07-23 01:51:31.492076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:18.459 qpair failed and we were unable to recover it. 
00:30:18.459 [2024-07-23 01:51:31.501808] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.459 [2024-07-23 01:51:31.501955] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.459 [2024-07-23 01:51:31.501981] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.459 [2024-07-23 01:51:31.501995] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.459 [2024-07-23 01:51:31.502009] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:18.459 [2024-07-23 01:51:31.502038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:18.459 qpair failed and we were unable to recover it. 
00:30:18.459 [2024-07-23 01:51:31.511844] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.459 [2024-07-23 01:51:31.511987] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.459 [2024-07-23 01:51:31.512013] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.459 [2024-07-23 01:51:31.512031] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.460 [2024-07-23 01:51:31.512045] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:18.460 [2024-07-23 01:51:31.512089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:18.460 qpair failed and we were unable to recover it. 
00:30:18.460 [2024-07-23 01:51:31.521872] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.460 [2024-07-23 01:51:31.522012] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.460 [2024-07-23 01:51:31.522039] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.460 [2024-07-23 01:51:31.522055] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.460 [2024-07-23 01:51:31.522068] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:18.460 [2024-07-23 01:51:31.522097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:18.460 qpair failed and we were unable to recover it. 
00:30:18.460 [2024-07-23 01:51:31.531909] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.460 [2024-07-23 01:51:31.532053] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.460 [2024-07-23 01:51:31.532079] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.460 [2024-07-23 01:51:31.532094] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.460 [2024-07-23 01:51:31.532107] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:18.460 [2024-07-23 01:51:31.532152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:18.460 qpair failed and we were unable to recover it. 
00:30:18.460 [2024-07-23 01:51:31.541911] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.460 [2024-07-23 01:51:31.542054] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.460 [2024-07-23 01:51:31.542082] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.460 [2024-07-23 01:51:31.542096] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.460 [2024-07-23 01:51:31.542108] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:18.460 [2024-07-23 01:51:31.542137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:18.460 qpair failed and we were unable to recover it. 
00:30:18.460 [2024-07-23 01:51:31.551964] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.460 [2024-07-23 01:51:31.552116] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.460 [2024-07-23 01:51:31.552142] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.460 [2024-07-23 01:51:31.552157] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.460 [2024-07-23 01:51:31.552170] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:18.460 [2024-07-23 01:51:31.552199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:18.460 qpair failed and we were unable to recover it. 
00:30:18.718 [2024-07-23 01:51:31.561977] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.718 [2024-07-23 01:51:31.562117] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.718 [2024-07-23 01:51:31.562144] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.718 [2024-07-23 01:51:31.562165] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.718 [2024-07-23 01:51:31.562179] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:18.718 [2024-07-23 01:51:31.562209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:18.718 qpair failed and we were unable to recover it. 
00:30:18.718 [2024-07-23 01:51:31.572008] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.718 [2024-07-23 01:51:31.572153] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.718 [2024-07-23 01:51:31.572178] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.718 [2024-07-23 01:51:31.572193] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.718 [2024-07-23 01:51:31.572207] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:18.718 [2024-07-23 01:51:31.572253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:18.718 qpair failed and we were unable to recover it. 
00:30:18.718 [2024-07-23 01:51:31.582033] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.718 [2024-07-23 01:51:31.582172] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.718 [2024-07-23 01:51:31.582198] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.718 [2024-07-23 01:51:31.582212] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.718 [2024-07-23 01:51:31.582225] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:18.718 [2024-07-23 01:51:31.582256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:18.718 qpair failed and we were unable to recover it. 
00:30:18.718 [2024-07-23 01:51:31.592063] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.718 [2024-07-23 01:51:31.592202] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.718 [2024-07-23 01:51:31.592229] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.718 [2024-07-23 01:51:31.592244] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.718 [2024-07-23 01:51:31.592257] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:18.718 [2024-07-23 01:51:31.592286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:18.718 qpair failed and we were unable to recover it. 
00:30:18.718 [2024-07-23 01:51:31.602073] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.718 [2024-07-23 01:51:31.602209] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.718 [2024-07-23 01:51:31.602236] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.718 [2024-07-23 01:51:31.602252] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.718 [2024-07-23 01:51:31.602265] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:18.718 [2024-07-23 01:51:31.602294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:18.718 qpair failed and we were unable to recover it. 
00:30:18.718 [2024-07-23 01:51:31.612205] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.718 [2024-07-23 01:51:31.612348] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.718 [2024-07-23 01:51:31.612375] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.718 [2024-07-23 01:51:31.612390] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.718 [2024-07-23 01:51:31.612403] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:18.718 [2024-07-23 01:51:31.612432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:18.718 qpair failed and we were unable to recover it. 
00:30:18.718 [2024-07-23 01:51:31.622163] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.719 [2024-07-23 01:51:31.622337] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.719 [2024-07-23 01:51:31.622362] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.719 [2024-07-23 01:51:31.622376] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.719 [2024-07-23 01:51:31.622388] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:18.719 [2024-07-23 01:51:31.622417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:18.719 qpair failed and we were unable to recover it. 
00:30:18.719 [2024-07-23 01:51:31.632203] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.719 [2024-07-23 01:51:31.632344] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.719 [2024-07-23 01:51:31.632370] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.719 [2024-07-23 01:51:31.632385] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.719 [2024-07-23 01:51:31.632399] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:18.719 [2024-07-23 01:51:31.632429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:18.719 qpair failed and we were unable to recover it. 
00:30:18.719 [2024-07-23 01:51:31.642247] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.719 [2024-07-23 01:51:31.642393] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.719 [2024-07-23 01:51:31.642422] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.719 [2024-07-23 01:51:31.642441] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.719 [2024-07-23 01:51:31.642455] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:18.719 [2024-07-23 01:51:31.642486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:18.719 qpair failed and we were unable to recover it. 
00:30:18.719 [2024-07-23 01:51:31.652271] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.719 [2024-07-23 01:51:31.652420] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.719 [2024-07-23 01:51:31.652446] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.719 [2024-07-23 01:51:31.652471] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.719 [2024-07-23 01:51:31.652502] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:18.719 [2024-07-23 01:51:31.652531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:18.719 qpair failed and we were unable to recover it. 
00:30:18.719 [2024-07-23 01:51:31.662257] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.719 [2024-07-23 01:51:31.662403] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.719 [2024-07-23 01:51:31.662430] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.719 [2024-07-23 01:51:31.662445] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.719 [2024-07-23 01:51:31.662458] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:18.719 [2024-07-23 01:51:31.662488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:18.719 qpair failed and we were unable to recover it. 
00:30:18.719 [2024-07-23 01:51:31.672391] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.719 [2024-07-23 01:51:31.672523] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.719 [2024-07-23 01:51:31.672550] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.719 [2024-07-23 01:51:31.672565] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.719 [2024-07-23 01:51:31.672579] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:18.719 [2024-07-23 01:51:31.672608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:18.719 qpair failed and we were unable to recover it. 
00:30:18.719 [2024-07-23 01:51:31.682336] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.719 [2024-07-23 01:51:31.682474] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.719 [2024-07-23 01:51:31.682499] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.719 [2024-07-23 01:51:31.682514] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.719 [2024-07-23 01:51:31.682528] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:18.719 [2024-07-23 01:51:31.682557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:18.719 qpair failed and we were unable to recover it. 
00:30:18.719 [2024-07-23 01:51:31.692372] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.719 [2024-07-23 01:51:31.692513] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.719 [2024-07-23 01:51:31.692539] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.719 [2024-07-23 01:51:31.692555] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.719 [2024-07-23 01:51:31.692568] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:18.719 [2024-07-23 01:51:31.692610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:18.719 qpair failed and we were unable to recover it. 
00:30:18.719 [2024-07-23 01:51:31.702393] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.719 [2024-07-23 01:51:31.702531] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.719 [2024-07-23 01:51:31.702557] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.719 [2024-07-23 01:51:31.702572] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.719 [2024-07-23 01:51:31.702586] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:18.719 [2024-07-23 01:51:31.702623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:18.719 qpair failed and we were unable to recover it. 
00:30:18.719 [2024-07-23 01:51:31.712440] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.719 [2024-07-23 01:51:31.712580] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.719 [2024-07-23 01:51:31.712606] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.719 [2024-07-23 01:51:31.712629] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.719 [2024-07-23 01:51:31.712643] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:18.719 [2024-07-23 01:51:31.712680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:18.719 qpair failed and we were unable to recover it. 
00:30:18.719 [2024-07-23 01:51:31.722436] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.719 [2024-07-23 01:51:31.722575] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.719 [2024-07-23 01:51:31.722602] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.719 [2024-07-23 01:51:31.722623] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.719 [2024-07-23 01:51:31.722638] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:18.719 [2024-07-23 01:51:31.722680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:18.719 qpair failed and we were unable to recover it. 
00:30:18.719 [2024-07-23 01:51:31.732483] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.719 [2024-07-23 01:51:31.732629] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.719 [2024-07-23 01:51:31.732656] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.719 [2024-07-23 01:51:31.732670] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.719 [2024-07-23 01:51:31.732684] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:18.719 [2024-07-23 01:51:31.732713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:18.719 qpair failed and we were unable to recover it. 
00:30:18.719 [2024-07-23 01:51:31.742509] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.719 [2024-07-23 01:51:31.742659] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.719 [2024-07-23 01:51:31.742690] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.719 [2024-07-23 01:51:31.742706] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.719 [2024-07-23 01:51:31.742720] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:18.720 [2024-07-23 01:51:31.742749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:18.720 qpair failed and we were unable to recover it. 
00:30:18.720 [2024-07-23 01:51:31.752540] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.720 [2024-07-23 01:51:31.752690] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.720 [2024-07-23 01:51:31.752716] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.720 [2024-07-23 01:51:31.752732] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.720 [2024-07-23 01:51:31.752746] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:18.720 [2024-07-23 01:51:31.752776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:18.720 qpair failed and we were unable to recover it. 
00:30:18.720 [2024-07-23 01:51:31.762577] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.720 [2024-07-23 01:51:31.762729] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.720 [2024-07-23 01:51:31.762755] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.720 [2024-07-23 01:51:31.762769] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.720 [2024-07-23 01:51:31.762783] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:18.720 [2024-07-23 01:51:31.762814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:18.720 qpair failed and we were unable to recover it. 
00:30:18.720 [2024-07-23 01:51:31.772603] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.720 [2024-07-23 01:51:31.772756] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.720 [2024-07-23 01:51:31.772782] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.720 [2024-07-23 01:51:31.772797] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.720 [2024-07-23 01:51:31.772810] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:18.720 [2024-07-23 01:51:31.772840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:18.720 qpair failed and we were unable to recover it. 
00:30:18.720 [2024-07-23 01:51:31.782608] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.720 [2024-07-23 01:51:31.782773] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.720 [2024-07-23 01:51:31.782798] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.720 [2024-07-23 01:51:31.782813] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.720 [2024-07-23 01:51:31.782826] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:18.720 [2024-07-23 01:51:31.782861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:18.720 qpair failed and we were unable to recover it. 
00:30:18.720 [2024-07-23 01:51:31.792642] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.720 [2024-07-23 01:51:31.792813] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.720 [2024-07-23 01:51:31.792839] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.720 [2024-07-23 01:51:31.792854] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.720 [2024-07-23 01:51:31.792868] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:18.720 [2024-07-23 01:51:31.792898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:18.720 qpair failed and we were unable to recover it. 
00:30:18.720 [2024-07-23 01:51:31.802703] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.720 [2024-07-23 01:51:31.802861] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.720 [2024-07-23 01:51:31.802886] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.720 [2024-07-23 01:51:31.802901] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.720 [2024-07-23 01:51:31.802914] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:18.720 [2024-07-23 01:51:31.802944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:18.720 qpair failed and we were unable to recover it. 
00:30:18.720 [2024-07-23 01:51:31.812737] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.720 [2024-07-23 01:51:31.812888] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.720 [2024-07-23 01:51:31.812913] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.720 [2024-07-23 01:51:31.812928] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.720 [2024-07-23 01:51:31.812943] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:18.720 [2024-07-23 01:51:31.812972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:18.720 qpair failed and we were unable to recover it. 
00:30:18.978 [2024-07-23 01:51:31.822729] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.978 [2024-07-23 01:51:31.822889] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.978 [2024-07-23 01:51:31.822916] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.978 [2024-07-23 01:51:31.822930] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.978 [2024-07-23 01:51:31.822943] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:18.979 [2024-07-23 01:51:31.822973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:18.979 qpair failed and we were unable to recover it. 
00:30:18.979 [2024-07-23 01:51:31.832753] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.979 [2024-07-23 01:51:31.832890] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.979 [2024-07-23 01:51:31.832921] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.979 [2024-07-23 01:51:31.832936] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.979 [2024-07-23 01:51:31.832949] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:18.979 [2024-07-23 01:51:31.832979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:18.979 qpair failed and we were unable to recover it. 
00:30:18.979 [2024-07-23 01:51:31.842803] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.979 [2024-07-23 01:51:31.842955] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.979 [2024-07-23 01:51:31.842984] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.979 [2024-07-23 01:51:31.842999] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.979 [2024-07-23 01:51:31.843013] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:18.979 [2024-07-23 01:51:31.843044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:18.979 qpair failed and we were unable to recover it. 
00:30:18.979 [2024-07-23 01:51:31.852842] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.979 [2024-07-23 01:51:31.852990] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.979 [2024-07-23 01:51:31.853016] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.979 [2024-07-23 01:51:31.853031] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.979 [2024-07-23 01:51:31.853063] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:18.979 [2024-07-23 01:51:31.853093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:18.979 qpair failed and we were unable to recover it. 
00:30:18.979 [2024-07-23 01:51:31.862932] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.979 [2024-07-23 01:51:31.863075] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.979 [2024-07-23 01:51:31.863101] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.979 [2024-07-23 01:51:31.863115] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.979 [2024-07-23 01:51:31.863128] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:18.979 [2024-07-23 01:51:31.863158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:18.979 qpair failed and we were unable to recover it. 
00:30:18.979 [2024-07-23 01:51:31.872870] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.979 [2024-07-23 01:51:31.873012] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.979 [2024-07-23 01:51:31.873039] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.979 [2024-07-23 01:51:31.873053] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.979 [2024-07-23 01:51:31.873067] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:18.979 [2024-07-23 01:51:31.873102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:18.979 qpair failed and we were unable to recover it. 
00:30:18.979 [2024-07-23 01:51:31.882970] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.979 [2024-07-23 01:51:31.883112] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.979 [2024-07-23 01:51:31.883138] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.979 [2024-07-23 01:51:31.883153] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.979 [2024-07-23 01:51:31.883166] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:18.979 [2024-07-23 01:51:31.883197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:18.979 qpair failed and we were unable to recover it. 
00:30:18.979 [2024-07-23 01:51:31.892937] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.979 [2024-07-23 01:51:31.893083] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.979 [2024-07-23 01:51:31.893109] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.979 [2024-07-23 01:51:31.893124] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.979 [2024-07-23 01:51:31.893138] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:18.979 [2024-07-23 01:51:31.893167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:18.979 qpair failed and we were unable to recover it. 
00:30:18.979 [2024-07-23 01:51:31.902982] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.979 [2024-07-23 01:51:31.903130] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.979 [2024-07-23 01:51:31.903156] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.979 [2024-07-23 01:51:31.903170] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.979 [2024-07-23 01:51:31.903183] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:18.979 [2024-07-23 01:51:31.903229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:18.979 qpair failed and we were unable to recover it. 
00:30:18.979 [2024-07-23 01:51:31.912999] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.979 [2024-07-23 01:51:31.913184] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.979 [2024-07-23 01:51:31.913210] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.979 [2024-07-23 01:51:31.913240] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.979 [2024-07-23 01:51:31.913253] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:18.979 [2024-07-23 01:51:31.913283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:18.979 qpair failed and we were unable to recover it. 
00:30:18.979 [2024-07-23 01:51:31.923057] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.979 [2024-07-23 01:51:31.923239] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.979 [2024-07-23 01:51:31.923264] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.979 [2024-07-23 01:51:31.923279] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.979 [2024-07-23 01:51:31.923294] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:18.979 [2024-07-23 01:51:31.923323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:18.979 qpair failed and we were unable to recover it. 
00:30:18.979 [2024-07-23 01:51:31.933057] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.979 [2024-07-23 01:51:31.933205] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.979 [2024-07-23 01:51:31.933232] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.979 [2024-07-23 01:51:31.933247] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.979 [2024-07-23 01:51:31.933262] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:18.979 [2024-07-23 01:51:31.933292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:18.979 qpair failed and we were unable to recover it. 
00:30:18.979 [2024-07-23 01:51:31.943105] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.979 [2024-07-23 01:51:31.943250] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.979 [2024-07-23 01:51:31.943275] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.979 [2024-07-23 01:51:31.943291] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.979 [2024-07-23 01:51:31.943305] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:18.979 [2024-07-23 01:51:31.943335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:18.979 qpair failed and we were unable to recover it. 
00:30:18.979 [2024-07-23 01:51:31.953114] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.980 [2024-07-23 01:51:31.953257] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.980 [2024-07-23 01:51:31.953282] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.980 [2024-07-23 01:51:31.953297] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.980 [2024-07-23 01:51:31.953310] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:18.980 [2024-07-23 01:51:31.953341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:18.980 qpair failed and we were unable to recover it. 
00:30:18.980 [2024-07-23 01:51:31.963175] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.980 [2024-07-23 01:51:31.963324] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.980 [2024-07-23 01:51:31.963350] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.980 [2024-07-23 01:51:31.963365] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.980 [2024-07-23 01:51:31.963385] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:18.980 [2024-07-23 01:51:31.963415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:18.980 qpair failed and we were unable to recover it. 
00:30:18.980 [2024-07-23 01:51:31.973217] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.980 [2024-07-23 01:51:31.973407] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.980 [2024-07-23 01:51:31.973433] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.980 [2024-07-23 01:51:31.973464] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.980 [2024-07-23 01:51:31.973477] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:18.980 [2024-07-23 01:51:31.973508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:18.980 qpair failed and we were unable to recover it. 
00:30:18.980 [2024-07-23 01:51:31.983215] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.980 [2024-07-23 01:51:31.983367] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.980 [2024-07-23 01:51:31.983393] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.980 [2024-07-23 01:51:31.983408] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.980 [2024-07-23 01:51:31.983422] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:18.980 [2024-07-23 01:51:31.983468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:18.980 qpair failed and we were unable to recover it. 
00:30:18.980 [2024-07-23 01:51:31.993229] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.980 [2024-07-23 01:51:31.993377] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.980 [2024-07-23 01:51:31.993403] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.980 [2024-07-23 01:51:31.993418] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.980 [2024-07-23 01:51:31.993432] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:18.980 [2024-07-23 01:51:31.993474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:18.980 qpair failed and we were unable to recover it. 
00:30:18.980 [2024-07-23 01:51:32.003252] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.980 [2024-07-23 01:51:32.003399] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.980 [2024-07-23 01:51:32.003429] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.980 [2024-07-23 01:51:32.003444] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.980 [2024-07-23 01:51:32.003458] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:18.980 [2024-07-23 01:51:32.003488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:18.980 qpair failed and we were unable to recover it. 
00:30:18.980 [2024-07-23 01:51:32.013289] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.980 [2024-07-23 01:51:32.013439] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.980 [2024-07-23 01:51:32.013466] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.980 [2024-07-23 01:51:32.013481] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.980 [2024-07-23 01:51:32.013495] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:18.980 [2024-07-23 01:51:32.013537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:18.980 qpair failed and we were unable to recover it. 
00:30:18.980 [2024-07-23 01:51:32.023314] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.980 [2024-07-23 01:51:32.023518] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.980 [2024-07-23 01:51:32.023547] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.980 [2024-07-23 01:51:32.023561] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.980 [2024-07-23 01:51:32.023575] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:18.980 [2024-07-23 01:51:32.023606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:18.980 qpair failed and we were unable to recover it. 
00:30:18.980 [2024-07-23 01:51:32.033378] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.980 [2024-07-23 01:51:32.033522] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.980 [2024-07-23 01:51:32.033548] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.980 [2024-07-23 01:51:32.033563] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.980 [2024-07-23 01:51:32.033577] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:18.980 [2024-07-23 01:51:32.033607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:18.980 qpair failed and we were unable to recover it. 
00:30:18.980 [2024-07-23 01:51:32.043378] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.980 [2024-07-23 01:51:32.043516] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.980 [2024-07-23 01:51:32.043542] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.980 [2024-07-23 01:51:32.043557] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.980 [2024-07-23 01:51:32.043570] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:18.980 [2024-07-23 01:51:32.043601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:18.980 qpair failed and we were unable to recover it. 
00:30:18.980 [2024-07-23 01:51:32.053397] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.980 [2024-07-23 01:51:32.053541] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.980 [2024-07-23 01:51:32.053567] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.980 [2024-07-23 01:51:32.053586] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.980 [2024-07-23 01:51:32.053600] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:18.980 [2024-07-23 01:51:32.053638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:18.980 qpair failed and we were unable to recover it. 
00:30:18.980 [2024-07-23 01:51:32.063462] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.980 [2024-07-23 01:51:32.063605] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.980 [2024-07-23 01:51:32.063637] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.980 [2024-07-23 01:51:32.063653] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.980 [2024-07-23 01:51:32.063668] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:18.980 [2024-07-23 01:51:32.063697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:18.980 qpair failed and we were unable to recover it. 
00:30:18.980 [2024-07-23 01:51:32.073452] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:18.980 [2024-07-23 01:51:32.073594] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:18.980 [2024-07-23 01:51:32.073626] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:18.981 [2024-07-23 01:51:32.073643] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:18.981 [2024-07-23 01:51:32.073656] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:18.981 [2024-07-23 01:51:32.073687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:18.981 qpair failed and we were unable to recover it. 
00:30:19.239 [2024-07-23 01:51:32.083494] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.239 [2024-07-23 01:51:32.083641] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.239 [2024-07-23 01:51:32.083668] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.239 [2024-07-23 01:51:32.083682] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.239 [2024-07-23 01:51:32.083695] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:19.239 [2024-07-23 01:51:32.083725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:19.239 qpair failed and we were unable to recover it. 
00:30:19.239 [2024-07-23 01:51:32.093503] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.239 [2024-07-23 01:51:32.093648] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.239 [2024-07-23 01:51:32.093674] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.239 [2024-07-23 01:51:32.093689] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.239 [2024-07-23 01:51:32.093701] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:19.239 [2024-07-23 01:51:32.093732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:19.239 qpair failed and we were unable to recover it. 
00:30:19.239 [2024-07-23 01:51:32.103561] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.239 [2024-07-23 01:51:32.103725] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.239 [2024-07-23 01:51:32.103752] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.239 [2024-07-23 01:51:32.103766] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.239 [2024-07-23 01:51:32.103780] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:19.239 [2024-07-23 01:51:32.103810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:19.239 qpair failed and we were unable to recover it. 
00:30:19.239 [2024-07-23 01:51:32.113567] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.239 [2024-07-23 01:51:32.113716] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.239 [2024-07-23 01:51:32.113742] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.239 [2024-07-23 01:51:32.113756] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.239 [2024-07-23 01:51:32.113769] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:19.239 [2024-07-23 01:51:32.113800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:19.239 qpair failed and we were unable to recover it. 
00:30:19.239 [2024-07-23 01:51:32.123610] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.239 [2024-07-23 01:51:32.123773] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.239 [2024-07-23 01:51:32.123799] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.239 [2024-07-23 01:51:32.123814] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.239 [2024-07-23 01:51:32.123828] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:19.239 [2024-07-23 01:51:32.123859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:19.239 qpair failed and we were unable to recover it. 
00:30:19.239 [2024-07-23 01:51:32.133668] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.239 [2024-07-23 01:51:32.133819] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.239 [2024-07-23 01:51:32.133844] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.239 [2024-07-23 01:51:32.133859] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.239 [2024-07-23 01:51:32.133874] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:19.239 [2024-07-23 01:51:32.133903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:19.239 qpair failed and we were unable to recover it. 
00:30:19.239 [2024-07-23 01:51:32.143654] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.239 [2024-07-23 01:51:32.143795] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.239 [2024-07-23 01:51:32.143820] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.239 [2024-07-23 01:51:32.143840] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.239 [2024-07-23 01:51:32.143855] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:19.239 [2024-07-23 01:51:32.143884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:19.239 qpair failed and we were unable to recover it. 
00:30:19.239 [2024-07-23 01:51:32.153700] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.239 [2024-07-23 01:51:32.153842] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.239 [2024-07-23 01:51:32.153868] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.239 [2024-07-23 01:51:32.153882] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.239 [2024-07-23 01:51:32.153895] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:19.239 [2024-07-23 01:51:32.153940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:19.239 qpair failed and we were unable to recover it. 
00:30:19.239 [2024-07-23 01:51:32.163724] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.239 [2024-07-23 01:51:32.163862] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.239 [2024-07-23 01:51:32.163887] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.239 [2024-07-23 01:51:32.163902] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.239 [2024-07-23 01:51:32.163915] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:19.239 [2024-07-23 01:51:32.163957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:19.239 qpair failed and we were unable to recover it. 
00:30:19.240 [2024-07-23 01:51:32.173749] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.240 [2024-07-23 01:51:32.173895] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.240 [2024-07-23 01:51:32.173921] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.240 [2024-07-23 01:51:32.173935] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.240 [2024-07-23 01:51:32.173948] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:19.240 [2024-07-23 01:51:32.173978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:19.240 qpair failed and we were unable to recover it. 
00:30:19.240 [2024-07-23 01:51:32.183763] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.240 [2024-07-23 01:51:32.183925] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.240 [2024-07-23 01:51:32.183951] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.240 [2024-07-23 01:51:32.183966] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.240 [2024-07-23 01:51:32.183979] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:19.240 [2024-07-23 01:51:32.184009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:19.240 qpair failed and we were unable to recover it. 
00:30:19.240 [2024-07-23 01:51:32.193822] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.240 [2024-07-23 01:51:32.193978] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.240 [2024-07-23 01:51:32.194004] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.240 [2024-07-23 01:51:32.194020] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.240 [2024-07-23 01:51:32.194033] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:19.240 [2024-07-23 01:51:32.194077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:19.240 qpair failed and we were unable to recover it. 
00:30:19.240 [2024-07-23 01:51:32.203839] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.240 [2024-07-23 01:51:32.203991] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.240 [2024-07-23 01:51:32.204017] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.240 [2024-07-23 01:51:32.204032] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.240 [2024-07-23 01:51:32.204045] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:19.240 [2024-07-23 01:51:32.204075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:19.240 qpair failed and we were unable to recover it. 
00:30:19.240 [2024-07-23 01:51:32.213946] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.240 [2024-07-23 01:51:32.214104] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.240 [2024-07-23 01:51:32.214131] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.240 [2024-07-23 01:51:32.214165] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.240 [2024-07-23 01:51:32.214179] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:19.240 [2024-07-23 01:51:32.214238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:19.240 qpair failed and we were unable to recover it. 
00:30:19.240 [2024-07-23 01:51:32.223923] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.240 [2024-07-23 01:51:32.224113] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.240 [2024-07-23 01:51:32.224155] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.240 [2024-07-23 01:51:32.224173] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.240 [2024-07-23 01:51:32.224187] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:19.240 [2024-07-23 01:51:32.224231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:19.240 qpair failed and we were unable to recover it. 
00:30:19.240 [2024-07-23 01:51:32.233927] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.240 [2024-07-23 01:51:32.234069] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.240 [2024-07-23 01:51:32.234101] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.240 [2024-07-23 01:51:32.234116] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.240 [2024-07-23 01:51:32.234129] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:19.240 [2024-07-23 01:51:32.234160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:19.240 qpair failed and we were unable to recover it. 
00:30:19.240 [2024-07-23 01:51:32.243977] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.240 [2024-07-23 01:51:32.244117] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.240 [2024-07-23 01:51:32.244143] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.240 [2024-07-23 01:51:32.244158] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.240 [2024-07-23 01:51:32.244170] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:19.240 [2024-07-23 01:51:32.244201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:19.240 qpair failed and we were unable to recover it. 
00:30:19.240 [2024-07-23 01:51:32.254000] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.240 [2024-07-23 01:51:32.254144] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.240 [2024-07-23 01:51:32.254170] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.240 [2024-07-23 01:51:32.254185] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.240 [2024-07-23 01:51:32.254197] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:19.240 [2024-07-23 01:51:32.254239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:19.240 qpair failed and we were unable to recover it. 
00:30:19.240 [2024-07-23 01:51:32.264023] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.240 [2024-07-23 01:51:32.264167] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.240 [2024-07-23 01:51:32.264193] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.240 [2024-07-23 01:51:32.264208] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.240 [2024-07-23 01:51:32.264221] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:19.240 [2024-07-23 01:51:32.264267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:19.240 qpair failed and we were unable to recover it. 
00:30:19.240 [2024-07-23 01:51:32.274052] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.240 [2024-07-23 01:51:32.274200] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.240 [2024-07-23 01:51:32.274226] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.240 [2024-07-23 01:51:32.274241] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.240 [2024-07-23 01:51:32.274254] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:19.240 [2024-07-23 01:51:32.274290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:19.240 qpair failed and we were unable to recover it. 
00:30:19.240 [2024-07-23 01:51:32.284082] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.240 [2024-07-23 01:51:32.284226] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.240 [2024-07-23 01:51:32.284252] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.240 [2024-07-23 01:51:32.284267] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.240 [2024-07-23 01:51:32.284280] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:19.240 [2024-07-23 01:51:32.284310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:19.240 qpair failed and we were unable to recover it. 
00:30:19.240 [2024-07-23 01:51:32.294136] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.240 [2024-07-23 01:51:32.294280] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.240 [2024-07-23 01:51:32.294307] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.241 [2024-07-23 01:51:32.294322] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.241 [2024-07-23 01:51:32.294336] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:19.241 [2024-07-23 01:51:32.294365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:19.241 qpair failed and we were unable to recover it. 
00:30:19.241 [2024-07-23 01:51:32.304150] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.241 [2024-07-23 01:51:32.304314] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.241 [2024-07-23 01:51:32.304342] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.241 [2024-07-23 01:51:32.304357] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.241 [2024-07-23 01:51:32.304370] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:19.241 [2024-07-23 01:51:32.304416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:19.241 qpair failed and we were unable to recover it. 
00:30:19.241 [2024-07-23 01:51:32.314147] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.241 [2024-07-23 01:51:32.314283] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.241 [2024-07-23 01:51:32.314309] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.241 [2024-07-23 01:51:32.314324] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.241 [2024-07-23 01:51:32.314337] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:19.241 [2024-07-23 01:51:32.314367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:19.241 qpair failed and we were unable to recover it. 
00:30:19.241 [2024-07-23 01:51:32.324210] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.241 [2024-07-23 01:51:32.324355] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.241 [2024-07-23 01:51:32.324386] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.241 [2024-07-23 01:51:32.324402] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.241 [2024-07-23 01:51:32.324415] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:19.241 [2024-07-23 01:51:32.324457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:19.241 qpair failed and we were unable to recover it. 
00:30:19.241 [2024-07-23 01:51:32.334227] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.241 [2024-07-23 01:51:32.334373] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.241 [2024-07-23 01:51:32.334401] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.241 [2024-07-23 01:51:32.334419] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.241 [2024-07-23 01:51:32.334448] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:19.241 [2024-07-23 01:51:32.334489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:19.241 qpair failed and we were unable to recover it. 
00:30:19.499 [2024-07-23 01:51:32.344248] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.499 [2024-07-23 01:51:32.344395] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.499 [2024-07-23 01:51:32.344422] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.499 [2024-07-23 01:51:32.344437] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.499 [2024-07-23 01:51:32.344449] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:19.499 [2024-07-23 01:51:32.344480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:19.499 qpair failed and we were unable to recover it. 
00:30:19.499 [2024-07-23 01:51:32.354294] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.499 [2024-07-23 01:51:32.354433] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.499 [2024-07-23 01:51:32.354460] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.499 [2024-07-23 01:51:32.354474] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.499 [2024-07-23 01:51:32.354487] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:19.499 [2024-07-23 01:51:32.354532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:19.499 qpair failed and we were unable to recover it. 
00:30:19.499 [2024-07-23 01:51:32.364326] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.499 [2024-07-23 01:51:32.364464] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.499 [2024-07-23 01:51:32.364490] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.499 [2024-07-23 01:51:32.364504] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.499 [2024-07-23 01:51:32.364517] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:19.499 [2024-07-23 01:51:32.364553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:19.499 qpair failed and we were unable to recover it. 
00:30:19.499 [2024-07-23 01:51:32.374362] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.499 [2024-07-23 01:51:32.374505] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.499 [2024-07-23 01:51:32.374531] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.499 [2024-07-23 01:51:32.374546] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.499 [2024-07-23 01:51:32.374561] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:19.499 [2024-07-23 01:51:32.374623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:19.499 qpair failed and we were unable to recover it. 
00:30:19.499 [2024-07-23 01:51:32.384372] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.499 [2024-07-23 01:51:32.384513] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.499 [2024-07-23 01:51:32.384539] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.499 [2024-07-23 01:51:32.384554] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.499 [2024-07-23 01:51:32.384566] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:19.499 [2024-07-23 01:51:32.384597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:19.499 qpair failed and we were unable to recover it. 
00:30:19.499 [2024-07-23 01:51:32.394378] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.499 [2024-07-23 01:51:32.394519] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.499 [2024-07-23 01:51:32.394546] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.499 [2024-07-23 01:51:32.394561] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.499 [2024-07-23 01:51:32.394574] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:19.499 [2024-07-23 01:51:32.394604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:19.499 qpair failed and we were unable to recover it. 
00:30:19.499 [2024-07-23 01:51:32.404426] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.499 [2024-07-23 01:51:32.404585] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.499 [2024-07-23 01:51:32.404612] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.499 [2024-07-23 01:51:32.404641] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.499 [2024-07-23 01:51:32.404655] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:19.499 [2024-07-23 01:51:32.404687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:19.499 qpair failed and we were unable to recover it. 
00:30:19.499 [2024-07-23 01:51:32.414454] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.499 [2024-07-23 01:51:32.414595] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.499 [2024-07-23 01:51:32.414639] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.499 [2024-07-23 01:51:32.414656] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.499 [2024-07-23 01:51:32.414670] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:19.499 [2024-07-23 01:51:32.414699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:19.499 qpair failed and we were unable to recover it. 
00:30:19.499 [2024-07-23 01:51:32.424588] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.499 [2024-07-23 01:51:32.424780] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.499 [2024-07-23 01:51:32.424806] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.499 [2024-07-23 01:51:32.424820] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.499 [2024-07-23 01:51:32.424833] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:19.499 [2024-07-23 01:51:32.424864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:19.499 qpair failed and we were unable to recover it. 
00:30:19.499 [2024-07-23 01:51:32.434539] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.499 [2024-07-23 01:51:32.434690] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.499 [2024-07-23 01:51:32.434717] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.499 [2024-07-23 01:51:32.434735] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.499 [2024-07-23 01:51:32.434749] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:19.499 [2024-07-23 01:51:32.434779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:19.499 qpair failed and we were unable to recover it. 
00:30:19.500 [2024-07-23 01:51:32.444542] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.500 [2024-07-23 01:51:32.444688] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.500 [2024-07-23 01:51:32.444714] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.500 [2024-07-23 01:51:32.444729] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.500 [2024-07-23 01:51:32.444741] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:19.500 [2024-07-23 01:51:32.444772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:19.500 qpair failed and we were unable to recover it. 
00:30:19.500 [2024-07-23 01:51:32.454550] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.500 [2024-07-23 01:51:32.454695] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.500 [2024-07-23 01:51:32.454721] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.500 [2024-07-23 01:51:32.454735] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.500 [2024-07-23 01:51:32.454755] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:19.500 [2024-07-23 01:51:32.454786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:19.500 qpair failed and we were unable to recover it. 
00:30:19.500 [2024-07-23 01:51:32.464598] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.500 [2024-07-23 01:51:32.464768] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.500 [2024-07-23 01:51:32.464794] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.500 [2024-07-23 01:51:32.464809] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.500 [2024-07-23 01:51:32.464822] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:19.500 [2024-07-23 01:51:32.464852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:19.500 qpair failed and we were unable to recover it. 
00:30:19.500 [2024-07-23 01:51:32.474621] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.500 [2024-07-23 01:51:32.474810] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.500 [2024-07-23 01:51:32.474836] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.500 [2024-07-23 01:51:32.474850] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.500 [2024-07-23 01:51:32.474865] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:19.500 [2024-07-23 01:51:32.474895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:19.500 qpair failed and we were unable to recover it. 
00:30:19.500 [2024-07-23 01:51:32.484682] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.500 [2024-07-23 01:51:32.484836] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.500 [2024-07-23 01:51:32.484864] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.500 [2024-07-23 01:51:32.484882] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.500 [2024-07-23 01:51:32.484897] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:19.500 [2024-07-23 01:51:32.484927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:19.500 qpair failed and we were unable to recover it. 
00:30:19.500 [2024-07-23 01:51:32.494689] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.500 [2024-07-23 01:51:32.494835] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.500 [2024-07-23 01:51:32.494861] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.500 [2024-07-23 01:51:32.494875] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.500 [2024-07-23 01:51:32.494889] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:19.500 [2024-07-23 01:51:32.494920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:19.500 qpair failed and we were unable to recover it. 
00:30:19.500 [2024-07-23 01:51:32.504715] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.500 [2024-07-23 01:51:32.504899] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.500 [2024-07-23 01:51:32.504940] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.500 [2024-07-23 01:51:32.504955] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.500 [2024-07-23 01:51:32.504968] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:19.500 [2024-07-23 01:51:32.505013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:19.500 qpair failed and we were unable to recover it. 
00:30:19.500 [2024-07-23 01:51:32.514732] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.500 [2024-07-23 01:51:32.514868] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.500 [2024-07-23 01:51:32.514894] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.500 [2024-07-23 01:51:32.514909] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.500 [2024-07-23 01:51:32.514922] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:19.500 [2024-07-23 01:51:32.514952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:19.500 qpair failed and we were unable to recover it. 
00:30:19.500 [2024-07-23 01:51:32.524803] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.500 [2024-07-23 01:51:32.524982] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.500 [2024-07-23 01:51:32.525008] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.500 [2024-07-23 01:51:32.525023] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.500 [2024-07-23 01:51:32.525037] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:19.500 [2024-07-23 01:51:32.525068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:19.500 qpair failed and we were unable to recover it. 
00:30:19.500 [2024-07-23 01:51:32.534815] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.500 [2024-07-23 01:51:32.534961] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.500 [2024-07-23 01:51:32.534987] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.500 [2024-07-23 01:51:32.535002] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.500 [2024-07-23 01:51:32.535017] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:19.500 [2024-07-23 01:51:32.535058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:19.500 qpair failed and we were unable to recover it. 
00:30:19.500 [2024-07-23 01:51:32.544851] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.500 [2024-07-23 01:51:32.544991] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.500 [2024-07-23 01:51:32.545018] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.500 [2024-07-23 01:51:32.545032] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.500 [2024-07-23 01:51:32.545052] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:19.500 [2024-07-23 01:51:32.545083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:19.500 qpair failed and we were unable to recover it. 
00:30:19.500 [2024-07-23 01:51:32.554878] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.500 [2024-07-23 01:51:32.555060] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.500 [2024-07-23 01:51:32.555086] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.500 [2024-07-23 01:51:32.555101] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.500 [2024-07-23 01:51:32.555114] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:19.500 [2024-07-23 01:51:32.555143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:19.500 qpair failed and we were unable to recover it. 
00:30:19.500 [2024-07-23 01:51:32.564879] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.500 [2024-07-23 01:51:32.565017] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.501 [2024-07-23 01:51:32.565043] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.501 [2024-07-23 01:51:32.565058] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.501 [2024-07-23 01:51:32.565071] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:19.501 [2024-07-23 01:51:32.565101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:19.501 qpair failed and we were unable to recover it. 
00:30:19.501 [2024-07-23 01:51:32.575005] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.501 [2024-07-23 01:51:32.575157] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.501 [2024-07-23 01:51:32.575183] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.501 [2024-07-23 01:51:32.575198] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.501 [2024-07-23 01:51:32.575210] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:19.501 [2024-07-23 01:51:32.575241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:19.501 qpair failed and we were unable to recover it. 
00:30:19.501 [2024-07-23 01:51:32.584951] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.501 [2024-07-23 01:51:32.585086] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.501 [2024-07-23 01:51:32.585113] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.501 [2024-07-23 01:51:32.585127] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.501 [2024-07-23 01:51:32.585140] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:19.501 [2024-07-23 01:51:32.585169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:19.501 qpair failed and we were unable to recover it. 
00:30:19.501 [2024-07-23 01:51:32.594961] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.501 [2024-07-23 01:51:32.595097] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.501 [2024-07-23 01:51:32.595124] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.501 [2024-07-23 01:51:32.595138] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.501 [2024-07-23 01:51:32.595152] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:19.501 [2024-07-23 01:51:32.595181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:19.501 qpair failed and we were unable to recover it. 
00:30:19.759 [2024-07-23 01:51:32.605000] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.759 [2024-07-23 01:51:32.605159] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.759 [2024-07-23 01:51:32.605185] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.759 [2024-07-23 01:51:32.605200] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.759 [2024-07-23 01:51:32.605214] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:19.759 [2024-07-23 01:51:32.605244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:19.759 qpair failed and we were unable to recover it. 
00:30:19.759 [2024-07-23 01:51:32.615047] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.759 [2024-07-23 01:51:32.615207] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.759 [2024-07-23 01:51:32.615233] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.759 [2024-07-23 01:51:32.615248] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.759 [2024-07-23 01:51:32.615261] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:19.759 [2024-07-23 01:51:32.615291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:19.759 qpair failed and we were unable to recover it. 
00:30:19.759 [2024-07-23 01:51:32.625073] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.759 [2024-07-23 01:51:32.625251] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.759 [2024-07-23 01:51:32.625275] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.759 [2024-07-23 01:51:32.625290] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.759 [2024-07-23 01:51:32.625302] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:19.759 [2024-07-23 01:51:32.625331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:19.759 qpair failed and we were unable to recover it. 
00:30:19.759 [2024-07-23 01:51:32.635095] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.759 [2024-07-23 01:51:32.635229] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.759 [2024-07-23 01:51:32.635255] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.759 [2024-07-23 01:51:32.635275] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.759 [2024-07-23 01:51:32.635290] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:19.759 [2024-07-23 01:51:32.635319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:19.759 qpair failed and we were unable to recover it. 
00:30:19.759 [2024-07-23 01:51:32.645107] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.759 [2024-07-23 01:51:32.645239] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.759 [2024-07-23 01:51:32.645264] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.759 [2024-07-23 01:51:32.645279] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.759 [2024-07-23 01:51:32.645291] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:19.759 [2024-07-23 01:51:32.645321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:19.759 qpair failed and we were unable to recover it. 
00:30:19.759 [2024-07-23 01:51:32.655146] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.759 [2024-07-23 01:51:32.655289] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.759 [2024-07-23 01:51:32.655315] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.759 [2024-07-23 01:51:32.655330] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.759 [2024-07-23 01:51:32.655342] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:19.759 [2024-07-23 01:51:32.655372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:19.759 qpair failed and we were unable to recover it. 
00:30:19.759 [2024-07-23 01:51:32.665181] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.759 [2024-07-23 01:51:32.665319] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.759 [2024-07-23 01:51:32.665345] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.759 [2024-07-23 01:51:32.665360] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.759 [2024-07-23 01:51:32.665373] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:19.759 [2024-07-23 01:51:32.665402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:19.759 qpair failed and we were unable to recover it. 
00:30:19.759 [2024-07-23 01:51:32.675219] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.759 [2024-07-23 01:51:32.675366] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.759 [2024-07-23 01:51:32.675392] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.759 [2024-07-23 01:51:32.675407] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.759 [2024-07-23 01:51:32.675420] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:19.760 [2024-07-23 01:51:32.675465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:19.760 qpair failed and we were unable to recover it. 
00:30:19.760 [2024-07-23 01:51:32.685243] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.760 [2024-07-23 01:51:32.685379] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.760 [2024-07-23 01:51:32.685404] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.760 [2024-07-23 01:51:32.685419] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.760 [2024-07-23 01:51:32.685433] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:19.760 [2024-07-23 01:51:32.685463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:19.760 qpair failed and we were unable to recover it. 
00:30:19.760 [2024-07-23 01:51:32.695331] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.760 [2024-07-23 01:51:32.695476] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.760 [2024-07-23 01:51:32.695502] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.760 [2024-07-23 01:51:32.695516] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.760 [2024-07-23 01:51:32.695529] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:19.760 [2024-07-23 01:51:32.695560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:19.760 qpair failed and we were unable to recover it. 
00:30:19.760 [2024-07-23 01:51:32.705303] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.760 [2024-07-23 01:51:32.705452] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.760 [2024-07-23 01:51:32.705479] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.760 [2024-07-23 01:51:32.705494] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.760 [2024-07-23 01:51:32.705507] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:19.760 [2024-07-23 01:51:32.705553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:19.760 qpair failed and we were unable to recover it. 
00:30:19.760 [2024-07-23 01:51:32.715391] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.760 [2024-07-23 01:51:32.715530] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.760 [2024-07-23 01:51:32.715556] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.760 [2024-07-23 01:51:32.715571] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.760 [2024-07-23 01:51:32.715585] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:19.760 [2024-07-23 01:51:32.715633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:19.760 qpair failed and we were unable to recover it. 
00:30:19.760 [2024-07-23 01:51:32.725363] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.760 [2024-07-23 01:51:32.725561] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.760 [2024-07-23 01:51:32.725592] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.760 [2024-07-23 01:51:32.725609] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.760 [2024-07-23 01:51:32.725630] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:19.760 [2024-07-23 01:51:32.725661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:19.760 qpair failed and we were unable to recover it. 
00:30:19.760 [2024-07-23 01:51:32.735374] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.760 [2024-07-23 01:51:32.735565] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.760 [2024-07-23 01:51:32.735591] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.760 [2024-07-23 01:51:32.735605] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.760 [2024-07-23 01:51:32.735629] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:19.760 [2024-07-23 01:51:32.735660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:19.760 qpair failed and we were unable to recover it. 
00:30:19.760 [2024-07-23 01:51:32.745441] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.760 [2024-07-23 01:51:32.745593] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.760 [2024-07-23 01:51:32.745633] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.760 [2024-07-23 01:51:32.745654] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.760 [2024-07-23 01:51:32.745669] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:19.760 [2024-07-23 01:51:32.745701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:19.760 qpair failed and we were unable to recover it. 
00:30:19.760 [2024-07-23 01:51:32.755432] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.760 [2024-07-23 01:51:32.755622] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.760 [2024-07-23 01:51:32.755665] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.760 [2024-07-23 01:51:32.755686] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.760 [2024-07-23 01:51:32.755702] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:19.760 [2024-07-23 01:51:32.755733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:19.760 qpair failed and we were unable to recover it. 
00:30:19.760 [2024-07-23 01:51:32.765493] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.760 [2024-07-23 01:51:32.765677] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.760 [2024-07-23 01:51:32.765706] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.760 [2024-07-23 01:51:32.765724] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.760 [2024-07-23 01:51:32.765739] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:19.760 [2024-07-23 01:51:32.765771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:19.760 qpair failed and we were unable to recover it. 
00:30:19.760 [2024-07-23 01:51:32.775572] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.760 [2024-07-23 01:51:32.775720] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.760 [2024-07-23 01:51:32.775747] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.760 [2024-07-23 01:51:32.775762] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.760 [2024-07-23 01:51:32.775776] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:19.760 [2024-07-23 01:51:32.775806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:19.760 qpair failed and we were unable to recover it. 
00:30:19.760 [2024-07-23 01:51:32.785511] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.760 [2024-07-23 01:51:32.785664] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.760 [2024-07-23 01:51:32.785691] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.760 [2024-07-23 01:51:32.785706] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.760 [2024-07-23 01:51:32.785720] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec8000b90 00:30:19.760 [2024-07-23 01:51:32.785750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:19.760 qpair failed and we were unable to recover it. 
00:30:19.760 [2024-07-23 01:51:32.795550] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.760 [2024-07-23 01:51:32.795737] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.760 [2024-07-23 01:51:32.795771] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.760 [2024-07-23 01:51:32.795788] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.760 [2024-07-23 01:51:32.795802] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fceb8000b90 00:30:19.760 [2024-07-23 01:51:32.795835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.760 qpair failed and we were unable to recover it. 
00:30:19.760 [2024-07-23 01:51:32.805620] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.760 [2024-07-23 01:51:32.805766] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.760 [2024-07-23 01:51:32.805795] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.761 [2024-07-23 01:51:32.805810] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.761 [2024-07-23 01:51:32.805823] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fceb8000b90 00:30:19.761 [2024-07-23 01:51:32.805866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.761 qpair failed and we were unable to recover it. 
00:30:19.761 [2024-07-23 01:51:32.815705] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.761 [2024-07-23 01:51:32.815896] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.761 [2024-07-23 01:51:32.815929] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.761 [2024-07-23 01:51:32.815944] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.761 [2024-07-23 01:51:32.815958] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fceb8000b90 00:30:19.761 [2024-07-23 01:51:32.815988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.761 qpair failed and we were unable to recover it. 
00:30:19.761 [2024-07-23 01:51:32.825657] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.761 [2024-07-23 01:51:32.825812] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.761 [2024-07-23 01:51:32.825845] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.761 [2024-07-23 01:51:32.825861] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.761 [2024-07-23 01:51:32.825875] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec0000b90 00:30:19.761 [2024-07-23 01:51:32.825930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:19.761 qpair failed and we were unable to recover it. 
00:30:19.761 [2024-07-23 01:51:32.835720] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.761 [2024-07-23 01:51:32.835865] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.761 [2024-07-23 01:51:32.835893] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.761 [2024-07-23 01:51:32.835909] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.761 [2024-07-23 01:51:32.835932] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fcec0000b90 00:30:19.761 [2024-07-23 01:51:32.835977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:19.761 qpair failed and we were unable to recover it. 
00:30:19.761 [2024-07-23 01:51:32.836253] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf91100 is same with the state(5) to be set 00:30:19.761 [2024-07-23 01:51:32.845743] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.761 [2024-07-23 01:51:32.845888] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.761 [2024-07-23 01:51:32.845922] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.761 [2024-07-23 01:51:32.845938] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.761 [2024-07-23 01:51:32.845950] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf83610 00:30:19.761 [2024-07-23 01:51:32.845996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.761 qpair failed and we were unable to recover it. 
00:30:19.761 [2024-07-23 01:51:32.855735] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:19.761 [2024-07-23 01:51:32.855906] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:19.761 [2024-07-23 01:51:32.855934] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:19.761 [2024-07-23 01:51:32.855948] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:19.761 [2024-07-23 01:51:32.855976] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xf83610 00:30:19.761 [2024-07-23 01:51:32.856006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:19.761 qpair failed and we were unable to recover it. 00:30:19.761 [2024-07-23 01:51:32.856324] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf91100 (9): Bad file descriptor 00:30:20.019 Initializing NVMe Controllers 00:30:20.019 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:20.019 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:20.019 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:30:20.019 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:30:20.019 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:30:20.019 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:30:20.019 Initialization complete. Launching workers. 
00:30:20.019 Starting thread on core 1 00:30:20.019 Starting thread on core 2 00:30:20.019 Starting thread on core 3 00:30:20.019 Starting thread on core 0 00:30:20.019 01:51:32 -- host/target_disconnect.sh@59 -- # sync 00:30:20.019 00:30:20.019 real 0m11.445s 00:30:20.019 user 0m20.244s 00:30:20.019 sys 0m5.441s 00:30:20.019 01:51:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:20.019 01:51:32 -- common/autotest_common.sh@10 -- # set +x 00:30:20.019 ************************************ 00:30:20.019 END TEST nvmf_target_disconnect_tc2 00:30:20.019 ************************************ 00:30:20.019 01:51:32 -- host/target_disconnect.sh@80 -- # '[' -n '' ']' 00:30:20.019 01:51:32 -- host/target_disconnect.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:30:20.019 01:51:32 -- host/target_disconnect.sh@85 -- # nvmftestfini 00:30:20.019 01:51:32 -- nvmf/common.sh@476 -- # nvmfcleanup 00:30:20.019 01:51:32 -- nvmf/common.sh@116 -- # sync 00:30:20.019 01:51:32 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:30:20.019 01:51:32 -- nvmf/common.sh@119 -- # set +e 00:30:20.019 01:51:32 -- nvmf/common.sh@120 -- # for i in {1..20} 00:30:20.019 01:51:32 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:30:20.019 rmmod nvme_tcp 00:30:20.019 rmmod nvme_fabrics 00:30:20.019 rmmod nvme_keyring 00:30:20.019 01:51:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:30:20.019 01:51:32 -- nvmf/common.sh@123 -- # set -e 00:30:20.019 01:51:32 -- nvmf/common.sh@124 -- # return 0 00:30:20.019 01:51:32 -- nvmf/common.sh@477 -- # '[' -n 3908465 ']' 00:30:20.019 01:51:32 -- nvmf/common.sh@478 -- # killprocess 3908465 00:30:20.019 01:51:32 -- common/autotest_common.sh@926 -- # '[' -z 3908465 ']' 00:30:20.019 01:51:32 -- common/autotest_common.sh@930 -- # kill -0 3908465 00:30:20.019 01:51:32 -- common/autotest_common.sh@931 -- # uname 00:30:20.019 01:51:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:20.019 01:51:32 -- common/autotest_common.sh@932 -- # 
ps --no-headers -o comm= 3908465 00:30:20.019 01:51:32 -- common/autotest_common.sh@932 -- # process_name=reactor_4 00:30:20.019 01:51:32 -- common/autotest_common.sh@936 -- # '[' reactor_4 = sudo ']' 00:30:20.019 01:51:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3908465' 00:30:20.019 killing process with pid 3908465 00:30:20.019 01:51:32 -- common/autotest_common.sh@945 -- # kill 3908465 00:30:20.019 01:51:32 -- common/autotest_common.sh@950 -- # wait 3908465 00:30:20.279 01:51:33 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:30:20.279 01:51:33 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:30:20.279 01:51:33 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:30:20.279 01:51:33 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:20.279 01:51:33 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:30:20.279 01:51:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:20.279 01:51:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:20.279 01:51:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:22.181 01:51:35 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:30:22.182 00:30:22.182 real 0m15.962s 00:30:22.182 user 0m46.217s 00:30:22.182 sys 0m7.325s 00:30:22.182 01:51:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:22.182 01:51:35 -- common/autotest_common.sh@10 -- # set +x 00:30:22.182 ************************************ 00:30:22.182 END TEST nvmf_target_disconnect 00:30:22.182 ************************************ 00:30:22.440 01:51:35 -- nvmf/nvmf.sh@127 -- # timing_exit host 00:30:22.440 01:51:35 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:22.440 01:51:35 -- common/autotest_common.sh@10 -- # set +x 00:30:22.440 01:51:35 -- nvmf/nvmf.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:30:22.440 00:30:22.440 real 22m23.108s 00:30:22.440 user 64m33.246s 00:30:22.440 sys 5m38.563s 00:30:22.440 01:51:35 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:30:22.440 01:51:35 -- common/autotest_common.sh@10 -- # set +x 00:30:22.440 ************************************ 00:30:22.440 END TEST nvmf_tcp 00:30:22.440 ************************************ 00:30:22.440 01:51:35 -- spdk/autotest.sh@296 -- # [[ 0 -eq 0 ]] 00:30:22.440 01:51:35 -- spdk/autotest.sh@297 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:30:22.440 01:51:35 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:30:22.440 01:51:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:22.440 01:51:35 -- common/autotest_common.sh@10 -- # set +x 00:30:22.440 ************************************ 00:30:22.440 START TEST spdkcli_nvmf_tcp 00:30:22.440 ************************************ 00:30:22.440 01:51:35 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:30:22.440 * Looking for test storage... 
00:30:22.440 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:30:22.440 01:51:35 -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:30:22.440 01:51:35 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:30:22.440 01:51:35 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:30:22.440 01:51:35 -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:22.440 01:51:35 -- nvmf/common.sh@7 -- # uname -s 00:30:22.440 01:51:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:22.440 01:51:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:22.440 01:51:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:22.440 01:51:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:22.440 01:51:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:22.440 01:51:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:22.440 01:51:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:22.440 01:51:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:22.440 01:51:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:22.440 01:51:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:22.440 01:51:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:22.440 01:51:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:22.440 01:51:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:22.440 01:51:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:22.440 01:51:35 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:22.440 01:51:35 -- nvmf/common.sh@44 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:22.440 01:51:35 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:22.440 01:51:35 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:22.440 01:51:35 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:22.440 01:51:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:22.440 01:51:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:22.441 01:51:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:22.441 01:51:35 -- paths/export.sh@5 -- # export PATH 00:30:22.441 01:51:35 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:22.441 01:51:35 -- nvmf/common.sh@46 -- # : 0 00:30:22.441 01:51:35 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:30:22.441 01:51:35 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:30:22.441 01:51:35 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:30:22.441 01:51:35 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:22.441 01:51:35 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:22.441 01:51:35 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:30:22.441 01:51:35 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:30:22.441 01:51:35 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:30:22.441 01:51:35 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:30:22.441 01:51:35 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:30:22.441 01:51:35 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:30:22.441 01:51:35 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:30:22.441 01:51:35 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:22.441 01:51:35 -- common/autotest_common.sh@10 -- # set +x 00:30:22.441 01:51:35 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:30:22.441 01:51:35 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3909608 00:30:22.441 01:51:35 -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:30:22.441 01:51:35 -- spdkcli/common.sh@34 -- # waitforlisten 3909608 00:30:22.441 01:51:35 -- common/autotest_common.sh@819 -- # '[' -z 3909608 ']' 00:30:22.441 01:51:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:22.441 01:51:35 -- 
common/autotest_common.sh@824 -- # local max_retries=100 00:30:22.441 01:51:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:22.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:22.441 01:51:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:22.441 01:51:35 -- common/autotest_common.sh@10 -- # set +x 00:30:22.441 [2024-07-23 01:51:35.446137] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:30:22.441 [2024-07-23 01:51:35.446223] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3909608 ] 00:30:22.441 EAL: No free 2048 kB hugepages reported on node 1 00:30:22.441 [2024-07-23 01:51:35.507927] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:22.699 [2024-07-23 01:51:35.591554] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:22.699 [2024-07-23 01:51:35.591762] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:22.699 [2024-07-23 01:51:35.591766] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:23.634 01:51:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:23.634 01:51:36 -- common/autotest_common.sh@852 -- # return 0 00:30:23.634 01:51:36 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:30:23.634 01:51:36 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:23.634 01:51:36 -- common/autotest_common.sh@10 -- # set +x 00:30:23.634 01:51:36 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:30:23.634 01:51:36 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:30:23.634 01:51:36 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:30:23.634 01:51:36 -- 
common/autotest_common.sh@712 -- # xtrace_disable 00:30:23.634 01:51:36 -- common/autotest_common.sh@10 -- # set +x 00:30:23.634 01:51:36 -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:30:23.634 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:30:23.634 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:30:23.634 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:30:23.634 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:30:23.634 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:30:23.634 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:30:23.634 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:30:23.634 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:30:23.634 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:30:23.634 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:23.634 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:23.634 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:30:23.634 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:23.634 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:23.634 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create 
Malloc1'\'' '\''Malloc1'\'' True 00:30:23.634 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:23.634 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:30:23.634 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:30:23.634 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:23.634 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:30:23.634 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:30:23.634 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:30:23.634 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:30:23.634 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:23.634 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:30:23.634 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:30:23.634 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:30:23.634 ' 00:30:23.892 [2024-07-23 01:51:36.793168] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:30:26.421 [2024-07-23 01:51:38.944050] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:27.355 [2024-07-23 01:51:40.184557] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
127.0.0.1 port 4260 *** 00:30:29.883 [2024-07-23 01:51:42.472027] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:30:31.781 [2024-07-23 01:51:44.442601] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:30:33.154 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:30:33.154 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:30:33.154 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:30:33.154 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:30:33.154 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:30:33.154 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:30:33.154 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:30:33.154 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:30:33.154 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:30:33.154 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:30:33.154 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:33.154 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:33.154 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:30:33.154 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:33.155 Executing 
command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:33.155 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:30:33.155 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:33.155 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:30:33.155 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:30:33.155 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:33.155 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:30:33.155 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:30:33.155 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:30:33.155 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:30:33.155 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:33.155 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:30:33.155 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:30:33.155 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:30:33.155 01:51:46 -- spdkcli/nvmf.sh@66 -- # 
timing_exit spdkcli_create_nvmf_config 00:30:33.155 01:51:46 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:33.155 01:51:46 -- common/autotest_common.sh@10 -- # set +x 00:30:33.155 01:51:46 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:30:33.155 01:51:46 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:33.155 01:51:46 -- common/autotest_common.sh@10 -- # set +x 00:30:33.155 01:51:46 -- spdkcli/nvmf.sh@69 -- # check_match 00:30:33.155 01:51:46 -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:30:33.413 01:51:46 -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:30:33.413 01:51:46 -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:30:33.413 01:51:46 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:30:33.413 01:51:46 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:33.413 01:51:46 -- common/autotest_common.sh@10 -- # set +x 00:30:33.671 01:51:46 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:30:33.671 01:51:46 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:33.671 01:51:46 -- common/autotest_common.sh@10 -- # set +x 00:30:33.671 01:51:46 -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:30:33.671 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:30:33.671 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:30:33.671 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 
00:30:33.671 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:30:33.671 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:30:33.671 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:30:33.671 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:30:33.671 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:30:33.671 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:30:33.671 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:30:33.671 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:30:33.671 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:30:33.671 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:30:33.671 ' 00:30:38.931 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:30:38.931 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:30:38.931 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:30:38.931 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:30:38.931 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:30:38.931 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:30:38.931 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:30:38.931 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:30:38.931 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:30:38.932 Executing 
command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:30:38.932 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:30:38.932 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:30:38.932 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:30:38.932 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:30:38.932 01:51:51 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:30:38.932 01:51:51 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:38.932 01:51:51 -- common/autotest_common.sh@10 -- # set +x 00:30:38.932 01:51:51 -- spdkcli/nvmf.sh@90 -- # killprocess 3909608 00:30:38.932 01:51:51 -- common/autotest_common.sh@926 -- # '[' -z 3909608 ']' 00:30:38.932 01:51:51 -- common/autotest_common.sh@930 -- # kill -0 3909608 00:30:38.932 01:51:51 -- common/autotest_common.sh@931 -- # uname 00:30:38.932 01:51:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:38.932 01:51:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3909608 00:30:38.932 01:51:51 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:30:38.932 01:51:51 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:30:38.932 01:51:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3909608' 00:30:38.932 killing process with pid 3909608 00:30:38.932 01:51:51 -- common/autotest_common.sh@945 -- # kill 3909608 00:30:38.932 [2024-07-23 01:51:51.774746] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:30:38.932 01:51:51 -- common/autotest_common.sh@950 -- # wait 3909608 00:30:38.932 01:51:51 -- spdkcli/nvmf.sh@1 -- # cleanup 00:30:38.932 01:51:51 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:30:38.932 01:51:51 -- spdkcli/common.sh@13 -- # '[' -n 3909608 ']' 00:30:38.932 01:51:51 -- 
spdkcli/common.sh@14 -- # killprocess 3909608 00:30:38.932 01:51:51 -- common/autotest_common.sh@926 -- # '[' -z 3909608 ']' 00:30:38.932 01:51:51 -- common/autotest_common.sh@930 -- # kill -0 3909608 00:30:38.932 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (3909608) - No such process 00:30:38.932 01:51:51 -- common/autotest_common.sh@953 -- # echo 'Process with pid 3909608 is not found' 00:30:38.932 Process with pid 3909608 is not found 00:30:38.932 01:51:51 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:30:38.932 01:51:51 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:30:38.932 01:51:51 -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:30:38.932 00:30:38.932 real 0m16.654s 00:30:38.932 user 0m35.230s 00:30:38.932 sys 0m0.856s 00:30:38.932 01:51:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:38.932 01:51:51 -- common/autotest_common.sh@10 -- # set +x 00:30:38.932 ************************************ 00:30:38.932 END TEST spdkcli_nvmf_tcp 00:30:38.932 ************************************ 00:30:38.932 01:51:52 -- spdk/autotest.sh@298 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:30:38.932 01:51:52 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:30:38.932 01:51:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:38.932 01:51:52 -- common/autotest_common.sh@10 -- # set +x 00:30:38.932 ************************************ 00:30:38.932 START TEST nvmf_identify_passthru 00:30:38.932 ************************************ 00:30:38.932 01:51:52 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:30:39.190 * Looking 
for test storage... 00:30:39.190 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:39.190 01:51:52 -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:39.190 01:51:52 -- nvmf/common.sh@7 -- # uname -s 00:30:39.190 01:51:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:39.190 01:51:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:39.190 01:51:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:39.190 01:51:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:39.190 01:51:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:39.190 01:51:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:39.190 01:51:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:39.190 01:51:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:39.190 01:51:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:39.190 01:51:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:39.190 01:51:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:39.190 01:51:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:39.190 01:51:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:39.190 01:51:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:39.190 01:51:52 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:39.190 01:51:52 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:39.190 01:51:52 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:39.190 01:51:52 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:39.190 01:51:52 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:39.190 01:51:52 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:39.190 01:51:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:39.190 01:51:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:39.190 01:51:52 -- paths/export.sh@5 -- # export PATH 00:30:39.190 01:51:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:39.190 01:51:52 -- nvmf/common.sh@46 -- # : 0 00:30:39.190 01:51:52 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:30:39.190 01:51:52 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:30:39.190 
01:51:52 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:30:39.190 01:51:52 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:39.190 01:51:52 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:39.190 01:51:52 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:30:39.190 01:51:52 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:30:39.190 01:51:52 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:30:39.190 01:51:52 -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:39.190 01:51:52 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:39.190 01:51:52 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:39.190 01:51:52 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:39.190 01:51:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:39.190 01:51:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:39.190 01:51:52 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:39.190 01:51:52 -- paths/export.sh@5 -- # export PATH 00:30:39.190 01:51:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:39.190 01:51:52 -- target/identify_passthru.sh@12 -- # nvmftestinit 00:30:39.190 01:51:52 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:30:39.190 01:51:52 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:39.190 01:51:52 -- nvmf/common.sh@436 -- # prepare_net_devs 00:30:39.190 01:51:52 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:30:39.190 01:51:52 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:30:39.190 01:51:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:39.190 01:51:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:39.190 01:51:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:39.190 01:51:52 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:30:39.190 01:51:52 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:30:39.190 01:51:52 -- nvmf/common.sh@284 -- # xtrace_disable 00:30:39.190 01:51:52 -- 
common/autotest_common.sh@10 -- # set +x 00:30:41.091 01:51:54 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:30:41.091 01:51:54 -- nvmf/common.sh@290 -- # pci_devs=() 00:30:41.091 01:51:54 -- nvmf/common.sh@290 -- # local -a pci_devs 00:30:41.091 01:51:54 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:30:41.091 01:51:54 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:30:41.091 01:51:54 -- nvmf/common.sh@292 -- # pci_drivers=() 00:30:41.091 01:51:54 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:30:41.091 01:51:54 -- nvmf/common.sh@294 -- # net_devs=() 00:30:41.091 01:51:54 -- nvmf/common.sh@294 -- # local -ga net_devs 00:30:41.091 01:51:54 -- nvmf/common.sh@295 -- # e810=() 00:30:41.091 01:51:54 -- nvmf/common.sh@295 -- # local -ga e810 00:30:41.091 01:51:54 -- nvmf/common.sh@296 -- # x722=() 00:30:41.091 01:51:54 -- nvmf/common.sh@296 -- # local -ga x722 00:30:41.091 01:51:54 -- nvmf/common.sh@297 -- # mlx=() 00:30:41.091 01:51:54 -- nvmf/common.sh@297 -- # local -ga mlx 00:30:41.091 01:51:54 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:41.091 01:51:54 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:41.091 01:51:54 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:41.091 01:51:54 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:41.091 01:51:54 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:41.091 01:51:54 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:41.091 01:51:54 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:41.091 01:51:54 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:41.091 01:51:54 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:41.091 01:51:54 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:41.091 01:51:54 -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:41.091 01:51:54 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:30:41.091 01:51:54 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:30:41.091 01:51:54 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:30:41.091 01:51:54 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:30:41.091 01:51:54 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:30:41.091 01:51:54 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:30:41.091 01:51:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:41.091 01:51:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:41.091 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:41.091 01:51:54 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:41.091 01:51:54 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:41.091 01:51:54 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:41.091 01:51:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:41.091 01:51:54 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:41.091 01:51:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:41.091 01:51:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:41.091 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:41.091 01:51:54 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:41.091 01:51:54 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:41.091 01:51:54 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:41.091 01:51:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:41.091 01:51:54 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:41.091 01:51:54 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:30:41.091 01:51:54 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:30:41.091 01:51:54 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:30:41.091 01:51:54 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:41.091 01:51:54 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:30:41.091 01:51:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:41.091 01:51:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:41.091 01:51:54 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:41.091 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:41.091 01:51:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:41.091 01:51:54 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:41.091 01:51:54 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:41.091 01:51:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:41.091 01:51:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:41.091 01:51:54 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:41.091 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:41.091 01:51:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:41.091 01:51:54 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:30:41.091 01:51:54 -- nvmf/common.sh@402 -- # is_hw=yes 00:30:41.091 01:51:54 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:30:41.091 01:51:54 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:30:41.091 01:51:54 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:30:41.091 01:51:54 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:41.091 01:51:54 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:41.091 01:51:54 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:41.091 01:51:54 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:30:41.091 01:51:54 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:41.091 01:51:54 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:41.091 01:51:54 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:30:41.091 01:51:54 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:41.091 01:51:54 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:30:41.091 01:51:54 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:30:41.091 01:51:54 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:30:41.091 01:51:54 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:30:41.091 01:51:54 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:41.091 01:51:54 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:41.091 01:51:54 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:41.091 01:51:54 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:30:41.091 01:51:54 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:41.091 01:51:54 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:41.091 01:51:54 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:41.091 01:51:54 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:30:41.091 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:41.091 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.214 ms 00:30:41.091 00:30:41.091 --- 10.0.0.2 ping statistics --- 00:30:41.091 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:41.091 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:30:41.091 01:51:54 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:41.091 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:41.091 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:30:41.091 00:30:41.091 --- 10.0.0.1 ping statistics --- 00:30:41.091 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:41.091 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:30:41.091 01:51:54 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:41.091 01:51:54 -- nvmf/common.sh@410 -- # return 0 00:30:41.091 01:51:54 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:30:41.091 01:51:54 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:41.091 01:51:54 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:30:41.091 01:51:54 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:30:41.091 01:51:54 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:41.091 01:51:54 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:30:41.092 01:51:54 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:30:41.351 01:51:54 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:30:41.351 01:51:54 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:41.351 01:51:54 -- common/autotest_common.sh@10 -- # set +x 00:30:41.351 01:51:54 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:30:41.351 01:51:54 -- common/autotest_common.sh@1509 -- # bdfs=() 00:30:41.351 01:51:54 -- common/autotest_common.sh@1509 -- # local bdfs 00:30:41.351 01:51:54 -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:30:41.351 01:51:54 -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:30:41.351 01:51:54 -- common/autotest_common.sh@1498 -- # bdfs=() 00:30:41.351 01:51:54 -- common/autotest_common.sh@1498 -- # local bdfs 00:30:41.351 01:51:54 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:30:41.351 01:51:54 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:41.351 01:51:54 -- 
common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:30:41.351 01:51:54 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:30:41.351 01:51:54 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:30:41.351 01:51:54 -- common/autotest_common.sh@1512 -- # echo 0000:88:00.0 00:30:41.351 01:51:54 -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:30:41.351 01:51:54 -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:30:41.351 01:51:54 -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:30:41.351 01:51:54 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:30:41.351 01:51:54 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:30:41.351 EAL: No free 2048 kB hugepages reported on node 1 00:30:45.535 01:51:58 -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN 00:30:45.535 01:51:58 -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:30:45.535 01:51:58 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:30:45.535 01:51:58 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:30:45.535 EAL: No free 2048 kB hugepages reported on node 1 00:30:49.722 01:52:02 -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:30:49.722 01:52:02 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:30:49.722 01:52:02 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:49.722 01:52:02 -- common/autotest_common.sh@10 -- # set +x 00:30:49.722 01:52:02 -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:30:49.722 01:52:02 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:49.722 01:52:02 -- common/autotest_common.sh@10 -- # set +x 00:30:49.722 01:52:02 -- target/identify_passthru.sh@31 -- # 
nvmfpid=3914407 00:30:49.722 01:52:02 -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:30:49.722 01:52:02 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:49.722 01:52:02 -- target/identify_passthru.sh@35 -- # waitforlisten 3914407 00:30:49.722 01:52:02 -- common/autotest_common.sh@819 -- # '[' -z 3914407 ']' 00:30:49.722 01:52:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:49.722 01:52:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:49.722 01:52:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:49.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:49.722 01:52:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:49.722 01:52:02 -- common/autotest_common.sh@10 -- # set +x 00:30:49.722 [2024-07-23 01:52:02.720801] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:30:49.722 [2024-07-23 01:52:02.720894] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:49.722 EAL: No free 2048 kB hugepages reported on node 1 00:30:49.722 [2024-07-23 01:52:02.786255] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:49.980 [2024-07-23 01:52:02.871222] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:49.980 [2024-07-23 01:52:02.871375] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:30:49.980 [2024-07-23 01:52:02.871391] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:49.980 [2024-07-23 01:52:02.871403] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:49.980 [2024-07-23 01:52:02.871475] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:49.980 [2024-07-23 01:52:02.871535] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:49.980 [2024-07-23 01:52:02.871565] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:49.980 [2024-07-23 01:52:02.871567] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:49.980 01:52:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:49.980 01:52:02 -- common/autotest_common.sh@852 -- # return 0 00:30:49.980 01:52:02 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:30:49.980 01:52:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:49.980 01:52:02 -- common/autotest_common.sh@10 -- # set +x 00:30:49.980 INFO: Log level set to 20 00:30:49.980 INFO: Requests: 00:30:49.980 { 00:30:49.980 "jsonrpc": "2.0", 00:30:49.980 "method": "nvmf_set_config", 00:30:49.980 "id": 1, 00:30:49.980 "params": { 00:30:49.980 "admin_cmd_passthru": { 00:30:49.980 "identify_ctrlr": true 00:30:49.980 } 00:30:49.980 } 00:30:49.980 } 00:30:49.980 00:30:49.980 INFO: response: 00:30:49.980 { 00:30:49.980 "jsonrpc": "2.0", 00:30:49.980 "id": 1, 00:30:49.980 "result": true 00:30:49.980 } 00:30:49.980 00:30:49.980 01:52:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:49.980 01:52:02 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:30:49.980 01:52:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:49.980 01:52:02 -- common/autotest_common.sh@10 -- # set +x 00:30:49.980 INFO: Setting log level to 20 00:30:49.980 INFO: Setting log level to 20 
00:30:49.980 INFO: Log level set to 20 00:30:49.980 INFO: Log level set to 20 00:30:49.980 INFO: Requests: 00:30:49.980 { 00:30:49.980 "jsonrpc": "2.0", 00:30:49.980 "method": "framework_start_init", 00:30:49.980 "id": 1 00:30:49.980 } 00:30:49.980 00:30:49.980 INFO: Requests: 00:30:49.980 { 00:30:49.980 "jsonrpc": "2.0", 00:30:49.980 "method": "framework_start_init", 00:30:49.980 "id": 1 00:30:49.980 } 00:30:49.980 00:30:49.980 [2024-07-23 01:52:03.049791] nvmf_tgt.c: 423:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:30:49.980 INFO: response: 00:30:49.980 { 00:30:49.980 "jsonrpc": "2.0", 00:30:49.980 "id": 1, 00:30:49.980 "result": true 00:30:49.980 } 00:30:49.980 00:30:49.980 INFO: response: 00:30:49.980 { 00:30:49.980 "jsonrpc": "2.0", 00:30:49.980 "id": 1, 00:30:49.980 "result": true 00:30:49.980 } 00:30:49.980 00:30:49.980 01:52:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:49.980 01:52:03 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:49.980 01:52:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:49.980 01:52:03 -- common/autotest_common.sh@10 -- # set +x 00:30:49.980 INFO: Setting log level to 40 00:30:49.980 INFO: Setting log level to 40 00:30:49.980 INFO: Setting log level to 40 00:30:49.980 [2024-07-23 01:52:03.059641] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:49.980 01:52:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:49.980 01:52:03 -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:30:49.980 01:52:03 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:49.980 01:52:03 -- common/autotest_common.sh@10 -- # set +x 00:30:50.238 01:52:03 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:30:50.238 01:52:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:50.238 01:52:03 -- common/autotest_common.sh@10 -- # set +x 
00:30:53.517 Nvme0n1 00:30:53.517 01:52:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:53.517 01:52:05 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:30:53.517 01:52:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:53.517 01:52:05 -- common/autotest_common.sh@10 -- # set +x 00:30:53.517 01:52:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:53.517 01:52:05 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:30:53.517 01:52:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:53.517 01:52:05 -- common/autotest_common.sh@10 -- # set +x 00:30:53.517 01:52:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:53.517 01:52:05 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:53.517 01:52:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:53.517 01:52:05 -- common/autotest_common.sh@10 -- # set +x 00:30:53.517 [2024-07-23 01:52:05.946735] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:53.517 01:52:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:53.517 01:52:05 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:30:53.517 01:52:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:53.517 01:52:05 -- common/autotest_common.sh@10 -- # set +x 00:30:53.517 [2024-07-23 01:52:05.954464] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:30:53.517 [ 00:30:53.517 { 00:30:53.517 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:53.517 "subtype": "Discovery", 00:30:53.517 "listen_addresses": [], 00:30:53.517 "allow_any_host": true, 00:30:53.517 "hosts": [] 00:30:53.517 }, 00:30:53.517 { 
00:30:53.517 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:53.517 "subtype": "NVMe", 00:30:53.517 "listen_addresses": [ 00:30:53.517 { 00:30:53.517 "transport": "TCP", 00:30:53.517 "trtype": "TCP", 00:30:53.517 "adrfam": "IPv4", 00:30:53.517 "traddr": "10.0.0.2", 00:30:53.517 "trsvcid": "4420" 00:30:53.517 } 00:30:53.517 ], 00:30:53.517 "allow_any_host": true, 00:30:53.517 "hosts": [], 00:30:53.517 "serial_number": "SPDK00000000000001", 00:30:53.517 "model_number": "SPDK bdev Controller", 00:30:53.517 "max_namespaces": 1, 00:30:53.517 "min_cntlid": 1, 00:30:53.517 "max_cntlid": 65519, 00:30:53.517 "namespaces": [ 00:30:53.517 { 00:30:53.517 "nsid": 1, 00:30:53.517 "bdev_name": "Nvme0n1", 00:30:53.517 "name": "Nvme0n1", 00:30:53.517 "nguid": "7EBF07B02306448C8DB04AE1FB645922", 00:30:53.517 "uuid": "7ebf07b0-2306-448c-8db0-4ae1fb645922" 00:30:53.517 } 00:30:53.517 ] 00:30:53.517 } 00:30:53.517 ] 00:30:53.517 01:52:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:53.517 01:52:05 -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:53.517 01:52:05 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:30:53.517 01:52:05 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:30:53.517 EAL: No free 2048 kB hugepages reported on node 1 00:30:53.517 01:52:06 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:30:53.517 01:52:06 -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:53.517 01:52:06 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:30:53.517 01:52:06 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:30:53.517 EAL: No free 2048 kB hugepages reported on node 1 00:30:53.517 
01:52:06 -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:30:53.517 01:52:06 -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:30:53.517 01:52:06 -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:30:53.517 01:52:06 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:53.517 01:52:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:53.517 01:52:06 -- common/autotest_common.sh@10 -- # set +x 00:30:53.517 01:52:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:53.517 01:52:06 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:30:53.517 01:52:06 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:30:53.517 01:52:06 -- nvmf/common.sh@476 -- # nvmfcleanup 00:30:53.517 01:52:06 -- nvmf/common.sh@116 -- # sync 00:30:53.517 01:52:06 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:30:53.517 01:52:06 -- nvmf/common.sh@119 -- # set +e 00:30:53.517 01:52:06 -- nvmf/common.sh@120 -- # for i in {1..20} 00:30:53.518 01:52:06 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:30:53.518 rmmod nvme_tcp 00:30:53.518 rmmod nvme_fabrics 00:30:53.518 rmmod nvme_keyring 00:30:53.518 01:52:06 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:30:53.518 01:52:06 -- nvmf/common.sh@123 -- # set -e 00:30:53.518 01:52:06 -- nvmf/common.sh@124 -- # return 0 00:30:53.518 01:52:06 -- nvmf/common.sh@477 -- # '[' -n 3914407 ']' 00:30:53.518 01:52:06 -- nvmf/common.sh@478 -- # killprocess 3914407 00:30:53.518 01:52:06 -- common/autotest_common.sh@926 -- # '[' -z 3914407 ']' 00:30:53.518 01:52:06 -- common/autotest_common.sh@930 -- # kill -0 3914407 00:30:53.518 01:52:06 -- common/autotest_common.sh@931 -- # uname 00:30:53.518 01:52:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:53.518 01:52:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3914407 00:30:53.518 01:52:06 -- 
common/autotest_common.sh@932 -- # process_name=reactor_0 00:30:53.518 01:52:06 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:30:53.518 01:52:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3914407' 00:30:53.518 killing process with pid 3914407 00:30:53.518 01:52:06 -- common/autotest_common.sh@945 -- # kill 3914407 00:30:53.518 [2024-07-23 01:52:06.268846] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:30:53.518 01:52:06 -- common/autotest_common.sh@950 -- # wait 3914407 00:30:54.890 01:52:07 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:30:54.890 01:52:07 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:30:54.891 01:52:07 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:30:54.891 01:52:07 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:54.891 01:52:07 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:30:54.891 01:52:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:54.891 01:52:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:54.891 01:52:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:56.848 01:52:09 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:30:56.848 00:30:56.848 real 0m17.864s 00:30:56.848 user 0m26.232s 00:30:56.848 sys 0m2.305s 00:30:56.848 01:52:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:56.848 01:52:09 -- common/autotest_common.sh@10 -- # set +x 00:30:56.848 ************************************ 00:30:56.848 END TEST nvmf_identify_passthru 00:30:56.848 ************************************ 00:30:56.848 01:52:09 -- spdk/autotest.sh@300 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:30:56.848 01:52:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:56.848 01:52:09 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:30:56.848 01:52:09 -- common/autotest_common.sh@10 -- # set +x 00:30:56.848 ************************************ 00:30:56.848 START TEST nvmf_dif 00:30:56.848 ************************************ 00:30:56.848 01:52:09 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:30:57.106 * Looking for test storage... 00:30:57.106 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:57.106 01:52:09 -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:57.106 01:52:09 -- nvmf/common.sh@7 -- # uname -s 00:30:57.106 01:52:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:57.106 01:52:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:57.106 01:52:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:57.106 01:52:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:57.106 01:52:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:57.106 01:52:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:57.106 01:52:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:57.106 01:52:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:57.106 01:52:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:57.106 01:52:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:57.106 01:52:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:57.106 01:52:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:57.106 01:52:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:57.106 01:52:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:57.106 01:52:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:57.106 01:52:09 -- nvmf/common.sh@44 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:57.106 01:52:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:57.106 01:52:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:57.106 01:52:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:57.106 01:52:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:57.107 01:52:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:57.107 01:52:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:57.107 01:52:09 -- paths/export.sh@5 -- # export PATH 00:30:57.107 01:52:09 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:57.107 01:52:09 -- nvmf/common.sh@46 -- # : 0 00:30:57.107 01:52:09 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:30:57.107 01:52:09 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:30:57.107 01:52:09 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:30:57.107 01:52:09 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:57.107 01:52:09 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:57.107 01:52:09 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:30:57.107 01:52:09 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:30:57.107 01:52:09 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:30:57.107 01:52:09 -- target/dif.sh@15 -- # NULL_META=16 00:30:57.107 01:52:09 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:30:57.107 01:52:09 -- target/dif.sh@15 -- # NULL_SIZE=64 00:30:57.107 01:52:09 -- target/dif.sh@15 -- # NULL_DIF=1 00:30:57.107 01:52:09 -- target/dif.sh@135 -- # nvmftestinit 00:30:57.107 01:52:09 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:30:57.107 01:52:09 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:57.107 01:52:09 -- nvmf/common.sh@436 -- # prepare_net_devs 00:30:57.107 01:52:09 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:30:57.107 01:52:09 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:30:57.107 01:52:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:57.107 01:52:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:57.107 01:52:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:57.107 01:52:09 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 
00:30:57.107 01:52:09 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:30:57.107 01:52:09 -- nvmf/common.sh@284 -- # xtrace_disable 00:30:57.107 01:52:09 -- common/autotest_common.sh@10 -- # set +x 00:30:59.010 01:52:12 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:30:59.010 01:52:12 -- nvmf/common.sh@290 -- # pci_devs=() 00:30:59.010 01:52:12 -- nvmf/common.sh@290 -- # local -a pci_devs 00:30:59.010 01:52:12 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:30:59.010 01:52:12 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:30:59.010 01:52:12 -- nvmf/common.sh@292 -- # pci_drivers=() 00:30:59.010 01:52:12 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:30:59.010 01:52:12 -- nvmf/common.sh@294 -- # net_devs=() 00:30:59.010 01:52:12 -- nvmf/common.sh@294 -- # local -ga net_devs 00:30:59.010 01:52:12 -- nvmf/common.sh@295 -- # e810=() 00:30:59.010 01:52:12 -- nvmf/common.sh@295 -- # local -ga e810 00:30:59.010 01:52:12 -- nvmf/common.sh@296 -- # x722=() 00:30:59.010 01:52:12 -- nvmf/common.sh@296 -- # local -ga x722 00:30:59.010 01:52:12 -- nvmf/common.sh@297 -- # mlx=() 00:30:59.010 01:52:12 -- nvmf/common.sh@297 -- # local -ga mlx 00:30:59.010 01:52:12 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:59.010 01:52:12 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:59.010 01:52:12 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:59.010 01:52:12 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:59.010 01:52:12 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:59.010 01:52:12 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:59.010 01:52:12 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:59.010 01:52:12 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:59.010 01:52:12 -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:59.010 01:52:12 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:59.010 01:52:12 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:59.010 01:52:12 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:30:59.010 01:52:12 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:30:59.010 01:52:12 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:30:59.010 01:52:12 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:30:59.010 01:52:12 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:30:59.010 01:52:12 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:30:59.010 01:52:12 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:59.010 01:52:12 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:59.010 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:59.010 01:52:12 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:59.010 01:52:12 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:59.010 01:52:12 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:59.010 01:52:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:59.010 01:52:12 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:59.010 01:52:12 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:59.010 01:52:12 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:59.010 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:59.010 01:52:12 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:59.010 01:52:12 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:59.010 01:52:12 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:59.010 01:52:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:59.010 01:52:12 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:59.010 01:52:12 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:30:59.010 01:52:12 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:30:59.010 01:52:12 -- nvmf/common.sh@371 -- # [[ tcp == 
rdma ]] 00:30:59.010 01:52:12 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:59.010 01:52:12 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:59.010 01:52:12 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:59.010 01:52:12 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:59.010 01:52:12 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:59.010 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:59.010 01:52:12 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:59.010 01:52:12 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:59.010 01:52:12 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:59.010 01:52:12 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:59.010 01:52:12 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:59.010 01:52:12 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:59.010 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:59.010 01:52:12 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:59.010 01:52:12 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:30:59.010 01:52:12 -- nvmf/common.sh@402 -- # is_hw=yes 00:30:59.010 01:52:12 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:30:59.010 01:52:12 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:30:59.010 01:52:12 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:30:59.010 01:52:12 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:59.010 01:52:12 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:59.010 01:52:12 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:59.010 01:52:12 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:30:59.010 01:52:12 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:59.010 01:52:12 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:59.010 01:52:12 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 
00:30:59.010 01:52:12 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:59.010 01:52:12 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:59.010 01:52:12 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:30:59.010 01:52:12 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:30:59.010 01:52:12 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:30:59.010 01:52:12 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:59.010 01:52:12 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:59.010 01:52:12 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:59.010 01:52:12 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:30:59.010 01:52:12 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:59.269 01:52:12 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:59.269 01:52:12 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:59.269 01:52:12 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:30:59.269 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:59.269 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.271 ms 00:30:59.269 00:30:59.269 --- 10.0.0.2 ping statistics --- 00:30:59.269 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:59.269 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:30:59.269 01:52:12 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:59.269 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:59.269 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.237 ms 00:30:59.269 00:30:59.269 --- 10.0.0.1 ping statistics --- 00:30:59.269 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:59.269 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:30:59.269 01:52:12 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:59.269 01:52:12 -- nvmf/common.sh@410 -- # return 0 00:30:59.269 01:52:12 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:30:59.269 01:52:12 -- nvmf/common.sh@439 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:00.205 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:31:00.205 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:31:00.205 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:31:00.205 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:31:00.205 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:31:00.205 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:31:00.205 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:31:00.205 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:31:00.205 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:31:00.205 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:31:00.205 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:31:00.205 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:31:00.205 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:31:00.205 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:31:00.205 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:31:00.205 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:31:00.205 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:31:00.465 01:52:13 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:00.465 01:52:13 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 
00:31:00.465 01:52:13 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:31:00.465 01:52:13 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:00.465 01:52:13 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:31:00.465 01:52:13 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:31:00.465 01:52:13 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:31:00.465 01:52:13 -- target/dif.sh@137 -- # nvmfappstart 00:31:00.466 01:52:13 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:31:00.466 01:52:13 -- common/autotest_common.sh@712 -- # xtrace_disable 00:31:00.466 01:52:13 -- common/autotest_common.sh@10 -- # set +x 00:31:00.466 01:52:13 -- nvmf/common.sh@469 -- # nvmfpid=3917604 00:31:00.466 01:52:13 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:31:00.466 01:52:13 -- nvmf/common.sh@470 -- # waitforlisten 3917604 00:31:00.466 01:52:13 -- common/autotest_common.sh@819 -- # '[' -z 3917604 ']' 00:31:00.466 01:52:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:00.466 01:52:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:00.466 01:52:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:00.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:00.466 01:52:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:00.466 01:52:13 -- common/autotest_common.sh@10 -- # set +x 00:31:00.466 [2024-07-23 01:52:13.532505] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:31:00.466 [2024-07-23 01:52:13.532580] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:00.724 EAL: No free 2048 kB hugepages reported on node 1 00:31:00.724 [2024-07-23 01:52:13.602143] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:00.724 [2024-07-23 01:52:13.694323] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:00.724 [2024-07-23 01:52:13.694493] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:00.724 [2024-07-23 01:52:13.694513] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:00.724 [2024-07-23 01:52:13.694529] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:00.724 [2024-07-23 01:52:13.694572] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:01.657 01:52:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:01.657 01:52:14 -- common/autotest_common.sh@852 -- # return 0 00:31:01.657 01:52:14 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:31:01.657 01:52:14 -- common/autotest_common.sh@718 -- # xtrace_disable 00:31:01.657 01:52:14 -- common/autotest_common.sh@10 -- # set +x 00:31:01.657 01:52:14 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:01.657 01:52:14 -- target/dif.sh@139 -- # create_transport 00:31:01.657 01:52:14 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:31:01.657 01:52:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:01.657 01:52:14 -- common/autotest_common.sh@10 -- # set +x 00:31:01.657 [2024-07-23 01:52:14.511507] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:31:01.657 01:52:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:01.657 01:52:14 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:31:01.657 01:52:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:01.657 01:52:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:01.657 01:52:14 -- common/autotest_common.sh@10 -- # set +x 00:31:01.657 ************************************ 00:31:01.657 START TEST fio_dif_1_default 00:31:01.657 ************************************ 00:31:01.657 01:52:14 -- common/autotest_common.sh@1104 -- # fio_dif_1 00:31:01.657 01:52:14 -- target/dif.sh@86 -- # create_subsystems 0 00:31:01.657 01:52:14 -- target/dif.sh@28 -- # local sub 00:31:01.657 01:52:14 -- target/dif.sh@30 -- # for sub in "$@" 00:31:01.657 01:52:14 -- target/dif.sh@31 -- # create_subsystem 0 00:31:01.657 01:52:14 -- target/dif.sh@18 -- # local sub_id=0 00:31:01.657 01:52:14 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:01.657 01:52:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:01.657 01:52:14 -- common/autotest_common.sh@10 -- # set +x 00:31:01.657 bdev_null0 00:31:01.657 01:52:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:01.657 01:52:14 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:01.657 01:52:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:01.657 01:52:14 -- common/autotest_common.sh@10 -- # set +x 00:31:01.657 01:52:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:01.657 01:52:14 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:01.657 01:52:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:01.657 01:52:14 -- common/autotest_common.sh@10 -- # set +x 00:31:01.657 01:52:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:01.657 01:52:14 -- target/dif.sh@24 -- # 
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:01.657 01:52:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:01.657 01:52:14 -- common/autotest_common.sh@10 -- # set +x 00:31:01.657 [2024-07-23 01:52:14.547750] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:01.657 01:52:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:01.657 01:52:14 -- target/dif.sh@87 -- # fio /dev/fd/62 00:31:01.657 01:52:14 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:31:01.657 01:52:14 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:01.657 01:52:14 -- nvmf/common.sh@520 -- # config=() 00:31:01.657 01:52:14 -- nvmf/common.sh@520 -- # local subsystem config 00:31:01.657 01:52:14 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:01.657 01:52:14 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:01.657 01:52:14 -- target/dif.sh@82 -- # gen_fio_conf 00:31:01.657 01:52:14 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:01.657 { 00:31:01.657 "params": { 00:31:01.657 "name": "Nvme$subsystem", 00:31:01.657 "trtype": "$TEST_TRANSPORT", 00:31:01.657 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:01.657 "adrfam": "ipv4", 00:31:01.657 "trsvcid": "$NVMF_PORT", 00:31:01.657 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:01.657 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:01.657 "hdgst": ${hdgst:-false}, 00:31:01.657 "ddgst": ${ddgst:-false} 00:31:01.657 }, 00:31:01.657 "method": "bdev_nvme_attach_controller" 00:31:01.657 } 00:31:01.657 EOF 00:31:01.657 )") 00:31:01.657 01:52:14 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:01.657 01:52:14 -- target/dif.sh@54 -- # local file 00:31:01.657 01:52:14 -- target/dif.sh@56 -- # cat 00:31:01.657 01:52:14 -- 
common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:31:01.657 01:52:14 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:01.657 01:52:14 -- common/autotest_common.sh@1318 -- # local sanitizers 00:31:01.657 01:52:14 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:01.657 01:52:14 -- common/autotest_common.sh@1320 -- # shift 00:31:01.657 01:52:14 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:31:01.657 01:52:14 -- nvmf/common.sh@542 -- # cat 00:31:01.657 01:52:14 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:01.658 01:52:14 -- target/dif.sh@72 -- # (( file = 1 )) 00:31:01.658 01:52:14 -- target/dif.sh@72 -- # (( file <= files )) 00:31:01.658 01:52:14 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:01.658 01:52:14 -- common/autotest_common.sh@1324 -- # grep libasan 00:31:01.658 01:52:14 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:01.658 01:52:14 -- nvmf/common.sh@544 -- # jq . 
00:31:01.658 01:52:14 -- nvmf/common.sh@545 -- # IFS=, 00:31:01.658 01:52:14 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:31:01.658 "params": { 00:31:01.658 "name": "Nvme0", 00:31:01.658 "trtype": "tcp", 00:31:01.658 "traddr": "10.0.0.2", 00:31:01.658 "adrfam": "ipv4", 00:31:01.658 "trsvcid": "4420", 00:31:01.658 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:01.658 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:01.658 "hdgst": false, 00:31:01.658 "ddgst": false 00:31:01.658 }, 00:31:01.658 "method": "bdev_nvme_attach_controller" 00:31:01.658 }' 00:31:01.658 01:52:14 -- common/autotest_common.sh@1324 -- # asan_lib= 00:31:01.658 01:52:14 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:31:01.658 01:52:14 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:01.658 01:52:14 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:01.658 01:52:14 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:31:01.658 01:52:14 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:01.658 01:52:14 -- common/autotest_common.sh@1324 -- # asan_lib= 00:31:01.658 01:52:14 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:31:01.658 01:52:14 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:01.658 01:52:14 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:01.916 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:01.916 fio-3.35 00:31:01.916 Starting 1 thread 00:31:01.916 EAL: No free 2048 kB hugepages reported on node 1 00:31:02.482 [2024-07-23 01:52:15.368394] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:31:02.482 [2024-07-23 01:52:15.368446] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:31:12.444 00:31:12.444 filename0: (groupid=0, jobs=1): err= 0: pid=3917969: Tue Jul 23 01:52:25 2024 00:31:12.444 read: IOPS=189, BW=759KiB/s (777kB/s)(7600KiB/10014msec) 00:31:12.444 slat (nsec): min=4360, max=76685, avg=9210.32, stdev=2872.65 00:31:12.444 clat (usec): min=802, max=46989, avg=21051.39, stdev=20137.54 00:31:12.444 lat (usec): min=810, max=47015, avg=21060.60, stdev=20137.42 00:31:12.444 clat percentiles (usec): 00:31:12.444 | 1.00th=[ 824], 5.00th=[ 840], 10.00th=[ 848], 20.00th=[ 865], 00:31:12.444 | 30.00th=[ 889], 40.00th=[ 914], 50.00th=[41157], 60.00th=[41157], 00:31:12.444 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:12.444 | 99.00th=[42206], 99.50th=[42206], 99.90th=[46924], 99.95th=[46924], 00:31:12.444 | 99.99th=[46924] 00:31:12.444 bw ( KiB/s): min= 704, max= 768, per=99.88%, avg=758.40, stdev=21.02, samples=20 00:31:12.444 iops : min= 176, max= 192, avg=189.60, stdev= 5.26, samples=20 00:31:12.444 lat (usec) : 1000=49.47% 00:31:12.444 lat (msec) : 2=0.42%, 50=50.11% 00:31:12.444 cpu : usr=90.19%, sys=9.46%, ctx=14, majf=0, minf=291 00:31:12.444 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:12.444 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:12.444 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:12.444 issued rwts: total=1900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:12.444 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:12.444 00:31:12.444 Run status group 0 (all jobs): 00:31:12.444 READ: bw=759KiB/s (777kB/s), 759KiB/s-759KiB/s (777kB/s-777kB/s), io=7600KiB (7782kB), run=10014-10014msec 00:31:12.702 01:52:25 -- target/dif.sh@88 -- # destroy_subsystems 0 00:31:12.702 01:52:25 -- target/dif.sh@43 -- # local sub 00:31:12.702 01:52:25 -- target/dif.sh@45 -- # for 
sub in "$@" 00:31:12.702 01:52:25 -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:12.702 01:52:25 -- target/dif.sh@36 -- # local sub_id=0 00:31:12.702 01:52:25 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:12.702 01:52:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:12.702 01:52:25 -- common/autotest_common.sh@10 -- # set +x 00:31:12.702 01:52:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:12.702 01:52:25 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:12.702 01:52:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:12.702 01:52:25 -- common/autotest_common.sh@10 -- # set +x 00:31:12.702 01:52:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:12.702 00:31:12.702 real 0m11.181s 00:31:12.702 user 0m10.192s 00:31:12.702 sys 0m1.245s 00:31:12.702 01:52:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:12.702 01:52:25 -- common/autotest_common.sh@10 -- # set +x 00:31:12.702 ************************************ 00:31:12.702 END TEST fio_dif_1_default 00:31:12.702 ************************************ 00:31:12.702 01:52:25 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:31:12.702 01:52:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:12.702 01:52:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:12.702 01:52:25 -- common/autotest_common.sh@10 -- # set +x 00:31:12.702 ************************************ 00:31:12.702 START TEST fio_dif_1_multi_subsystems 00:31:12.702 ************************************ 00:31:12.702 01:52:25 -- common/autotest_common.sh@1104 -- # fio_dif_1_multi_subsystems 00:31:12.702 01:52:25 -- target/dif.sh@92 -- # local files=1 00:31:12.702 01:52:25 -- target/dif.sh@94 -- # create_subsystems 0 1 00:31:12.702 01:52:25 -- target/dif.sh@28 -- # local sub 00:31:12.702 01:52:25 -- target/dif.sh@30 -- # for sub in "$@" 00:31:12.702 01:52:25 -- target/dif.sh@31 -- # 
create_subsystem 0 00:31:12.702 01:52:25 -- target/dif.sh@18 -- # local sub_id=0 00:31:12.702 01:52:25 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:12.702 01:52:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:12.703 01:52:25 -- common/autotest_common.sh@10 -- # set +x 00:31:12.703 bdev_null0 00:31:12.703 01:52:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:12.703 01:52:25 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:12.703 01:52:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:12.703 01:52:25 -- common/autotest_common.sh@10 -- # set +x 00:31:12.703 01:52:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:12.703 01:52:25 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:12.703 01:52:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:12.703 01:52:25 -- common/autotest_common.sh@10 -- # set +x 00:31:12.703 01:52:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:12.703 01:52:25 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:12.703 01:52:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:12.703 01:52:25 -- common/autotest_common.sh@10 -- # set +x 00:31:12.703 [2024-07-23 01:52:25.752707] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:12.703 01:52:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:12.703 01:52:25 -- target/dif.sh@30 -- # for sub in "$@" 00:31:12.703 01:52:25 -- target/dif.sh@31 -- # create_subsystem 1 00:31:12.703 01:52:25 -- target/dif.sh@18 -- # local sub_id=1 00:31:12.703 01:52:25 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:31:12.703 01:52:25 -- common/autotest_common.sh@551 -- # xtrace_disable 
00:31:12.703 01:52:25 -- common/autotest_common.sh@10 -- # set +x 00:31:12.703 bdev_null1 00:31:12.703 01:52:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:12.703 01:52:25 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:12.703 01:52:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:12.703 01:52:25 -- common/autotest_common.sh@10 -- # set +x 00:31:12.703 01:52:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:12.703 01:52:25 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:12.703 01:52:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:12.703 01:52:25 -- common/autotest_common.sh@10 -- # set +x 00:31:12.703 01:52:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:12.703 01:52:25 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:12.703 01:52:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:12.703 01:52:25 -- common/autotest_common.sh@10 -- # set +x 00:31:12.703 01:52:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:12.703 01:52:25 -- target/dif.sh@95 -- # fio /dev/fd/62 00:31:12.703 01:52:25 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:31:12.703 01:52:25 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:31:12.703 01:52:25 -- nvmf/common.sh@520 -- # config=() 00:31:12.703 01:52:25 -- nvmf/common.sh@520 -- # local subsystem config 00:31:12.703 01:52:25 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:12.703 01:52:25 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:12.703 01:52:25 -- target/dif.sh@82 -- # gen_fio_conf 00:31:12.703 01:52:25 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:12.703 { 00:31:12.703 "params": { 00:31:12.703 "name": "Nvme$subsystem", 00:31:12.703 "trtype": "$TEST_TRANSPORT", 00:31:12.703 
"traddr": "$NVMF_FIRST_TARGET_IP", 00:31:12.703 "adrfam": "ipv4", 00:31:12.703 "trsvcid": "$NVMF_PORT", 00:31:12.703 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:12.703 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:12.703 "hdgst": ${hdgst:-false}, 00:31:12.703 "ddgst": ${ddgst:-false} 00:31:12.703 }, 00:31:12.703 "method": "bdev_nvme_attach_controller" 00:31:12.703 } 00:31:12.703 EOF 00:31:12.703 )") 00:31:12.703 01:52:25 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:12.703 01:52:25 -- target/dif.sh@54 -- # local file 00:31:12.703 01:52:25 -- target/dif.sh@56 -- # cat 00:31:12.703 01:52:25 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:31:12.703 01:52:25 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:12.703 01:52:25 -- common/autotest_common.sh@1318 -- # local sanitizers 00:31:12.703 01:52:25 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:12.703 01:52:25 -- common/autotest_common.sh@1320 -- # shift 00:31:12.703 01:52:25 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:31:12.703 01:52:25 -- nvmf/common.sh@542 -- # cat 00:31:12.703 01:52:25 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:12.703 01:52:25 -- target/dif.sh@72 -- # (( file = 1 )) 00:31:12.703 01:52:25 -- target/dif.sh@72 -- # (( file <= files )) 00:31:12.703 01:52:25 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:12.703 01:52:25 -- target/dif.sh@73 -- # cat 00:31:12.703 01:52:25 -- common/autotest_common.sh@1324 -- # grep libasan 00:31:12.703 01:52:25 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:12.703 01:52:25 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:12.703 
01:52:25 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:12.703 { 00:31:12.703 "params": { 00:31:12.703 "name": "Nvme$subsystem", 00:31:12.703 "trtype": "$TEST_TRANSPORT", 00:31:12.703 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:12.703 "adrfam": "ipv4", 00:31:12.703 "trsvcid": "$NVMF_PORT", 00:31:12.703 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:12.703 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:12.703 "hdgst": ${hdgst:-false}, 00:31:12.703 "ddgst": ${ddgst:-false} 00:31:12.703 }, 00:31:12.703 "method": "bdev_nvme_attach_controller" 00:31:12.703 } 00:31:12.703 EOF 00:31:12.703 )") 00:31:12.703 01:52:25 -- nvmf/common.sh@542 -- # cat 00:31:12.703 01:52:25 -- target/dif.sh@72 -- # (( file++ )) 00:31:12.703 01:52:25 -- target/dif.sh@72 -- # (( file <= files )) 00:31:12.703 01:52:25 -- nvmf/common.sh@544 -- # jq . 00:31:12.703 01:52:25 -- nvmf/common.sh@545 -- # IFS=, 00:31:12.703 01:52:25 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:31:12.703 "params": { 00:31:12.703 "name": "Nvme0", 00:31:12.703 "trtype": "tcp", 00:31:12.703 "traddr": "10.0.0.2", 00:31:12.703 "adrfam": "ipv4", 00:31:12.703 "trsvcid": "4420", 00:31:12.703 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:12.703 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:12.703 "hdgst": false, 00:31:12.703 "ddgst": false 00:31:12.703 }, 00:31:12.703 "method": "bdev_nvme_attach_controller" 00:31:12.703 },{ 00:31:12.703 "params": { 00:31:12.703 "name": "Nvme1", 00:31:12.703 "trtype": "tcp", 00:31:12.703 "traddr": "10.0.0.2", 00:31:12.703 "adrfam": "ipv4", 00:31:12.703 "trsvcid": "4420", 00:31:12.703 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:12.703 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:12.703 "hdgst": false, 00:31:12.703 "ddgst": false 00:31:12.703 }, 00:31:12.703 "method": "bdev_nvme_attach_controller" 00:31:12.703 }' 00:31:12.961 01:52:25 -- common/autotest_common.sh@1324 -- # asan_lib= 00:31:12.961 01:52:25 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:31:12.961 01:52:25 
-- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:12.961 01:52:25 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:12.961 01:52:25 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:31:12.961 01:52:25 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:12.961 01:52:25 -- common/autotest_common.sh@1324 -- # asan_lib= 00:31:12.961 01:52:25 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:31:12.961 01:52:25 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:12.961 01:52:25 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:12.961 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:12.961 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:12.961 fio-3.35 00:31:12.961 Starting 2 threads 00:31:13.218 EAL: No free 2048 kB hugepages reported on node 1 00:31:13.782 [2024-07-23 01:52:26.740057] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:31:13.782 [2024-07-23 01:52:26.740149] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:31:25.973 00:31:25.973 filename0: (groupid=0, jobs=1): err= 0: pid=3919406: Tue Jul 23 01:52:36 2024 00:31:25.973 read: IOPS=188, BW=754KiB/s (772kB/s)(7552KiB/10020msec) 00:31:25.973 slat (nsec): min=6984, max=62185, avg=9534.44, stdev=4154.16 00:31:25.973 clat (usec): min=833, max=46265, avg=21197.97, stdev=20121.56 00:31:25.973 lat (usec): min=840, max=46327, avg=21207.51, stdev=20121.08 00:31:25.973 clat percentiles (usec): 00:31:25.973 | 1.00th=[ 873], 5.00th=[ 889], 10.00th=[ 898], 20.00th=[ 914], 00:31:25.973 | 30.00th=[ 930], 40.00th=[ 955], 50.00th=[41157], 60.00th=[41157], 00:31:25.973 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:31:25.973 | 99.00th=[42206], 99.50th=[42206], 99.90th=[46400], 99.95th=[46400], 00:31:25.973 | 99.99th=[46400] 00:31:25.973 bw ( KiB/s): min= 672, max= 768, per=50.33%, avg=753.60, stdev=30.22, samples=20 00:31:25.973 iops : min= 168, max= 192, avg=188.40, stdev= 7.56, samples=20 00:31:25.973 lat (usec) : 1000=43.33% 00:31:25.973 lat (msec) : 2=6.46%, 50=50.21% 00:31:25.973 cpu : usr=94.27%, sys=5.43%, ctx=31, majf=0, minf=181 00:31:25.973 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:25.973 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.973 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.973 issued rwts: total=1888,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:25.973 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:25.973 filename1: (groupid=0, jobs=1): err= 0: pid=3919407: Tue Jul 23 01:52:36 2024 00:31:25.973 read: IOPS=185, BW=743KiB/s (760kB/s)(7440KiB/10019msec) 00:31:25.973 slat (nsec): min=6882, max=33690, avg=9487.80, stdev=3835.08 00:31:25.973 clat (usec): min=875, max=45198, avg=21515.89, stdev=20467.60 00:31:25.973 lat (usec): min=883, 
max=45229, avg=21525.38, stdev=20467.64 00:31:25.973 clat percentiles (usec): 00:31:25.973 | 1.00th=[ 930], 5.00th=[ 955], 10.00th=[ 963], 20.00th=[ 971], 00:31:25.973 | 30.00th=[ 996], 40.00th=[ 1029], 50.00th=[41157], 60.00th=[41681], 00:31:25.973 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:31:25.973 | 99.00th=[42206], 99.50th=[42206], 99.90th=[45351], 99.95th=[45351], 00:31:25.973 | 99.99th=[45351] 00:31:25.973 bw ( KiB/s): min= 704, max= 768, per=49.59%, avg=742.40, stdev=30.45, samples=20 00:31:25.973 iops : min= 176, max= 192, avg=185.60, stdev= 7.61, samples=20 00:31:25.973 lat (usec) : 1000=32.96% 00:31:25.973 lat (msec) : 2=16.94%, 50=50.11% 00:31:25.973 cpu : usr=94.35%, sys=5.34%, ctx=25, majf=0, minf=141 00:31:25.973 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:25.973 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.973 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.973 issued rwts: total=1860,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:25.973 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:25.973 00:31:25.973 Run status group 0 (all jobs): 00:31:25.973 READ: bw=1496KiB/s (1532kB/s), 743KiB/s-754KiB/s (760kB/s-772kB/s), io=14.6MiB (15.4MB), run=10019-10020msec 00:31:25.973 01:52:37 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:31:25.973 01:52:37 -- target/dif.sh@43 -- # local sub 00:31:25.973 01:52:37 -- target/dif.sh@45 -- # for sub in "$@" 00:31:25.973 01:52:37 -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:25.973 01:52:37 -- target/dif.sh@36 -- # local sub_id=0 00:31:25.973 01:52:37 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:25.973 01:52:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:25.973 01:52:37 -- common/autotest_common.sh@10 -- # set +x 00:31:25.973 01:52:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:25.973 01:52:37 -- 
target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:25.973 01:52:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:25.973 01:52:37 -- common/autotest_common.sh@10 -- # set +x 00:31:25.974 01:52:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:25.974 01:52:37 -- target/dif.sh@45 -- # for sub in "$@" 00:31:25.974 01:52:37 -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:25.974 01:52:37 -- target/dif.sh@36 -- # local sub_id=1 00:31:25.974 01:52:37 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:25.974 01:52:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:25.974 01:52:37 -- common/autotest_common.sh@10 -- # set +x 00:31:25.974 01:52:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:25.974 01:52:37 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:25.974 01:52:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:25.974 01:52:37 -- common/autotest_common.sh@10 -- # set +x 00:31:25.974 01:52:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:25.974 00:31:25.974 real 0m11.459s 00:31:25.974 user 0m20.283s 00:31:25.974 sys 0m1.363s 00:31:25.974 01:52:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:25.974 01:52:37 -- common/autotest_common.sh@10 -- # set +x 00:31:25.974 ************************************ 00:31:25.974 END TEST fio_dif_1_multi_subsystems 00:31:25.974 ************************************ 00:31:25.974 01:52:37 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:31:25.974 01:52:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:25.974 01:52:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:25.974 01:52:37 -- common/autotest_common.sh@10 -- # set +x 00:31:25.974 ************************************ 00:31:25.974 START TEST fio_dif_rand_params 00:31:25.974 ************************************ 00:31:25.974 01:52:37 -- common/autotest_common.sh@1104 -- # 
fio_dif_rand_params 00:31:25.974 01:52:37 -- target/dif.sh@100 -- # local NULL_DIF 00:31:25.974 01:52:37 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:31:25.974 01:52:37 -- target/dif.sh@103 -- # NULL_DIF=3 00:31:25.974 01:52:37 -- target/dif.sh@103 -- # bs=128k 00:31:25.974 01:52:37 -- target/dif.sh@103 -- # numjobs=3 00:31:25.974 01:52:37 -- target/dif.sh@103 -- # iodepth=3 00:31:25.974 01:52:37 -- target/dif.sh@103 -- # runtime=5 00:31:25.974 01:52:37 -- target/dif.sh@105 -- # create_subsystems 0 00:31:25.974 01:52:37 -- target/dif.sh@28 -- # local sub 00:31:25.974 01:52:37 -- target/dif.sh@30 -- # for sub in "$@" 00:31:25.974 01:52:37 -- target/dif.sh@31 -- # create_subsystem 0 00:31:25.974 01:52:37 -- target/dif.sh@18 -- # local sub_id=0 00:31:25.974 01:52:37 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:31:25.974 01:52:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:25.974 01:52:37 -- common/autotest_common.sh@10 -- # set +x 00:31:25.974 bdev_null0 00:31:25.974 01:52:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:25.974 01:52:37 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:25.974 01:52:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:25.974 01:52:37 -- common/autotest_common.sh@10 -- # set +x 00:31:25.974 01:52:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:25.974 01:52:37 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:25.974 01:52:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:25.974 01:52:37 -- common/autotest_common.sh@10 -- # set +x 00:31:25.974 01:52:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:25.974 01:52:37 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:25.974 01:52:37 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:31:25.974 01:52:37 -- common/autotest_common.sh@10 -- # set +x 00:31:25.974 [2024-07-23 01:52:37.244117] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:25.974 01:52:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:25.974 01:52:37 -- target/dif.sh@106 -- # fio /dev/fd/62 00:31:25.974 01:52:37 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:31:25.974 01:52:37 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:25.974 01:52:37 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:25.974 01:52:37 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:25.974 01:52:37 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:31:25.974 01:52:37 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:25.974 01:52:37 -- common/autotest_common.sh@1318 -- # local sanitizers 00:31:25.974 01:52:37 -- nvmf/common.sh@520 -- # config=() 00:31:25.974 01:52:37 -- target/dif.sh@82 -- # gen_fio_conf 00:31:25.974 01:52:37 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:25.974 01:52:37 -- nvmf/common.sh@520 -- # local subsystem config 00:31:25.974 01:52:37 -- common/autotest_common.sh@1320 -- # shift 00:31:25.974 01:52:37 -- target/dif.sh@54 -- # local file 00:31:25.974 01:52:37 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:25.974 01:52:37 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:31:25.974 01:52:37 -- target/dif.sh@56 -- # cat 00:31:25.974 01:52:37 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:25.974 01:52:37 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:25.974 { 00:31:25.974 "params": { 
00:31:25.974 "name": "Nvme$subsystem", 00:31:25.974 "trtype": "$TEST_TRANSPORT", 00:31:25.974 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:25.974 "adrfam": "ipv4", 00:31:25.974 "trsvcid": "$NVMF_PORT", 00:31:25.974 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:25.974 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:25.974 "hdgst": ${hdgst:-false}, 00:31:25.974 "ddgst": ${ddgst:-false} 00:31:25.974 }, 00:31:25.974 "method": "bdev_nvme_attach_controller" 00:31:25.974 } 00:31:25.974 EOF 00:31:25.974 )") 00:31:25.974 01:52:37 -- nvmf/common.sh@542 -- # cat 00:31:25.974 01:52:37 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:25.974 01:52:37 -- common/autotest_common.sh@1324 -- # grep libasan 00:31:25.974 01:52:37 -- target/dif.sh@72 -- # (( file = 1 )) 00:31:25.974 01:52:37 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:25.974 01:52:37 -- target/dif.sh@72 -- # (( file <= files )) 00:31:25.974 01:52:37 -- nvmf/common.sh@544 -- # jq . 
00:31:25.974 01:52:37 -- nvmf/common.sh@545 -- # IFS=, 00:31:25.974 01:52:37 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:31:25.974 "params": { 00:31:25.974 "name": "Nvme0", 00:31:25.974 "trtype": "tcp", 00:31:25.974 "traddr": "10.0.0.2", 00:31:25.974 "adrfam": "ipv4", 00:31:25.974 "trsvcid": "4420", 00:31:25.974 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:25.974 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:25.974 "hdgst": false, 00:31:25.974 "ddgst": false 00:31:25.974 }, 00:31:25.974 "method": "bdev_nvme_attach_controller" 00:31:25.974 }' 00:31:25.974 01:52:37 -- common/autotest_common.sh@1324 -- # asan_lib= 00:31:25.974 01:52:37 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:31:25.974 01:52:37 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:25.974 01:52:37 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:25.974 01:52:37 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:31:25.974 01:52:37 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:25.974 01:52:37 -- common/autotest_common.sh@1324 -- # asan_lib= 00:31:25.974 01:52:37 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:31:25.974 01:52:37 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:25.974 01:52:37 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:25.974 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:31:25.974 ... 00:31:25.974 fio-3.35 00:31:25.974 Starting 3 threads 00:31:25.974 EAL: No free 2048 kB hugepages reported on node 1 00:31:25.974 [2024-07-23 01:52:38.009299] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:31:25.974 [2024-07-23 01:52:38.009368] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:31:30.186 00:31:30.186 filename0: (groupid=0, jobs=1): err= 0: pid=3920840: Tue Jul 23 01:52:43 2024 00:31:30.186 read: IOPS=188, BW=23.6MiB/s (24.7MB/s)(119MiB/5045msec) 00:31:30.186 slat (nsec): min=6177, max=43299, avg=12213.44, stdev=4165.47 00:31:30.186 clat (usec): min=5206, max=95320, avg=15833.26, stdev=14187.34 00:31:30.186 lat (usec): min=5217, max=95331, avg=15845.47, stdev=14187.55 00:31:30.186 clat percentiles (usec): 00:31:30.186 | 1.00th=[ 5866], 5.00th=[ 6783], 10.00th=[ 7504], 20.00th=[ 8586], 00:31:30.186 | 30.00th=[ 9372], 40.00th=[10028], 50.00th=[10814], 60.00th=[11994], 00:31:30.186 | 70.00th=[13173], 80.00th=[14353], 90.00th=[50070], 95.00th=[53216], 00:31:30.186 | 99.00th=[56886], 99.50th=[57934], 99.90th=[94897], 99.95th=[94897], 00:31:30.186 | 99.99th=[94897] 00:31:30.186 bw ( KiB/s): min=17408, max=30976, per=31.78%, avg=24299.90, stdev=4275.68, samples=10 00:31:30.186 iops : min= 136, max= 242, avg=189.80, stdev=33.37, samples=10 00:31:30.186 lat (msec) : 10=40.65%, 20=46.95%, 50=2.00%, 100=10.40% 00:31:30.186 cpu : usr=91.38%, sys=8.09%, ctx=18, majf=0, minf=98 00:31:30.186 IO depths : 1=3.7%, 2=96.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:30.186 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.186 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.186 issued rwts: total=952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:30.186 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:30.186 filename0: (groupid=0, jobs=1): err= 0: pid=3920841: Tue Jul 23 01:52:43 2024 00:31:30.186 read: IOPS=214, BW=26.8MiB/s (28.1MB/s)(135MiB/5031msec) 00:31:30.186 slat (nsec): min=4835, max=60980, avg=14452.26, stdev=6353.54 00:31:30.186 clat (usec): min=5308, max=90065, avg=13975.97, stdev=12666.88 00:31:30.186 lat (usec): min=5320, 
max=90085, avg=13990.43, stdev=12667.16 00:31:30.186 clat percentiles (usec): 00:31:30.186 | 1.00th=[ 5997], 5.00th=[ 6521], 10.00th=[ 6915], 20.00th=[ 8029], 00:31:30.186 | 30.00th=[ 8848], 40.00th=[ 9241], 50.00th=[ 9765], 60.00th=[10814], 00:31:30.186 | 70.00th=[11994], 80.00th=[13042], 90.00th=[16712], 95.00th=[51643], 00:31:30.186 | 99.00th=[54264], 99.50th=[55313], 99.90th=[57410], 99.95th=[89654], 00:31:30.186 | 99.99th=[89654] 00:31:30.186 bw ( KiB/s): min=14592, max=33536, per=35.99%, avg=27520.00, stdev=5869.11, samples=10 00:31:30.186 iops : min= 114, max= 262, avg=215.00, stdev=45.85, samples=10 00:31:30.186 lat (msec) : 10=52.78%, 20=37.29%, 50=2.41%, 100=7.51% 00:31:30.186 cpu : usr=89.62%, sys=9.26%, ctx=142, majf=0, minf=117 00:31:30.186 IO depths : 1=3.1%, 2=96.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:30.186 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.186 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.186 issued rwts: total=1078,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:30.186 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:30.186 filename0: (groupid=0, jobs=1): err= 0: pid=3920842: Tue Jul 23 01:52:43 2024 00:31:30.186 read: IOPS=195, BW=24.4MiB/s (25.6MB/s)(123MiB/5045msec) 00:31:30.186 slat (nsec): min=4665, max=42884, avg=12908.05, stdev=4112.32 00:31:30.186 clat (usec): min=5529, max=95100, avg=15316.70, stdev=13906.64 00:31:30.186 lat (usec): min=5541, max=95113, avg=15329.60, stdev=13906.63 00:31:30.186 clat percentiles (usec): 00:31:30.186 | 1.00th=[ 5932], 5.00th=[ 6652], 10.00th=[ 7111], 20.00th=[ 8717], 00:31:30.186 | 30.00th=[ 9241], 40.00th=[ 9896], 50.00th=[10814], 60.00th=[11863], 00:31:30.186 | 70.00th=[12911], 80.00th=[14091], 90.00th=[49021], 95.00th=[51643], 00:31:30.186 | 99.00th=[55313], 99.50th=[57934], 99.90th=[94897], 99.95th=[94897], 00:31:30.186 | 99.99th=[94897] 00:31:30.186 bw ( KiB/s): min=17920, max=32256, per=32.87%, 
avg=25139.20, stdev=4448.48, samples=10 00:31:30.186 iops : min= 140, max= 252, avg=196.40, stdev=34.75, samples=10 00:31:30.186 lat (msec) : 10=41.26%, 20=47.66%, 50=1.93%, 100=9.15% 00:31:30.186 cpu : usr=91.26%, sys=8.21%, ctx=13, majf=0, minf=106 00:31:30.186 IO depths : 1=0.8%, 2=99.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:30.186 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.186 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.186 issued rwts: total=984,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:30.186 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:30.186 00:31:30.186 Run status group 0 (all jobs): 00:31:30.186 READ: bw=74.7MiB/s (78.3MB/s), 23.6MiB/s-26.8MiB/s (24.7MB/s-28.1MB/s), io=377MiB (395MB), run=5031-5045msec 00:31:30.444 01:52:43 -- target/dif.sh@107 -- # destroy_subsystems 0 00:31:30.444 01:52:43 -- target/dif.sh@43 -- # local sub 00:31:30.444 01:52:43 -- target/dif.sh@45 -- # for sub in "$@" 00:31:30.444 01:52:43 -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:30.444 01:52:43 -- target/dif.sh@36 -- # local sub_id=0 00:31:30.444 01:52:43 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:30.444 01:52:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:30.444 01:52:43 -- common/autotest_common.sh@10 -- # set +x 00:31:30.444 01:52:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:30.444 01:52:43 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:30.444 01:52:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:30.444 01:52:43 -- common/autotest_common.sh@10 -- # set +x 00:31:30.444 01:52:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:30.444 01:52:43 -- target/dif.sh@109 -- # NULL_DIF=2 00:31:30.444 01:52:43 -- target/dif.sh@109 -- # bs=4k 00:31:30.444 01:52:43 -- target/dif.sh@109 -- # numjobs=8 00:31:30.444 01:52:43 -- target/dif.sh@109 -- # iodepth=16 
00:31:30.444 01:52:43 -- target/dif.sh@109 -- # runtime= 00:31:30.444 01:52:43 -- target/dif.sh@109 -- # files=2 00:31:30.444 01:52:43 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:31:30.444 01:52:43 -- target/dif.sh@28 -- # local sub 00:31:30.444 01:52:43 -- target/dif.sh@30 -- # for sub in "$@" 00:31:30.444 01:52:43 -- target/dif.sh@31 -- # create_subsystem 0 00:31:30.444 01:52:43 -- target/dif.sh@18 -- # local sub_id=0 00:31:30.444 01:52:43 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:31:30.444 01:52:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:30.444 01:52:43 -- common/autotest_common.sh@10 -- # set +x 00:31:30.444 bdev_null0 00:31:30.444 01:52:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:30.444 01:52:43 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:30.444 01:52:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:30.444 01:52:43 -- common/autotest_common.sh@10 -- # set +x 00:31:30.444 01:52:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:30.444 01:52:43 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:30.444 01:52:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:30.444 01:52:43 -- common/autotest_common.sh@10 -- # set +x 00:31:30.444 01:52:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:30.444 01:52:43 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:30.444 01:52:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:30.444 01:52:43 -- common/autotest_common.sh@10 -- # set +x 00:31:30.444 [2024-07-23 01:52:43.489774] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:30.444 01:52:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:30.444 01:52:43 -- 
target/dif.sh@30 -- # for sub in "$@" 00:31:30.444 01:52:43 -- target/dif.sh@31 -- # create_subsystem 1 00:31:30.444 01:52:43 -- target/dif.sh@18 -- # local sub_id=1 00:31:30.444 01:52:43 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:31:30.444 01:52:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:30.444 01:52:43 -- common/autotest_common.sh@10 -- # set +x 00:31:30.444 bdev_null1 00:31:30.444 01:52:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:30.444 01:52:43 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:30.444 01:52:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:30.444 01:52:43 -- common/autotest_common.sh@10 -- # set +x 00:31:30.444 01:52:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:30.444 01:52:43 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:30.444 01:52:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:30.444 01:52:43 -- common/autotest_common.sh@10 -- # set +x 00:31:30.444 01:52:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:30.444 01:52:43 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:30.444 01:52:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:30.444 01:52:43 -- common/autotest_common.sh@10 -- # set +x 00:31:30.444 01:52:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:30.444 01:52:43 -- target/dif.sh@30 -- # for sub in "$@" 00:31:30.444 01:52:43 -- target/dif.sh@31 -- # create_subsystem 2 00:31:30.444 01:52:43 -- target/dif.sh@18 -- # local sub_id=2 00:31:30.444 01:52:43 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:31:30.444 01:52:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:30.444 01:52:43 -- common/autotest_common.sh@10 
-- # set +x 00:31:30.444 bdev_null2 00:31:30.444 01:52:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:30.444 01:52:43 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:31:30.444 01:52:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:30.444 01:52:43 -- common/autotest_common.sh@10 -- # set +x 00:31:30.444 01:52:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:30.444 01:52:43 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:31:30.444 01:52:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:30.444 01:52:43 -- common/autotest_common.sh@10 -- # set +x 00:31:30.704 01:52:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:30.704 01:52:43 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:30.704 01:52:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:30.704 01:52:43 -- common/autotest_common.sh@10 -- # set +x 00:31:30.704 01:52:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:30.704 01:52:43 -- target/dif.sh@112 -- # fio /dev/fd/62 00:31:30.704 01:52:43 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:31:30.704 01:52:43 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:31:30.704 01:52:43 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:30.704 01:52:43 -- nvmf/common.sh@520 -- # config=() 00:31:30.704 01:52:43 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:30.704 01:52:43 -- nvmf/common.sh@520 -- # local subsystem config 00:31:30.704 01:52:43 -- target/dif.sh@82 -- # gen_fio_conf 00:31:30.704 01:52:43 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:31:30.704 01:52:43 -- 
nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:30.704 01:52:43 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:30.704 01:52:43 -- target/dif.sh@54 -- # local file 00:31:30.704 01:52:43 -- common/autotest_common.sh@1318 -- # local sanitizers 00:31:30.704 01:52:43 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:30.704 { 00:31:30.704 "params": { 00:31:30.704 "name": "Nvme$subsystem", 00:31:30.704 "trtype": "$TEST_TRANSPORT", 00:31:30.704 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:30.704 "adrfam": "ipv4", 00:31:30.704 "trsvcid": "$NVMF_PORT", 00:31:30.704 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:30.704 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:30.704 "hdgst": ${hdgst:-false}, 00:31:30.704 "ddgst": ${ddgst:-false} 00:31:30.704 }, 00:31:30.704 "method": "bdev_nvme_attach_controller" 00:31:30.704 } 00:31:30.704 EOF 00:31:30.704 )") 00:31:30.704 01:52:43 -- target/dif.sh@56 -- # cat 00:31:30.704 01:52:43 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:30.704 01:52:43 -- common/autotest_common.sh@1320 -- # shift 00:31:30.704 01:52:43 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:31:30.704 01:52:43 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:30.704 01:52:43 -- nvmf/common.sh@542 -- # cat 00:31:30.704 01:52:43 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:30.704 01:52:43 -- common/autotest_common.sh@1324 -- # grep libasan 00:31:30.704 01:52:43 -- target/dif.sh@72 -- # (( file = 1 )) 00:31:30.704 01:52:43 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:30.704 01:52:43 -- target/dif.sh@72 -- # (( file <= files )) 00:31:30.704 01:52:43 -- target/dif.sh@73 -- # cat 00:31:30.704 01:52:43 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:30.704 01:52:43 -- nvmf/common.sh@542 -- # 
config+=("$(cat <<-EOF 00:31:30.704 { 00:31:30.704 "params": { 00:31:30.704 "name": "Nvme$subsystem", 00:31:30.704 "trtype": "$TEST_TRANSPORT", 00:31:30.704 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:30.704 "adrfam": "ipv4", 00:31:30.704 "trsvcid": "$NVMF_PORT", 00:31:30.704 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:30.704 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:30.704 "hdgst": ${hdgst:-false}, 00:31:30.704 "ddgst": ${ddgst:-false} 00:31:30.704 }, 00:31:30.704 "method": "bdev_nvme_attach_controller" 00:31:30.704 } 00:31:30.704 EOF 00:31:30.704 )") 00:31:30.704 01:52:43 -- target/dif.sh@72 -- # (( file++ )) 00:31:30.704 01:52:43 -- target/dif.sh@72 -- # (( file <= files )) 00:31:30.704 01:52:43 -- target/dif.sh@73 -- # cat 00:31:30.704 01:52:43 -- nvmf/common.sh@542 -- # cat 00:31:30.704 01:52:43 -- target/dif.sh@72 -- # (( file++ )) 00:31:30.704 01:52:43 -- target/dif.sh@72 -- # (( file <= files )) 00:31:30.704 01:52:43 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:30.704 01:52:43 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:30.704 { 00:31:30.704 "params": { 00:31:30.704 "name": "Nvme$subsystem", 00:31:30.704 "trtype": "$TEST_TRANSPORT", 00:31:30.704 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:30.704 "adrfam": "ipv4", 00:31:30.704 "trsvcid": "$NVMF_PORT", 00:31:30.704 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:30.704 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:30.704 "hdgst": ${hdgst:-false}, 00:31:30.704 "ddgst": ${ddgst:-false} 00:31:30.704 }, 00:31:30.704 "method": "bdev_nvme_attach_controller" 00:31:30.704 } 00:31:30.704 EOF 00:31:30.704 )") 00:31:30.704 01:52:43 -- nvmf/common.sh@542 -- # cat 00:31:30.704 01:52:43 -- nvmf/common.sh@544 -- # jq . 
00:31:30.704 01:52:43 -- nvmf/common.sh@545 -- # IFS=, 00:31:30.704 01:52:43 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:31:30.704 "params": { 00:31:30.704 "name": "Nvme0", 00:31:30.704 "trtype": "tcp", 00:31:30.704 "traddr": "10.0.0.2", 00:31:30.704 "adrfam": "ipv4", 00:31:30.704 "trsvcid": "4420", 00:31:30.704 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:30.704 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:30.704 "hdgst": false, 00:31:30.704 "ddgst": false 00:31:30.704 }, 00:31:30.704 "method": "bdev_nvme_attach_controller" 00:31:30.704 },{ 00:31:30.704 "params": { 00:31:30.704 "name": "Nvme1", 00:31:30.704 "trtype": "tcp", 00:31:30.704 "traddr": "10.0.0.2", 00:31:30.704 "adrfam": "ipv4", 00:31:30.704 "trsvcid": "4420", 00:31:30.704 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:30.704 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:30.704 "hdgst": false, 00:31:30.704 "ddgst": false 00:31:30.704 }, 00:31:30.704 "method": "bdev_nvme_attach_controller" 00:31:30.704 },{ 00:31:30.704 "params": { 00:31:30.704 "name": "Nvme2", 00:31:30.704 "trtype": "tcp", 00:31:30.704 "traddr": "10.0.0.2", 00:31:30.704 "adrfam": "ipv4", 00:31:30.704 "trsvcid": "4420", 00:31:30.704 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:31:30.704 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:31:30.704 "hdgst": false, 00:31:30.704 "ddgst": false 00:31:30.704 }, 00:31:30.704 "method": "bdev_nvme_attach_controller" 00:31:30.704 }' 00:31:30.704 01:52:43 -- common/autotest_common.sh@1324 -- # asan_lib= 00:31:30.704 01:52:43 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:31:30.704 01:52:43 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:30.704 01:52:43 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:30.704 01:52:43 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:31:30.704 01:52:43 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:30.704 01:52:43 -- 
common/autotest_common.sh@1324 -- # asan_lib= 00:31:30.704 01:52:43 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:31:30.704 01:52:43 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:30.704 01:52:43 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:30.962 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:30.962 ... 00:31:30.962 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:30.962 ... 00:31:30.962 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:30.962 ... 00:31:30.962 fio-3.35 00:31:30.962 Starting 24 threads 00:31:30.962 EAL: No free 2048 kB hugepages reported on node 1 00:31:31.904 [2024-07-23 01:52:44.680321] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:31:31.904 [2024-07-23 01:52:44.680398] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:31:41.914 00:31:41.914 filename0: (groupid=0, jobs=1): err= 0: pid=3921734: Tue Jul 23 01:52:54 2024 00:31:41.914 read: IOPS=70, BW=280KiB/s (287kB/s)(2816KiB/10045msec) 00:31:41.914 slat (nsec): min=8537, max=71182, avg=36206.56, stdev=11054.86 00:31:41.914 clat (msec): min=45, max=275, avg=227.93, stdev=37.31 00:31:41.915 lat (msec): min=45, max=275, avg=227.97, stdev=37.31 00:31:41.915 clat percentiles (msec): 00:31:41.915 | 1.00th=[ 47], 5.00th=[ 182], 10.00th=[ 182], 20.00th=[ 192], 00:31:41.915 | 30.00th=[ 232], 40.00th=[ 236], 50.00th=[ 239], 60.00th=[ 241], 00:31:41.915 | 70.00th=[ 243], 80.00th=[ 247], 90.00th=[ 262], 95.00th=[ 268], 00:31:41.915 | 99.00th=[ 275], 99.50th=[ 275], 99.90th=[ 275], 99.95th=[ 275], 00:31:41.915 | 99.99th=[ 275] 00:31:41.915 bw ( KiB/s): min= 256, max= 384, per=4.04%, avg=275.20, stdev=46.89, samples=20 00:31:41.915 iops : min= 64, max= 96, avg=68.80, stdev=11.72, samples=20 00:31:41.915 lat (msec) : 50=2.27%, 250=84.09%, 500=13.64% 00:31:41.915 cpu : usr=96.01%, sys=2.34%, ctx=57, majf=0, minf=9 00:31:41.915 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:41.915 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.915 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.915 issued rwts: total=704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:41.915 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:41.915 filename0: (groupid=0, jobs=1): err= 0: pid=3921735: Tue Jul 23 01:52:54 2024 00:31:41.915 read: IOPS=70, BW=280KiB/s (287kB/s)(2816KiB/10046msec) 00:31:41.915 slat (usec): min=7, max=107, avg=44.32, stdev=19.57 00:31:41.915 clat (msec): min=119, max=271, avg=227.95, stdev=29.82 00:31:41.915 lat (msec): min=120, max=271, avg=228.00, stdev=29.83 00:31:41.915 clat percentiles (msec): 00:31:41.915 
| 1.00th=[ 121], 5.00th=[ 182], 10.00th=[ 184], 20.00th=[ 192], 00:31:41.915 | 30.00th=[ 228], 40.00th=[ 236], 50.00th=[ 239], 60.00th=[ 241], 00:31:41.915 | 70.00th=[ 243], 80.00th=[ 247], 90.00th=[ 253], 95.00th=[ 271], 00:31:41.915 | 99.00th=[ 271], 99.50th=[ 271], 99.90th=[ 271], 99.95th=[ 271], 00:31:41.915 | 99.99th=[ 271] 00:31:41.915 bw ( KiB/s): min= 128, max= 384, per=4.04%, avg=275.20, stdev=62.64, samples=20 00:31:41.915 iops : min= 32, max= 96, avg=68.80, stdev=15.66, samples=20 00:31:41.915 lat (msec) : 250=88.64%, 500=11.36% 00:31:41.915 cpu : usr=97.12%, sys=1.92%, ctx=39, majf=0, minf=9 00:31:41.915 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:41.915 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.915 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.915 issued rwts: total=704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:41.915 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:41.915 filename0: (groupid=0, jobs=1): err= 0: pid=3921736: Tue Jul 23 01:52:54 2024 00:31:41.915 read: IOPS=68, BW=275KiB/s (281kB/s)(2752KiB/10024msec) 00:31:41.915 slat (nsec): min=8146, max=69695, avg=27929.13, stdev=8170.78 00:31:41.915 clat (msec): min=101, max=405, avg=232.67, stdev=35.93 00:31:41.915 lat (msec): min=101, max=405, avg=232.70, stdev=35.93 00:31:41.915 clat percentiles (msec): 00:31:41.915 | 1.00th=[ 117], 5.00th=[ 178], 10.00th=[ 184], 20.00th=[ 197], 00:31:41.915 | 30.00th=[ 232], 40.00th=[ 236], 50.00th=[ 239], 60.00th=[ 243], 00:31:41.915 | 70.00th=[ 245], 80.00th=[ 249], 90.00th=[ 271], 95.00th=[ 271], 00:31:41.915 | 99.00th=[ 388], 99.50th=[ 401], 99.90th=[ 405], 99.95th=[ 405], 00:31:41.915 | 99.99th=[ 405] 00:31:41.915 bw ( KiB/s): min= 240, max= 384, per=4.04%, avg=275.37, stdev=44.49, samples=19 00:31:41.915 iops : min= 60, max= 96, avg=68.84, stdev=11.12, samples=19 00:31:41.915 lat (msec) : 250=82.85%, 500=17.15% 00:31:41.915 cpu : 
usr=97.97%, sys=1.72%, ctx=24, majf=0, minf=9 00:31:41.915 IO depths : 1=2.3%, 2=8.6%, 4=25.0%, 8=53.9%, 16=10.2%, 32=0.0%, >=64=0.0% 00:31:41.915 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.915 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.915 issued rwts: total=688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:41.915 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:41.915 filename0: (groupid=0, jobs=1): err= 0: pid=3921737: Tue Jul 23 01:52:54 2024 00:31:41.915 read: IOPS=70, BW=280KiB/s (287kB/s)(2816KiB/10045msec) 00:31:41.915 slat (nsec): min=10416, max=61881, avg=28552.89, stdev=8427.53 00:31:41.915 clat (msec): min=45, max=274, avg=228.02, stdev=37.35 00:31:41.915 lat (msec): min=45, max=275, avg=228.05, stdev=37.35 00:31:41.915 clat percentiles (msec): 00:31:41.915 | 1.00th=[ 46], 5.00th=[ 182], 10.00th=[ 182], 20.00th=[ 192], 00:31:41.915 | 30.00th=[ 232], 40.00th=[ 236], 50.00th=[ 239], 60.00th=[ 241], 00:31:41.915 | 70.00th=[ 245], 80.00th=[ 247], 90.00th=[ 262], 95.00th=[ 271], 00:31:41.915 | 99.00th=[ 275], 99.50th=[ 275], 99.90th=[ 275], 99.95th=[ 275], 00:31:41.915 | 99.99th=[ 275] 00:31:41.915 bw ( KiB/s): min= 256, max= 384, per=4.04%, avg=275.20, stdev=46.89, samples=20 00:31:41.915 iops : min= 64, max= 96, avg=68.80, stdev=11.72, samples=20 00:31:41.915 lat (msec) : 50=2.27%, 250=84.09%, 500=13.64% 00:31:41.915 cpu : usr=97.71%, sys=1.79%, ctx=33, majf=0, minf=9 00:31:41.915 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:41.915 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.915 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.915 issued rwts: total=704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:41.915 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:41.915 filename0: (groupid=0, jobs=1): err= 0: pid=3921738: Tue Jul 23 01:52:54 2024 00:31:41.915 
read: IOPS=69, BW=280KiB/s (287kB/s)(2808KiB/10035msec) 00:31:41.915 slat (usec): min=14, max=116, avg=35.62, stdev=14.57 00:31:41.915 clat (msec): min=45, max=302, avg=228.37, stdev=37.19 00:31:41.915 lat (msec): min=46, max=302, avg=228.41, stdev=37.19 00:31:41.915 clat percentiles (msec): 00:31:41.915 | 1.00th=[ 47], 5.00th=[ 182], 10.00th=[ 182], 20.00th=[ 197], 00:31:41.915 | 30.00th=[ 228], 40.00th=[ 236], 50.00th=[ 239], 60.00th=[ 241], 00:31:41.915 | 70.00th=[ 245], 80.00th=[ 247], 90.00th=[ 268], 95.00th=[ 268], 00:31:41.915 | 99.00th=[ 279], 99.50th=[ 300], 99.90th=[ 305], 99.95th=[ 305], 00:31:41.915 | 99.99th=[ 305] 00:31:41.915 bw ( KiB/s): min= 237, max= 384, per=4.02%, avg=274.25, stdev=45.46, samples=20 00:31:41.915 iops : min= 59, max= 96, avg=68.55, stdev=11.38, samples=20 00:31:41.915 lat (msec) : 50=1.99%, 250=80.91%, 500=17.09% 00:31:41.915 cpu : usr=98.27%, sys=1.27%, ctx=123, majf=0, minf=9 00:31:41.915 IO depths : 1=3.4%, 2=9.7%, 4=25.1%, 8=52.8%, 16=9.0%, 32=0.0%, >=64=0.0% 00:31:41.915 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.915 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.915 issued rwts: total=702,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:41.915 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:41.915 filename0: (groupid=0, jobs=1): err= 0: pid=3921739: Tue Jul 23 01:52:54 2024 00:31:41.915 read: IOPS=71, BW=288KiB/s (294kB/s)(2880KiB/10015msec) 00:31:41.915 slat (usec): min=5, max=189, avg=73.69, stdev=27.10 00:31:41.915 clat (msec): min=18, max=268, avg=221.91, stdev=44.50 00:31:41.915 lat (msec): min=18, max=268, avg=221.99, stdev=44.52 00:31:41.915 clat percentiles (msec): 00:31:41.915 | 1.00th=[ 19], 5.00th=[ 176], 10.00th=[ 182], 20.00th=[ 190], 00:31:41.915 | 30.00th=[ 228], 40.00th=[ 234], 50.00th=[ 239], 60.00th=[ 241], 00:31:41.915 | 70.00th=[ 243], 80.00th=[ 245], 90.00th=[ 255], 95.00th=[ 268], 00:31:41.915 | 99.00th=[ 271], 99.50th=[ 
271], 99.90th=[ 271], 99.95th=[ 271], 00:31:41.915 | 99.99th=[ 271] 00:31:41.915 bw ( KiB/s): min= 128, max= 512, per=4.12%, avg=281.60, stdev=78.80, samples=20 00:31:41.915 iops : min= 32, max= 128, avg=70.40, stdev=19.70, samples=20 00:31:41.915 lat (msec) : 20=1.81%, 50=0.42%, 100=2.22%, 250=84.44%, 500=11.11% 00:31:41.915 cpu : usr=95.25%, sys=2.73%, ctx=191, majf=0, minf=9 00:31:41.915 IO depths : 1=6.1%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:31:41.915 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.915 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.915 issued rwts: total=720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:41.915 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:41.915 filename0: (groupid=0, jobs=1): err= 0: pid=3921740: Tue Jul 23 01:52:54 2024 00:31:41.915 read: IOPS=68, BW=275KiB/s (281kB/s)(2752KiB/10013msec) 00:31:41.915 slat (usec): min=11, max=103, avg=57.62, stdev=24.70 00:31:41.915 clat (msec): min=110, max=373, avg=232.32, stdev=34.39 00:31:41.915 lat (msec): min=110, max=373, avg=232.37, stdev=34.39 00:31:41.915 clat percentiles (msec): 00:31:41.915 | 1.00th=[ 138], 5.00th=[ 178], 10.00th=[ 184], 20.00th=[ 197], 00:31:41.915 | 30.00th=[ 232], 40.00th=[ 236], 50.00th=[ 239], 60.00th=[ 243], 00:31:41.915 | 70.00th=[ 245], 80.00th=[ 247], 90.00th=[ 271], 95.00th=[ 271], 00:31:41.915 | 99.00th=[ 338], 99.50th=[ 342], 99.90th=[ 372], 99.95th=[ 372], 00:31:41.915 | 99.99th=[ 372] 00:31:41.915 bw ( KiB/s): min= 240, max= 384, per=4.04%, avg=275.37, stdev=49.34, samples=19 00:31:41.915 iops : min= 60, max= 96, avg=68.84, stdev=12.33, samples=19 00:31:41.915 lat (msec) : 250=83.43%, 500=16.57% 00:31:41.915 cpu : usr=96.32%, sys=2.26%, ctx=41, majf=0, minf=9 00:31:41.915 IO depths : 1=4.7%, 2=10.9%, 4=25.0%, 8=51.6%, 16=7.8%, 32=0.0%, >=64=0.0% 00:31:41.915 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.915 
complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.915 issued rwts: total=688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:41.915 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:41.915 filename0: (groupid=0, jobs=1): err= 0: pid=3921741: Tue Jul 23 01:52:54 2024 00:31:41.915 read: IOPS=70, BW=280KiB/s (287kB/s)(2816KiB/10042msec) 00:31:41.915 slat (nsec): min=7948, max=85974, avg=33515.23, stdev=14379.65 00:31:41.915 clat (msec): min=102, max=373, avg=227.77, stdev=44.88 00:31:41.915 lat (msec): min=102, max=373, avg=227.80, stdev=44.88 00:31:41.915 clat percentiles (msec): 00:31:41.915 | 1.00th=[ 108], 5.00th=[ 140], 10.00th=[ 178], 20.00th=[ 188], 00:31:41.915 | 30.00th=[ 226], 40.00th=[ 236], 50.00th=[ 239], 60.00th=[ 241], 00:31:41.915 | 70.00th=[ 243], 80.00th=[ 247], 90.00th=[ 271], 95.00th=[ 279], 00:31:41.915 | 99.00th=[ 342], 99.50th=[ 372], 99.90th=[ 376], 99.95th=[ 376], 00:31:41.915 | 99.99th=[ 376] 00:31:41.915 bw ( KiB/s): min= 144, max= 384, per=4.04%, avg=275.20, stdev=59.32, samples=20 00:31:41.916 iops : min= 36, max= 96, avg=68.80, stdev=14.83, samples=20 00:31:41.916 lat (msec) : 250=84.09%, 500=15.91% 00:31:41.916 cpu : usr=98.05%, sys=1.59%, ctx=16, majf=0, minf=9 00:31:41.916 IO depths : 1=3.4%, 2=9.7%, 4=25.0%, 8=52.8%, 16=9.1%, 32=0.0%, >=64=0.0% 00:31:41.916 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.916 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.916 issued rwts: total=704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:41.916 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:41.916 filename1: (groupid=0, jobs=1): err= 0: pid=3921742: Tue Jul 23 01:52:54 2024 00:31:41.916 read: IOPS=68, BW=275KiB/s (281kB/s)(2752KiB/10018msec) 00:31:41.916 slat (usec): min=5, max=118, avg=60.01, stdev=19.96 00:31:41.916 clat (msec): min=110, max=361, avg=232.48, stdev=39.31 00:31:41.916 lat (msec): min=110, max=361, avg=232.54, 
stdev=39.32 00:31:41.916 clat percentiles (msec): 00:31:41.916 | 1.00th=[ 146], 5.00th=[ 167], 10.00th=[ 182], 20.00th=[ 192], 00:31:41.916 | 30.00th=[ 228], 40.00th=[ 236], 50.00th=[ 239], 60.00th=[ 243], 00:31:41.916 | 70.00th=[ 245], 80.00th=[ 249], 90.00th=[ 271], 95.00th=[ 279], 00:31:41.916 | 99.00th=[ 338], 99.50th=[ 359], 99.90th=[ 363], 99.95th=[ 363], 00:31:41.916 | 99.99th=[ 363] 00:31:41.916 bw ( KiB/s): min= 128, max= 384, per=3.93%, avg=268.75, stdev=55.58, samples=20 00:31:41.916 iops : min= 32, max= 96, avg=67.15, stdev=13.91, samples=20 00:31:41.916 lat (msec) : 250=81.69%, 500=18.31% 00:31:41.916 cpu : usr=98.43%, sys=1.13%, ctx=17, majf=0, minf=9 00:31:41.916 IO depths : 1=3.3%, 2=9.6%, 4=25.0%, 8=52.9%, 16=9.2%, 32=0.0%, >=64=0.0% 00:31:41.916 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.916 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.916 issued rwts: total=688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:41.916 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:41.916 filename1: (groupid=0, jobs=1): err= 0: pid=3921743: Tue Jul 23 01:52:54 2024 00:31:41.916 read: IOPS=70, BW=280KiB/s (287kB/s)(2816KiB/10043msec) 00:31:41.916 slat (nsec): min=10886, max=97455, avg=56702.77, stdev=20670.55 00:31:41.916 clat (msec): min=100, max=269, avg=227.77, stdev=26.55 00:31:41.916 lat (msec): min=100, max=269, avg=227.83, stdev=26.55 00:31:41.916 clat percentiles (msec): 00:31:41.916 | 1.00th=[ 176], 5.00th=[ 180], 10.00th=[ 184], 20.00th=[ 190], 00:31:41.916 | 30.00th=[ 228], 40.00th=[ 234], 50.00th=[ 241], 60.00th=[ 241], 00:31:41.916 | 70.00th=[ 243], 80.00th=[ 245], 90.00th=[ 247], 95.00th=[ 259], 00:31:41.916 | 99.00th=[ 271], 99.50th=[ 271], 99.90th=[ 271], 99.95th=[ 271], 00:31:41.916 | 99.99th=[ 271] 00:31:41.916 bw ( KiB/s): min= 256, max= 384, per=4.04%, avg=275.20, stdev=46.89, samples=20 00:31:41.916 iops : min= 64, max= 96, avg=68.80, stdev=11.72, samples=20 
00:31:41.916 lat (msec) : 250=90.62%, 500=9.38% 00:31:41.916 cpu : usr=98.01%, sys=1.52%, ctx=57, majf=0, minf=9 00:31:41.916 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:31:41.916 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.916 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.916 issued rwts: total=704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:41.916 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:41.916 filename1: (groupid=0, jobs=1): err= 0: pid=3921744: Tue Jul 23 01:52:54 2024 00:31:41.916 read: IOPS=69, BW=280KiB/s (286kB/s)(2808KiB/10037msec) 00:31:41.916 slat (usec): min=10, max=288, avg=61.08, stdev=43.64 00:31:41.916 clat (msec): min=46, max=302, avg=228.20, stdev=37.28 00:31:41.916 lat (msec): min=46, max=302, avg=228.26, stdev=37.28 00:31:41.916 clat percentiles (msec): 00:31:41.916 | 1.00th=[ 47], 5.00th=[ 178], 10.00th=[ 182], 20.00th=[ 201], 00:31:41.916 | 30.00th=[ 228], 40.00th=[ 236], 50.00th=[ 239], 60.00th=[ 241], 00:31:41.916 | 70.00th=[ 243], 80.00th=[ 247], 90.00th=[ 268], 95.00th=[ 268], 00:31:41.916 | 99.00th=[ 275], 99.50th=[ 305], 99.90th=[ 305], 99.95th=[ 305], 00:31:41.916 | 99.99th=[ 305] 00:31:41.916 bw ( KiB/s): min= 235, max= 384, per=4.02%, avg=274.15, stdev=45.55, samples=20 00:31:41.916 iops : min= 58, max= 96, avg=68.50, stdev=11.42, samples=20 00:31:41.916 lat (msec) : 50=1.99%, 250=80.91%, 500=17.09% 00:31:41.916 cpu : usr=95.20%, sys=2.70%, ctx=116, majf=0, minf=9 00:31:41.916 IO depths : 1=3.4%, 2=9.7%, 4=25.1%, 8=52.8%, 16=9.0%, 32=0.0%, >=64=0.0% 00:31:41.916 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.916 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.916 issued rwts: total=702,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:41.916 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:41.916 filename1: (groupid=0, jobs=1): err= 
0: pid=3921745: Tue Jul 23 01:52:54 2024 00:31:41.916 read: IOPS=75, BW=301KiB/s (308kB/s)(3016KiB/10022msec) 00:31:41.916 slat (usec): min=11, max=107, avg=62.33, stdev=16.46 00:31:41.916 clat (msec): min=44, max=389, avg=212.34, stdev=50.54 00:31:41.916 lat (msec): min=44, max=389, avg=212.40, stdev=50.54 00:31:41.916 clat percentiles (msec): 00:31:41.916 | 1.00th=[ 74], 5.00th=[ 129], 10.00th=[ 159], 20.00th=[ 174], 00:31:41.916 | 30.00th=[ 182], 40.00th=[ 192], 50.00th=[ 226], 60.00th=[ 236], 00:31:41.916 | 70.00th=[ 241], 80.00th=[ 247], 90.00th=[ 268], 95.00th=[ 275], 00:31:41.916 | 99.00th=[ 376], 99.50th=[ 388], 99.90th=[ 388], 99.95th=[ 388], 00:31:41.916 | 99.99th=[ 388] 00:31:41.916 bw ( KiB/s): min= 256, max= 384, per=4.36%, avg=297.26, stdev=50.96, samples=19 00:31:41.916 iops : min= 64, max= 96, avg=74.32, stdev=12.74, samples=19 00:31:41.916 lat (msec) : 50=0.27%, 100=1.59%, 250=81.70%, 500=16.45% 00:31:41.916 cpu : usr=98.19%, sys=1.24%, ctx=75, majf=0, minf=9 00:31:41.916 IO depths : 1=1.1%, 2=3.2%, 4=10.5%, 8=71.8%, 16=13.5%, 32=0.0%, >=64=0.0% 00:31:41.916 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.916 complete : 0=0.0%, 4=90.8%, 8=5.6%, 16=3.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.916 issued rwts: total=754,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:41.916 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:41.916 filename1: (groupid=0, jobs=1): err= 0: pid=3921746: Tue Jul 23 01:52:54 2024 00:31:41.916 read: IOPS=70, BW=281KiB/s (287kB/s)(2816KiB/10033msec) 00:31:41.916 slat (nsec): min=8126, max=87290, avg=27174.45, stdev=9643.38 00:31:41.916 clat (msec): min=132, max=339, avg=227.78, stdev=32.61 00:31:41.916 lat (msec): min=132, max=339, avg=227.81, stdev=32.61 00:31:41.916 clat percentiles (msec): 00:31:41.916 | 1.00th=[ 133], 5.00th=[ 163], 10.00th=[ 182], 20.00th=[ 190], 00:31:41.916 | 30.00th=[ 228], 40.00th=[ 236], 50.00th=[ 239], 60.00th=[ 241], 00:31:41.916 | 70.00th=[ 243], 80.00th=[ 
247], 90.00th=[ 259], 95.00th=[ 271], 00:31:41.916 | 99.00th=[ 275], 99.50th=[ 275], 99.90th=[ 338], 99.95th=[ 338], 00:31:41.916 | 99.99th=[ 338] 00:31:41.916 bw ( KiB/s): min= 128, max= 384, per=4.04%, avg=275.20, stdev=62.64, samples=20 00:31:41.916 iops : min= 32, max= 96, avg=68.80, stdev=15.66, samples=20 00:31:41.916 lat (msec) : 250=86.08%, 500=13.92% 00:31:41.916 cpu : usr=97.53%, sys=1.63%, ctx=27, majf=0, minf=9 00:31:41.916 IO depths : 1=6.0%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:31:41.916 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.916 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.916 issued rwts: total=704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:41.916 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:41.916 filename1: (groupid=0, jobs=1): err= 0: pid=3921747: Tue Jul 23 01:52:54 2024 00:31:41.916 read: IOPS=70, BW=280KiB/s (287kB/s)(2816KiB/10055msec) 00:31:41.916 slat (nsec): min=7902, max=82334, avg=35575.03, stdev=14789.79 00:31:41.916 clat (msec): min=98, max=374, avg=228.05, stdev=44.37 00:31:41.916 lat (msec): min=98, max=374, avg=228.08, stdev=44.37 00:31:41.916 clat percentiles (msec): 00:31:41.916 | 1.00th=[ 105], 5.00th=[ 140], 10.00th=[ 182], 20.00th=[ 190], 00:31:41.916 | 30.00th=[ 226], 40.00th=[ 236], 50.00th=[ 239], 60.00th=[ 241], 00:31:41.916 | 70.00th=[ 243], 80.00th=[ 247], 90.00th=[ 271], 95.00th=[ 279], 00:31:41.916 | 99.00th=[ 342], 99.50th=[ 372], 99.90th=[ 376], 99.95th=[ 376], 00:31:41.916 | 99.99th=[ 376] 00:31:41.916 bw ( KiB/s): min= 144, max= 384, per=4.04%, avg=275.20, stdev=57.71, samples=20 00:31:41.916 iops : min= 36, max= 96, avg=68.80, stdev=14.43, samples=20 00:31:41.916 lat (msec) : 100=0.28%, 250=83.81%, 500=15.91% 00:31:41.916 cpu : usr=98.20%, sys=1.37%, ctx=33, majf=0, minf=9 00:31:41.916 IO depths : 1=3.1%, 2=9.2%, 4=24.9%, 8=53.4%, 16=9.4%, 32=0.0%, >=64=0.0% 00:31:41.916 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.916 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.916 issued rwts: total=704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:41.916 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:41.916 filename1: (groupid=0, jobs=1): err= 0: pid=3921748: Tue Jul 23 01:52:54 2024 00:31:41.916 read: IOPS=95, BW=382KiB/s (391kB/s)(3840KiB/10058msec) 00:31:41.916 slat (usec): min=8, max=283, avg=48.71, stdev=29.76 00:31:41.916 clat (msec): min=68, max=297, avg=167.19, stdev=33.40 00:31:41.916 lat (msec): min=68, max=297, avg=167.24, stdev=33.40 00:31:41.916 clat percentiles (msec): 00:31:41.916 | 1.00th=[ 69], 5.00th=[ 123], 10.00th=[ 128], 20.00th=[ 133], 00:31:41.916 | 30.00th=[ 146], 40.00th=[ 165], 50.00th=[ 176], 60.00th=[ 182], 00:31:41.916 | 70.00th=[ 186], 80.00th=[ 190], 90.00th=[ 207], 95.00th=[ 215], 00:31:41.916 | 99.00th=[ 228], 99.50th=[ 288], 99.90th=[ 300], 99.95th=[ 300], 00:31:41.916 | 99.99th=[ 300] 00:31:41.916 bw ( KiB/s): min= 256, max= 464, per=5.53%, avg=377.60, stdev=43.25, samples=20 00:31:41.916 iops : min= 64, max= 116, avg=94.40, stdev=10.81, samples=20 00:31:41.916 lat (msec) : 100=1.46%, 250=97.71%, 500=0.83% 00:31:41.916 cpu : usr=97.18%, sys=1.87%, ctx=42, majf=0, minf=9 00:31:41.916 IO depths : 1=1.1%, 2=3.5%, 4=13.4%, 8=70.5%, 16=11.4%, 32=0.0%, >=64=0.0% 00:31:41.916 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.916 complete : 0=0.0%, 4=90.8%, 8=3.6%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.916 issued rwts: total=960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:41.916 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:41.917 filename1: (groupid=0, jobs=1): err= 0: pid=3921749: Tue Jul 23 01:52:54 2024 00:31:41.917 read: IOPS=68, BW=275KiB/s (281kB/s)(2752KiB/10015msec) 00:31:41.917 slat (usec): min=10, max=153, avg=56.32, stdev=23.02 00:31:41.917 clat (msec): min=140, max=410, avg=232.44, 
stdev=31.65 00:31:41.917 lat (msec): min=140, max=410, avg=232.50, stdev=31.65 00:31:41.917 clat percentiles (msec): 00:31:41.917 | 1.00th=[ 140], 5.00th=[ 182], 10.00th=[ 184], 20.00th=[ 207], 00:31:41.917 | 30.00th=[ 228], 40.00th=[ 236], 50.00th=[ 239], 60.00th=[ 243], 00:31:41.917 | 70.00th=[ 243], 80.00th=[ 247], 90.00th=[ 271], 95.00th=[ 271], 00:31:41.917 | 99.00th=[ 342], 99.50th=[ 342], 99.90th=[ 409], 99.95th=[ 409], 00:31:41.917 | 99.99th=[ 409] 00:31:41.917 bw ( KiB/s): min= 240, max= 384, per=4.05%, avg=276.21, stdev=46.14, samples=19 00:31:41.917 iops : min= 60, max= 96, avg=69.05, stdev=11.53, samples=19 00:31:41.917 lat (msec) : 250=83.43%, 500=16.57% 00:31:41.917 cpu : usr=97.01%, sys=1.90%, ctx=77, majf=0, minf=9 00:31:41.917 IO depths : 1=2.3%, 2=8.4%, 4=24.6%, 8=54.5%, 16=10.2%, 32=0.0%, >=64=0.0% 00:31:41.917 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.917 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.917 issued rwts: total=688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:41.917 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:41.917 filename2: (groupid=0, jobs=1): err= 0: pid=3921750: Tue Jul 23 01:52:54 2024 00:31:41.917 read: IOPS=68, BW=275KiB/s (281kB/s)(2752KiB/10022msec) 00:31:41.917 slat (usec): min=8, max=241, avg=45.44, stdev=36.56 00:31:41.917 clat (msec): min=99, max=373, avg=232.51, stdev=42.26 00:31:41.917 lat (msec): min=99, max=373, avg=232.55, stdev=42.26 00:31:41.917 clat percentiles (msec): 00:31:41.917 | 1.00th=[ 105], 5.00th=[ 148], 10.00th=[ 182], 20.00th=[ 192], 00:31:41.917 | 30.00th=[ 228], 40.00th=[ 236], 50.00th=[ 239], 60.00th=[ 243], 00:31:41.917 | 70.00th=[ 245], 80.00th=[ 249], 90.00th=[ 271], 95.00th=[ 284], 00:31:41.917 | 99.00th=[ 342], 99.50th=[ 372], 99.90th=[ 376], 99.95th=[ 376], 00:31:41.917 | 99.99th=[ 376] 00:31:41.917 bw ( KiB/s): min= 256, max= 384, per=4.04%, avg=275.37, stdev=46.06, samples=19 00:31:41.917 iops : 
min= 64, max= 96, avg=68.84, stdev=11.51, samples=19 00:31:41.917 lat (msec) : 100=0.29%, 250=81.10%, 500=18.60% 00:31:41.917 cpu : usr=96.24%, sys=2.25%, ctx=113, majf=0, minf=9 00:31:41.917 IO depths : 1=3.2%, 2=9.4%, 4=25.0%, 8=53.1%, 16=9.3%, 32=0.0%, >=64=0.0% 00:31:41.917 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.917 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.917 issued rwts: total=688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:41.917 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:41.917 filename2: (groupid=0, jobs=1): err= 0: pid=3921751: Tue Jul 23 01:52:54 2024 00:31:41.917 read: IOPS=70, BW=281KiB/s (287kB/s)(2816KiB/10033msec) 00:31:41.917 slat (usec): min=9, max=150, avg=34.98, stdev=22.10 00:31:41.917 clat (msec): min=107, max=271, avg=227.72, stdev=30.92 00:31:41.917 lat (msec): min=107, max=271, avg=227.76, stdev=30.92 00:31:41.917 clat percentiles (msec): 00:31:41.917 | 1.00th=[ 108], 5.00th=[ 182], 10.00th=[ 184], 20.00th=[ 192], 00:31:41.917 | 30.00th=[ 228], 40.00th=[ 236], 50.00th=[ 239], 60.00th=[ 241], 00:31:41.917 | 70.00th=[ 243], 80.00th=[ 247], 90.00th=[ 253], 95.00th=[ 271], 00:31:41.917 | 99.00th=[ 271], 99.50th=[ 271], 99.90th=[ 271], 99.95th=[ 271], 00:31:41.917 | 99.99th=[ 271] 00:31:41.917 bw ( KiB/s): min= 128, max= 384, per=4.04%, avg=275.20, stdev=62.64, samples=20 00:31:41.917 iops : min= 32, max= 96, avg=68.80, stdev=15.66, samples=20 00:31:41.917 lat (msec) : 250=88.64%, 500=11.36% 00:31:41.917 cpu : usr=96.80%, sys=2.10%, ctx=16, majf=0, minf=9 00:31:41.917 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:41.917 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.917 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.917 issued rwts: total=704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:41.917 latency : target=0, window=0, percentile=100.00%, 
depth=16 00:31:41.917 filename2: (groupid=0, jobs=1): err= 0: pid=3921752: Tue Jul 23 01:52:54 2024 00:31:41.917 read: IOPS=70, BW=281KiB/s (287kB/s)(2816KiB/10034msec) 00:31:41.917 slat (usec): min=14, max=107, avg=42.06, stdev=19.24 00:31:41.917 clat (msec): min=45, max=282, avg=227.70, stdev=37.40 00:31:41.917 lat (msec): min=45, max=282, avg=227.74, stdev=37.40 00:31:41.917 clat percentiles (msec): 00:31:41.917 | 1.00th=[ 46], 5.00th=[ 182], 10.00th=[ 182], 20.00th=[ 192], 00:31:41.917 | 30.00th=[ 230], 40.00th=[ 236], 50.00th=[ 239], 60.00th=[ 241], 00:31:41.917 | 70.00th=[ 245], 80.00th=[ 247], 90.00th=[ 262], 95.00th=[ 268], 00:31:41.917 | 99.00th=[ 271], 99.50th=[ 271], 99.90th=[ 284], 99.95th=[ 284], 00:31:41.917 | 99.99th=[ 284] 00:31:41.917 bw ( KiB/s): min= 252, max= 384, per=4.04%, avg=275.00, stdev=46.99, samples=20 00:31:41.917 iops : min= 63, max= 96, avg=68.75, stdev=11.75, samples=20 00:31:41.917 lat (msec) : 50=2.27%, 250=82.95%, 500=14.77% 00:31:41.917 cpu : usr=96.90%, sys=2.08%, ctx=20, majf=0, minf=9 00:31:41.917 IO depths : 1=5.5%, 2=11.8%, 4=25.0%, 8=50.7%, 16=7.0%, 32=0.0%, >=64=0.0% 00:31:41.917 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.917 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.917 issued rwts: total=704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:41.917 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:41.917 filename2: (groupid=0, jobs=1): err= 0: pid=3921753: Tue Jul 23 01:52:54 2024 00:31:41.917 read: IOPS=71, BW=288KiB/s (294kB/s)(2880KiB/10014msec) 00:31:41.917 slat (usec): min=4, max=261, avg=58.05, stdev=37.97 00:31:41.917 clat (msec): min=7, max=269, avg=222.01, stdev=44.78 00:31:41.917 lat (msec): min=7, max=269, avg=222.06, stdev=44.79 00:31:41.917 clat percentiles (msec): 00:31:41.917 | 1.00th=[ 19], 5.00th=[ 171], 10.00th=[ 182], 20.00th=[ 190], 00:31:41.917 | 30.00th=[ 228], 40.00th=[ 234], 50.00th=[ 239], 60.00th=[ 241], 
00:31:41.917 | 70.00th=[ 243], 80.00th=[ 245], 90.00th=[ 255], 95.00th=[ 268], 00:31:41.917 | 99.00th=[ 271], 99.50th=[ 271], 99.90th=[ 271], 99.95th=[ 271], 00:31:41.917 | 99.99th=[ 271] 00:31:41.917 bw ( KiB/s): min= 128, max= 512, per=4.12%, avg=281.60, stdev=78.80, samples=20 00:31:41.917 iops : min= 32, max= 128, avg=70.40, stdev=19.70, samples=20 00:31:41.917 lat (msec) : 10=0.97%, 20=1.25%, 100=1.94%, 250=84.72%, 500=11.11% 00:31:41.917 cpu : usr=96.78%, sys=2.14%, ctx=44, majf=0, minf=9 00:31:41.917 IO depths : 1=5.8%, 2=11.9%, 4=24.4%, 8=51.1%, 16=6.7%, 32=0.0%, >=64=0.0% 00:31:41.917 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.917 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.917 issued rwts: total=720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:41.917 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:41.917 filename2: (groupid=0, jobs=1): err= 0: pid=3921754: Tue Jul 23 01:52:54 2024 00:31:41.917 read: IOPS=70, BW=280KiB/s (287kB/s)(2816KiB/10045msec) 00:31:41.917 slat (usec): min=10, max=183, avg=71.82, stdev=23.35 00:31:41.917 clat (msec): min=45, max=275, avg=227.64, stdev=37.49 00:31:41.917 lat (msec): min=45, max=275, avg=227.71, stdev=37.50 00:31:41.917 clat percentiles (msec): 00:31:41.917 | 1.00th=[ 46], 5.00th=[ 182], 10.00th=[ 182], 20.00th=[ 192], 00:31:41.917 | 30.00th=[ 230], 40.00th=[ 236], 50.00th=[ 239], 60.00th=[ 241], 00:31:41.917 | 70.00th=[ 243], 80.00th=[ 245], 90.00th=[ 262], 95.00th=[ 268], 00:31:41.917 | 99.00th=[ 275], 99.50th=[ 275], 99.90th=[ 275], 99.95th=[ 275], 00:31:41.917 | 99.99th=[ 275] 00:31:41.917 bw ( KiB/s): min= 256, max= 384, per=4.04%, avg=275.20, stdev=46.89, samples=20 00:31:41.917 iops : min= 64, max= 96, avg=68.80, stdev=11.72, samples=20 00:31:41.917 lat (msec) : 50=2.27%, 250=83.52%, 500=14.20% 00:31:41.917 cpu : usr=94.34%, sys=3.13%, ctx=168, majf=0, minf=9 00:31:41.917 IO depths : 1=5.8%, 2=12.1%, 4=25.0%, 8=50.4%, 
16=6.7%, 32=0.0%, >=64=0.0% 00:31:41.917 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.917 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.917 issued rwts: total=704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:41.917 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:41.917 filename2: (groupid=0, jobs=1): err= 0: pid=3921755: Tue Jul 23 01:52:54 2024 00:31:41.917 read: IOPS=70, BW=280KiB/s (287kB/s)(2816KiB/10045msec) 00:31:41.917 slat (nsec): min=8421, max=83892, avg=36072.12, stdev=12738.26 00:31:41.917 clat (msec): min=45, max=291, avg=227.96, stdev=37.71 00:31:41.917 lat (msec): min=45, max=291, avg=227.99, stdev=37.71 00:31:41.917 clat percentiles (msec): 00:31:41.917 | 1.00th=[ 46], 5.00th=[ 178], 10.00th=[ 182], 20.00th=[ 209], 00:31:41.917 | 30.00th=[ 230], 40.00th=[ 236], 50.00th=[ 239], 60.00th=[ 241], 00:31:41.917 | 70.00th=[ 245], 80.00th=[ 247], 90.00th=[ 264], 95.00th=[ 268], 00:31:41.917 | 99.00th=[ 275], 99.50th=[ 275], 99.90th=[ 292], 99.95th=[ 292], 00:31:41.917 | 99.99th=[ 292] 00:31:41.917 bw ( KiB/s): min= 256, max= 384, per=4.04%, avg=275.20, stdev=44.84, samples=20 00:31:41.917 iops : min= 64, max= 96, avg=68.80, stdev=11.21, samples=20 00:31:41.917 lat (msec) : 50=2.27%, 250=82.67%, 500=15.06% 00:31:41.917 cpu : usr=98.00%, sys=1.48%, ctx=28, majf=0, minf=9 00:31:41.917 IO depths : 1=4.7%, 2=10.9%, 4=25.0%, 8=51.6%, 16=7.8%, 32=0.0%, >=64=0.0% 00:31:41.917 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.917 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.917 issued rwts: total=704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:41.917 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:41.917 filename2: (groupid=0, jobs=1): err= 0: pid=3921756: Tue Jul 23 01:52:54 2024 00:31:41.917 read: IOPS=68, BW=274KiB/s (281kB/s)(2752KiB/10026msec) 00:31:41.917 slat (usec): min=5, max=110, avg=51.33, 
stdev=25.30 00:31:41.917 clat (msec): min=99, max=372, avg=232.54, stdev=42.25 00:31:41.917 lat (msec): min=99, max=372, avg=232.59, stdev=42.26 00:31:41.917 clat percentiles (msec): 00:31:41.917 | 1.00th=[ 105], 5.00th=[ 148], 10.00th=[ 182], 20.00th=[ 192], 00:31:41.918 | 30.00th=[ 228], 40.00th=[ 236], 50.00th=[ 239], 60.00th=[ 243], 00:31:41.918 | 70.00th=[ 245], 80.00th=[ 249], 90.00th=[ 271], 95.00th=[ 288], 00:31:41.918 | 99.00th=[ 342], 99.50th=[ 372], 99.90th=[ 372], 99.95th=[ 372], 00:31:41.918 | 99.99th=[ 372] 00:31:41.918 bw ( KiB/s): min= 144, max= 384, per=3.93%, avg=268.80, stdev=53.60, samples=20 00:31:41.918 iops : min= 36, max= 96, avg=67.20, stdev=13.40, samples=20 00:31:41.918 lat (msec) : 100=0.29%, 250=81.10%, 500=18.60% 00:31:41.918 cpu : usr=97.43%, sys=1.79%, ctx=110, majf=0, minf=9 00:31:41.918 IO depths : 1=3.2%, 2=9.4%, 4=25.0%, 8=53.1%, 16=9.3%, 32=0.0%, >=64=0.0% 00:31:41.918 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.918 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.918 issued rwts: total=688,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:41.918 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:41.918 filename2: (groupid=0, jobs=1): err= 0: pid=3921757: Tue Jul 23 01:52:54 2024 00:31:41.918 read: IOPS=70, BW=280KiB/s (287kB/s)(2816KiB/10057msec) 00:31:41.918 slat (nsec): min=6319, max=95623, avg=52180.37, stdev=15054.37 00:31:41.918 clat (msec): min=111, max=337, avg=227.93, stdev=31.77 00:31:41.918 lat (msec): min=111, max=337, avg=227.98, stdev=31.78 00:31:41.918 clat percentiles (msec): 00:31:41.918 | 1.00th=[ 124], 5.00th=[ 178], 10.00th=[ 184], 20.00th=[ 192], 00:31:41.918 | 30.00th=[ 228], 40.00th=[ 236], 50.00th=[ 239], 60.00th=[ 241], 00:31:41.918 | 70.00th=[ 243], 80.00th=[ 247], 90.00th=[ 253], 95.00th=[ 271], 00:31:41.918 | 99.00th=[ 271], 99.50th=[ 330], 99.90th=[ 338], 99.95th=[ 338], 00:31:41.918 | 99.99th=[ 338] 00:31:41.918 bw ( KiB/s): 
min= 128, max= 384, per=4.04%, avg=275.20, stdev=62.64, samples=20 00:31:41.918 iops : min= 32, max= 96, avg=68.80, stdev=15.66, samples=20 00:31:41.918 lat (msec) : 250=87.78%, 500=12.22% 00:31:41.918 cpu : usr=97.87%, sys=1.65%, ctx=48, majf=0, minf=9 00:31:41.918 IO depths : 1=5.8%, 2=12.1%, 4=25.0%, 8=50.4%, 16=6.7%, 32=0.0%, >=64=0.0% 00:31:41.918 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.918 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.918 issued rwts: total=704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:41.918 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:41.918 00:31:41.918 Run status group 0 (all jobs): 00:31:41.918 READ: bw=6814KiB/s (6978kB/s), 274KiB/s-382KiB/s (281kB/s-391kB/s), io=66.9MiB (70.2MB), run=10013-10058msec 00:31:42.176 01:52:55 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:31:42.176 01:52:55 -- target/dif.sh@43 -- # local sub 00:31:42.176 01:52:55 -- target/dif.sh@45 -- # for sub in "$@" 00:31:42.176 01:52:55 -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:42.176 01:52:55 -- target/dif.sh@36 -- # local sub_id=0 00:31:42.176 01:52:55 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:42.176 01:52:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:42.176 01:52:55 -- common/autotest_common.sh@10 -- # set +x 00:31:42.176 01:52:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:42.176 01:52:55 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:42.176 01:52:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:42.176 01:52:55 -- common/autotest_common.sh@10 -- # set +x 00:31:42.176 01:52:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:42.176 01:52:55 -- target/dif.sh@45 -- # for sub in "$@" 00:31:42.176 01:52:55 -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:42.176 01:52:55 -- target/dif.sh@36 -- # local sub_id=1 00:31:42.176 01:52:55 -- 
target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:42.176 01:52:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:42.176 01:52:55 -- common/autotest_common.sh@10 -- # set +x 00:31:42.176 01:52:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:42.176 01:52:55 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:42.176 01:52:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:42.176 01:52:55 -- common/autotest_common.sh@10 -- # set +x 00:31:42.176 01:52:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:42.176 01:52:55 -- target/dif.sh@45 -- # for sub in "$@" 00:31:42.176 01:52:55 -- target/dif.sh@46 -- # destroy_subsystem 2 00:31:42.176 01:52:55 -- target/dif.sh@36 -- # local sub_id=2 00:31:42.176 01:52:55 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:31:42.176 01:52:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:42.176 01:52:55 -- common/autotest_common.sh@10 -- # set +x 00:31:42.176 01:52:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:42.176 01:52:55 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:31:42.176 01:52:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:42.176 01:52:55 -- common/autotest_common.sh@10 -- # set +x 00:31:42.176 01:52:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:42.176 01:52:55 -- target/dif.sh@115 -- # NULL_DIF=1 00:31:42.176 01:52:55 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:31:42.176 01:52:55 -- target/dif.sh@115 -- # numjobs=2 00:31:42.176 01:52:55 -- target/dif.sh@115 -- # iodepth=8 00:31:42.176 01:52:55 -- target/dif.sh@115 -- # runtime=5 00:31:42.176 01:52:55 -- target/dif.sh@115 -- # files=1 00:31:42.176 01:52:55 -- target/dif.sh@117 -- # create_subsystems 0 1 00:31:42.176 01:52:55 -- target/dif.sh@28 -- # local sub 00:31:42.176 01:52:55 -- target/dif.sh@30 -- # for sub in "$@" 00:31:42.176 01:52:55 -- target/dif.sh@31 -- # 
create_subsystem 0 00:31:42.176 01:52:55 -- target/dif.sh@18 -- # local sub_id=0 00:31:42.176 01:52:55 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:42.176 01:52:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:42.176 01:52:55 -- common/autotest_common.sh@10 -- # set +x 00:31:42.176 bdev_null0 00:31:42.176 01:52:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:42.176 01:52:55 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:42.176 01:52:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:42.176 01:52:55 -- common/autotest_common.sh@10 -- # set +x 00:31:42.176 01:52:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:42.176 01:52:55 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:42.176 01:52:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:42.176 01:52:55 -- common/autotest_common.sh@10 -- # set +x 00:31:42.176 01:52:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:42.176 01:52:55 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:42.176 01:52:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:42.176 01:52:55 -- common/autotest_common.sh@10 -- # set +x 00:31:42.176 [2024-07-23 01:52:55.254832] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:42.177 01:52:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:42.177 01:52:55 -- target/dif.sh@30 -- # for sub in "$@" 00:31:42.177 01:52:55 -- target/dif.sh@31 -- # create_subsystem 1 00:31:42.177 01:52:55 -- target/dif.sh@18 -- # local sub_id=1 00:31:42.177 01:52:55 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:31:42.177 01:52:55 -- common/autotest_common.sh@551 -- # xtrace_disable 
00:31:42.177 01:52:55 -- common/autotest_common.sh@10 -- # set +x 00:31:42.177 bdev_null1 00:31:42.177 01:52:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:42.177 01:52:55 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:42.177 01:52:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:42.177 01:52:55 -- common/autotest_common.sh@10 -- # set +x 00:31:42.435 01:52:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:42.435 01:52:55 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:42.435 01:52:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:42.435 01:52:55 -- common/autotest_common.sh@10 -- # set +x 00:31:42.435 01:52:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:42.435 01:52:55 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:42.435 01:52:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:42.435 01:52:55 -- common/autotest_common.sh@10 -- # set +x 00:31:42.435 01:52:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:42.435 01:52:55 -- target/dif.sh@118 -- # fio /dev/fd/62 00:31:42.435 01:52:55 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:31:42.435 01:52:55 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:31:42.435 01:52:55 -- nvmf/common.sh@520 -- # config=() 00:31:42.435 01:52:55 -- nvmf/common.sh@520 -- # local subsystem config 00:31:42.435 01:52:55 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:42.435 01:52:55 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:42.435 01:52:55 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:42.435 { 00:31:42.435 "params": { 00:31:42.435 "name": "Nvme$subsystem", 00:31:42.435 "trtype": "$TEST_TRANSPORT", 00:31:42.435 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:42.435 "adrfam": 
"ipv4", 00:31:42.435 "trsvcid": "$NVMF_PORT", 00:31:42.435 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:42.435 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:42.435 "hdgst": ${hdgst:-false}, 00:31:42.435 "ddgst": ${ddgst:-false} 00:31:42.435 }, 00:31:42.435 "method": "bdev_nvme_attach_controller" 00:31:42.435 } 00:31:42.435 EOF 00:31:42.435 )") 00:31:42.435 01:52:55 -- target/dif.sh@82 -- # gen_fio_conf 00:31:42.435 01:52:55 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:42.435 01:52:55 -- target/dif.sh@54 -- # local file 00:31:42.435 01:52:55 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:31:42.435 01:52:55 -- target/dif.sh@56 -- # cat 00:31:42.435 01:52:55 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:42.435 01:52:55 -- common/autotest_common.sh@1318 -- # local sanitizers 00:31:42.435 01:52:55 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:42.435 01:52:55 -- common/autotest_common.sh@1320 -- # shift 00:31:42.435 01:52:55 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:31:42.435 01:52:55 -- nvmf/common.sh@542 -- # cat 00:31:42.435 01:52:55 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:42.435 01:52:55 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:42.435 01:52:55 -- target/dif.sh@72 -- # (( file = 1 )) 00:31:42.435 01:52:55 -- target/dif.sh@72 -- # (( file <= files )) 00:31:42.435 01:52:55 -- common/autotest_common.sh@1324 -- # grep libasan 00:31:42.435 01:52:55 -- target/dif.sh@73 -- # cat 00:31:42.435 01:52:55 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:42.435 01:52:55 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:42.435 
01:52:55 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:42.435 { 00:31:42.435 "params": { 00:31:42.435 "name": "Nvme$subsystem", 00:31:42.435 "trtype": "$TEST_TRANSPORT", 00:31:42.435 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:42.435 "adrfam": "ipv4", 00:31:42.435 "trsvcid": "$NVMF_PORT", 00:31:42.435 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:42.435 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:42.435 "hdgst": ${hdgst:-false}, 00:31:42.435 "ddgst": ${ddgst:-false} 00:31:42.435 }, 00:31:42.435 "method": "bdev_nvme_attach_controller" 00:31:42.435 } 00:31:42.435 EOF 00:31:42.435 )") 00:31:42.435 01:52:55 -- nvmf/common.sh@542 -- # cat 00:31:42.436 01:52:55 -- target/dif.sh@72 -- # (( file++ )) 00:31:42.436 01:52:55 -- target/dif.sh@72 -- # (( file <= files )) 00:31:42.436 01:52:55 -- nvmf/common.sh@544 -- # jq . 00:31:42.436 01:52:55 -- nvmf/common.sh@545 -- # IFS=, 00:31:42.436 01:52:55 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:31:42.436 "params": { 00:31:42.436 "name": "Nvme0", 00:31:42.436 "trtype": "tcp", 00:31:42.436 "traddr": "10.0.0.2", 00:31:42.436 "adrfam": "ipv4", 00:31:42.436 "trsvcid": "4420", 00:31:42.436 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:42.436 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:42.436 "hdgst": false, 00:31:42.436 "ddgst": false 00:31:42.436 }, 00:31:42.436 "method": "bdev_nvme_attach_controller" 00:31:42.436 },{ 00:31:42.436 "params": { 00:31:42.436 "name": "Nvme1", 00:31:42.436 "trtype": "tcp", 00:31:42.436 "traddr": "10.0.0.2", 00:31:42.436 "adrfam": "ipv4", 00:31:42.436 "trsvcid": "4420", 00:31:42.436 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:42.436 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:42.436 "hdgst": false, 00:31:42.436 "ddgst": false 00:31:42.436 }, 00:31:42.436 "method": "bdev_nvme_attach_controller" 00:31:42.436 }' 00:31:42.436 01:52:55 -- common/autotest_common.sh@1324 -- # asan_lib= 00:31:42.436 01:52:55 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:31:42.436 01:52:55 
-- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:42.436 01:52:55 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:42.436 01:52:55 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:31:42.436 01:52:55 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:42.436 01:52:55 -- common/autotest_common.sh@1324 -- # asan_lib= 00:31:42.436 01:52:55 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:31:42.436 01:52:55 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:42.436 01:52:55 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:42.695 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:31:42.695 ... 00:31:42.695 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:31:42.695 ... 00:31:42.695 fio-3.35 00:31:42.695 Starting 4 threads 00:31:42.695 EAL: No free 2048 kB hugepages reported on node 1 00:31:43.263 [2024-07-23 01:52:56.193755] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:31:43.263 [2024-07-23 01:52:56.193863] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:31:48.550 00:31:48.550 filename0: (groupid=0, jobs=1): err= 0: pid=3923178: Tue Jul 23 01:53:01 2024 00:31:48.550 read: IOPS=1923, BW=15.0MiB/s (15.8MB/s)(75.2MiB/5002msec) 00:31:48.550 slat (nsec): min=7104, max=44946, avg=11605.84, stdev=5025.05 00:31:48.550 clat (usec): min=1895, max=45728, avg=4123.64, stdev=1349.81 00:31:48.550 lat (usec): min=1912, max=45756, avg=4135.25, stdev=1349.69 00:31:48.550 clat percentiles (usec): 00:31:48.550 | 1.00th=[ 2999], 5.00th=[ 3425], 10.00th=[ 3589], 20.00th=[ 3752], 00:31:48.550 | 30.00th=[ 3851], 40.00th=[ 3916], 50.00th=[ 3982], 60.00th=[ 4015], 00:31:48.550 | 70.00th=[ 4047], 80.00th=[ 4178], 90.00th=[ 4883], 95.00th=[ 5800], 00:31:48.550 | 99.00th=[ 6194], 99.50th=[ 6390], 99.90th=[ 7177], 99.95th=[45876], 00:31:48.550 | 99.99th=[45876] 00:31:48.550 bw ( KiB/s): min=13995, max=15984, per=24.41%, avg=15386.70, stdev=567.55, samples=10 00:31:48.550 iops : min= 1749, max= 1998, avg=1923.30, stdev=71.05, samples=10 00:31:48.550 lat (msec) : 2=0.02%, 4=56.99%, 10=42.91%, 50=0.08% 00:31:48.550 cpu : usr=95.38%, sys=4.14%, ctx=9, majf=0, minf=0 00:31:48.550 IO depths : 1=0.2%, 2=3.4%, 4=67.8%, 8=28.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:48.550 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.550 complete : 0=0.0%, 4=93.6%, 8=6.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.550 issued rwts: total=9620,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.550 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:48.550 filename0: (groupid=0, jobs=1): err= 0: pid=3923179: Tue Jul 23 01:53:01 2024 00:31:48.550 read: IOPS=1975, BW=15.4MiB/s (16.2MB/s)(77.2MiB/5001msec) 00:31:48.550 slat (nsec): min=7093, max=50190, avg=11885.37, stdev=5307.45 00:31:48.550 clat (usec): min=1318, max=6978, avg=4015.50, stdev=610.02 00:31:48.550 lat (usec): min=1336, max=6999, 
avg=4027.39, stdev=609.16 00:31:48.550 clat percentiles (usec): 00:31:48.550 | 1.00th=[ 2540], 5.00th=[ 3261], 10.00th=[ 3523], 20.00th=[ 3720], 00:31:48.550 | 30.00th=[ 3818], 40.00th=[ 3884], 50.00th=[ 3949], 60.00th=[ 4015], 00:31:48.550 | 70.00th=[ 4047], 80.00th=[ 4113], 90.00th=[ 4555], 95.00th=[ 5669], 00:31:48.550 | 99.00th=[ 6063], 99.50th=[ 6128], 99.90th=[ 6587], 99.95th=[ 6783], 00:31:48.550 | 99.99th=[ 6980] 00:31:48.550 bw ( KiB/s): min=15232, max=16400, per=25.04%, avg=15781.33, stdev=317.19, samples=9 00:31:48.550 iops : min= 1904, max= 2050, avg=1972.67, stdev=39.65, samples=9 00:31:48.550 lat (msec) : 2=0.06%, 4=59.71%, 10=40.23% 00:31:48.550 cpu : usr=95.50%, sys=4.02%, ctx=10, majf=0, minf=9 00:31:48.550 IO depths : 1=0.1%, 2=1.6%, 4=67.7%, 8=30.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:48.550 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.550 complete : 0=0.0%, 4=95.1%, 8=4.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.550 issued rwts: total=9880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.550 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:48.550 filename1: (groupid=0, jobs=1): err= 0: pid=3923180: Tue Jul 23 01:53:01 2024 00:31:48.550 read: IOPS=1983, BW=15.5MiB/s (16.3MB/s)(77.5MiB/5003msec) 00:31:48.550 slat (nsec): min=7125, max=54293, avg=11387.38, stdev=5370.53 00:31:48.550 clat (usec): min=1014, max=7607, avg=3996.09, stdev=529.40 00:31:48.550 lat (usec): min=1031, max=7628, avg=4007.48, stdev=528.84 00:31:48.550 clat percentiles (usec): 00:31:48.550 | 1.00th=[ 2507], 5.00th=[ 3359], 10.00th=[ 3556], 20.00th=[ 3720], 00:31:48.550 | 30.00th=[ 3818], 40.00th=[ 3916], 50.00th=[ 3982], 60.00th=[ 4015], 00:31:48.550 | 70.00th=[ 4047], 80.00th=[ 4146], 90.00th=[ 4424], 95.00th=[ 4948], 00:31:48.550 | 99.00th=[ 6128], 99.50th=[ 6456], 99.90th=[ 6980], 99.95th=[ 7046], 00:31:48.550 | 99.99th=[ 7635] 00:31:48.550 bw ( KiB/s): min=15584, max=16256, per=25.18%, avg=15867.20, stdev=186.75, 
samples=10 00:31:48.550 iops : min= 1948, max= 2032, avg=1983.40, stdev=23.34, samples=10 00:31:48.550 lat (msec) : 2=0.08%, 4=56.62%, 10=43.29% 00:31:48.550 cpu : usr=94.70%, sys=4.72%, ctx=8, majf=0, minf=0 00:31:48.550 IO depths : 1=0.1%, 2=2.4%, 4=70.8%, 8=26.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:48.550 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.550 complete : 0=0.0%, 4=91.6%, 8=8.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.550 issued rwts: total=9925,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.550 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:48.550 filename1: (groupid=0, jobs=1): err= 0: pid=3923181: Tue Jul 23 01:53:01 2024 00:31:48.551 read: IOPS=1996, BW=15.6MiB/s (16.4MB/s)(78.0MiB/5003msec) 00:31:48.551 slat (nsec): min=3603, max=68272, avg=15042.96, stdev=7539.08 00:31:48.551 clat (usec): min=1149, max=45232, avg=3958.64, stdev=1267.28 00:31:48.551 lat (usec): min=1178, max=45247, avg=3973.68, stdev=1266.98 00:31:48.551 clat percentiles (usec): 00:31:48.551 | 1.00th=[ 2704], 5.00th=[ 3195], 10.00th=[ 3425], 20.00th=[ 3654], 00:31:48.551 | 30.00th=[ 3752], 40.00th=[ 3884], 50.00th=[ 3949], 60.00th=[ 3982], 00:31:48.551 | 70.00th=[ 4015], 80.00th=[ 4080], 90.00th=[ 4293], 95.00th=[ 4817], 00:31:48.551 | 99.00th=[ 5735], 99.50th=[ 6128], 99.90th=[ 7308], 99.95th=[45351], 00:31:48.551 | 99.99th=[45351] 00:31:48.551 bw ( KiB/s): min=14432, max=16512, per=25.34%, avg=15972.80, stdev=577.53, samples=10 00:31:48.551 iops : min= 1804, max= 2064, avg=1996.60, stdev=72.19, samples=10 00:31:48.551 lat (msec) : 2=0.08%, 4=64.35%, 10=35.49%, 50=0.08% 00:31:48.551 cpu : usr=90.28%, sys=6.68%, ctx=314, majf=0, minf=0 00:31:48.551 IO depths : 1=0.1%, 2=2.5%, 4=70.4%, 8=27.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:48.551 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.551 complete : 0=0.0%, 4=91.8%, 8=8.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.551 issued rwts: 
total=9988,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.551 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:48.551 00:31:48.551 Run status group 0 (all jobs): 00:31:48.551 READ: bw=61.5MiB/s (64.5MB/s), 15.0MiB/s-15.6MiB/s (15.8MB/s-16.4MB/s), io=308MiB (323MB), run=5001-5003msec 00:31:48.551 01:53:01 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:31:48.551 01:53:01 -- target/dif.sh@43 -- # local sub 00:31:48.551 01:53:01 -- target/dif.sh@45 -- # for sub in "$@" 00:31:48.551 01:53:01 -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:48.551 01:53:01 -- target/dif.sh@36 -- # local sub_id=0 00:31:48.551 01:53:01 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:48.551 01:53:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:48.551 01:53:01 -- common/autotest_common.sh@10 -- # set +x 00:31:48.551 01:53:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:48.551 01:53:01 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:48.551 01:53:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:48.551 01:53:01 -- common/autotest_common.sh@10 -- # set +x 00:31:48.551 01:53:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:48.551 01:53:01 -- target/dif.sh@45 -- # for sub in "$@" 00:31:48.551 01:53:01 -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:48.551 01:53:01 -- target/dif.sh@36 -- # local sub_id=1 00:31:48.551 01:53:01 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:48.551 01:53:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:48.551 01:53:01 -- common/autotest_common.sh@10 -- # set +x 00:31:48.551 01:53:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:48.551 01:53:01 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:48.551 01:53:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:48.551 01:53:01 -- common/autotest_common.sh@10 -- # set +x 00:31:48.551 01:53:01 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:48.551 00:31:48.551 real 0m24.323s 00:31:48.551 user 4m31.004s 00:31:48.551 sys 0m7.697s 00:31:48.551 01:53:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:48.551 01:53:01 -- common/autotest_common.sh@10 -- # set +x 00:31:48.551 ************************************ 00:31:48.551 END TEST fio_dif_rand_params 00:31:48.551 ************************************ 00:31:48.551 01:53:01 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:31:48.551 01:53:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:48.551 01:53:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:48.551 01:53:01 -- common/autotest_common.sh@10 -- # set +x 00:31:48.551 ************************************ 00:31:48.551 START TEST fio_dif_digest 00:31:48.551 ************************************ 00:31:48.551 01:53:01 -- common/autotest_common.sh@1104 -- # fio_dif_digest 00:31:48.551 01:53:01 -- target/dif.sh@123 -- # local NULL_DIF 00:31:48.551 01:53:01 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:31:48.551 01:53:01 -- target/dif.sh@125 -- # local hdgst ddgst 00:31:48.551 01:53:01 -- target/dif.sh@127 -- # NULL_DIF=3 00:31:48.551 01:53:01 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:31:48.551 01:53:01 -- target/dif.sh@127 -- # numjobs=3 00:31:48.551 01:53:01 -- target/dif.sh@127 -- # iodepth=3 00:31:48.551 01:53:01 -- target/dif.sh@127 -- # runtime=10 00:31:48.551 01:53:01 -- target/dif.sh@128 -- # hdgst=true 00:31:48.551 01:53:01 -- target/dif.sh@128 -- # ddgst=true 00:31:48.551 01:53:01 -- target/dif.sh@130 -- # create_subsystems 0 00:31:48.551 01:53:01 -- target/dif.sh@28 -- # local sub 00:31:48.551 01:53:01 -- target/dif.sh@30 -- # for sub in "$@" 00:31:48.551 01:53:01 -- target/dif.sh@31 -- # create_subsystem 0 00:31:48.551 01:53:01 -- target/dif.sh@18 -- # local sub_id=0 00:31:48.551 01:53:01 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 
16 --dif-type 3 00:31:48.551 01:53:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:48.551 01:53:01 -- common/autotest_common.sh@10 -- # set +x 00:31:48.551 bdev_null0 00:31:48.551 01:53:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:48.551 01:53:01 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:48.551 01:53:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:48.551 01:53:01 -- common/autotest_common.sh@10 -- # set +x 00:31:48.551 01:53:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:48.551 01:53:01 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:48.551 01:53:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:48.551 01:53:01 -- common/autotest_common.sh@10 -- # set +x 00:31:48.551 01:53:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:48.551 01:53:01 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:48.551 01:53:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:48.551 01:53:01 -- common/autotest_common.sh@10 -- # set +x 00:31:48.551 [2024-07-23 01:53:01.590354] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:48.551 01:53:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:48.551 01:53:01 -- target/dif.sh@131 -- # fio /dev/fd/62 00:31:48.551 01:53:01 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:31:48.551 01:53:01 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:48.551 01:53:01 -- nvmf/common.sh@520 -- # config=() 00:31:48.551 01:53:01 -- nvmf/common.sh@520 -- # local subsystem config 00:31:48.551 01:53:01 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:48.551 01:53:01 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:48.551 01:53:01 -- 
nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:48.551 { 00:31:48.551 "params": { 00:31:48.551 "name": "Nvme$subsystem", 00:31:48.551 "trtype": "$TEST_TRANSPORT", 00:31:48.551 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:48.551 "adrfam": "ipv4", 00:31:48.551 "trsvcid": "$NVMF_PORT", 00:31:48.551 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:48.551 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:48.551 "hdgst": ${hdgst:-false}, 00:31:48.551 "ddgst": ${ddgst:-false} 00:31:48.551 }, 00:31:48.551 "method": "bdev_nvme_attach_controller" 00:31:48.551 } 00:31:48.551 EOF 00:31:48.551 )") 00:31:48.551 01:53:01 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:48.551 01:53:01 -- target/dif.sh@82 -- # gen_fio_conf 00:31:48.551 01:53:01 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:31:48.551 01:53:01 -- target/dif.sh@54 -- # local file 00:31:48.551 01:53:01 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:48.551 01:53:01 -- target/dif.sh@56 -- # cat 00:31:48.551 01:53:01 -- common/autotest_common.sh@1318 -- # local sanitizers 00:31:48.551 01:53:01 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:48.551 01:53:01 -- common/autotest_common.sh@1320 -- # shift 00:31:48.551 01:53:01 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:31:48.551 01:53:01 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:48.551 01:53:01 -- nvmf/common.sh@542 -- # cat 00:31:48.551 01:53:01 -- target/dif.sh@72 -- # (( file = 1 )) 00:31:48.551 01:53:01 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:48.551 01:53:01 -- target/dif.sh@72 -- # (( file <= files )) 00:31:48.551 01:53:01 -- 
common/autotest_common.sh@1324 -- # grep libasan 00:31:48.551 01:53:01 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:48.551 01:53:01 -- nvmf/common.sh@544 -- # jq . 00:31:48.551 01:53:01 -- nvmf/common.sh@545 -- # IFS=, 00:31:48.551 01:53:01 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:31:48.551 "params": { 00:31:48.551 "name": "Nvme0", 00:31:48.551 "trtype": "tcp", 00:31:48.551 "traddr": "10.0.0.2", 00:31:48.551 "adrfam": "ipv4", 00:31:48.551 "trsvcid": "4420", 00:31:48.551 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:48.551 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:48.551 "hdgst": true, 00:31:48.551 "ddgst": true 00:31:48.551 }, 00:31:48.551 "method": "bdev_nvme_attach_controller" 00:31:48.551 }' 00:31:48.551 01:53:01 -- common/autotest_common.sh@1324 -- # asan_lib= 00:31:48.551 01:53:01 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:31:48.551 01:53:01 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:48.551 01:53:01 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:48.551 01:53:01 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:31:48.551 01:53:01 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:48.551 01:53:01 -- common/autotest_common.sh@1324 -- # asan_lib= 00:31:48.551 01:53:01 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:31:48.552 01:53:01 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:48.552 01:53:01 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:48.812 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:31:48.812 ... 
00:31:48.812 fio-3.35 00:31:48.812 Starting 3 threads 00:31:48.812 EAL: No free 2048 kB hugepages reported on node 1 00:31:49.379 [2024-07-23 01:53:02.219005] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:31:49.379 [2024-07-23 01:53:02.219096] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:31:59.361 00:31:59.361 filename0: (groupid=0, jobs=1): err= 0: pid=3923949: Tue Jul 23 01:53:12 2024 00:31:59.361 read: IOPS=184, BW=23.1MiB/s (24.2MB/s)(232MiB/10045msec) 00:31:59.361 slat (nsec): min=5929, max=50808, avg=15393.07, stdev=5072.10 00:31:59.361 clat (usec): min=9133, max=59408, avg=16185.47, stdev=3554.67 00:31:59.361 lat (usec): min=9146, max=59425, avg=16200.86, stdev=3554.91 00:31:59.361 clat percentiles (usec): 00:31:59.361 | 1.00th=[10028], 5.00th=[11731], 10.00th=[13960], 20.00th=[14877], 00:31:59.361 | 30.00th=[15401], 40.00th=[15795], 50.00th=[16057], 60.00th=[16450], 00:31:59.361 | 70.00th=[16909], 80.00th=[17433], 90.00th=[17957], 95.00th=[18482], 00:31:59.361 | 99.00th=[19792], 99.50th=[53740], 99.90th=[56886], 99.95th=[59507], 00:31:59.361 | 99.99th=[59507] 00:31:59.361 bw ( KiB/s): min=21248, max=26368, per=31.49%, avg=23744.00, stdev=1132.37, samples=20 00:31:59.361 iops : min= 166, max= 206, avg=185.50, stdev= 8.85, samples=20 00:31:59.361 lat (msec) : 10=0.92%, 20=98.12%, 50=0.43%, 100=0.54% 00:31:59.361 cpu : usr=92.58%, sys=6.73%, ctx=26, majf=0, minf=169 00:31:59.361 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:59.361 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.361 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.361 issued rwts: total=1857,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:59.361 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:59.361 filename0: (groupid=0, jobs=1): err= 0: pid=3923950: Tue Jul 23 01:53:12 
2024 00:31:59.361 read: IOPS=196, BW=24.6MiB/s (25.8MB/s)(247MiB/10046msec) 00:31:59.361 slat (usec): min=7, max=157, avg=15.51, stdev= 5.79 00:31:59.361 clat (usec): min=9695, max=57757, avg=15218.68, stdev=4872.47 00:31:59.361 lat (usec): min=9712, max=57773, avg=15234.19, stdev=4872.46 00:31:59.361 clat percentiles (usec): 00:31:59.361 | 1.00th=[10421], 5.00th=[12387], 10.00th=[13173], 20.00th=[13829], 00:31:59.361 | 30.00th=[14091], 40.00th=[14484], 50.00th=[14746], 60.00th=[15008], 00:31:59.361 | 70.00th=[15270], 80.00th=[15664], 90.00th=[16319], 95.00th=[16909], 00:31:59.361 | 99.00th=[54789], 99.50th=[56361], 99.90th=[57410], 99.95th=[57934], 00:31:59.361 | 99.99th=[57934] 00:31:59.361 bw ( KiB/s): min=23296, max=27904, per=33.48%, avg=25244.10, stdev=1347.73, samples=20 00:31:59.361 iops : min= 182, max= 218, avg=197.20, stdev=10.53, samples=20 00:31:59.361 lat (msec) : 10=0.35%, 20=98.13%, 50=0.20%, 100=1.32% 00:31:59.361 cpu : usr=92.57%, sys=6.77%, ctx=17, majf=0, minf=147 00:31:59.362 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:59.362 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.362 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.362 issued rwts: total=1975,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:59.362 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:59.362 filename0: (groupid=0, jobs=1): err= 0: pid=3923951: Tue Jul 23 01:53:12 2024 00:31:59.362 read: IOPS=207, BW=25.9MiB/s (27.2MB/s)(261MiB/10046msec) 00:31:59.362 slat (nsec): min=4655, max=68605, avg=19125.56, stdev=6832.22 00:31:59.362 clat (usec): min=8486, max=57384, avg=14408.36, stdev=3691.96 00:31:59.362 lat (usec): min=8506, max=57398, avg=14427.49, stdev=3691.49 00:31:59.362 clat percentiles (usec): 00:31:59.362 | 1.00th=[ 9765], 5.00th=[11076], 10.00th=[12387], 20.00th=[13173], 00:31:59.362 | 30.00th=[13698], 40.00th=[13960], 50.00th=[14353], 60.00th=[14615], 
00:31:59.362 | 70.00th=[14877], 80.00th=[15270], 90.00th=[15795], 95.00th=[16188], 00:31:59.362 | 99.00th=[17957], 99.50th=[55313], 99.90th=[56886], 99.95th=[57410], 00:31:59.362 | 99.99th=[57410] 00:31:59.362 bw ( KiB/s): min=24064, max=28672, per=35.37%, avg=26662.40, stdev=1243.85, samples=20 00:31:59.362 iops : min= 188, max= 224, avg=208.30, stdev= 9.72, samples=20 00:31:59.362 lat (msec) : 10=1.39%, 20=97.79%, 50=0.19%, 100=0.62% 00:31:59.362 cpu : usr=90.94%, sys=7.63%, ctx=281, majf=0, minf=215 00:31:59.362 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:59.362 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.362 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.362 issued rwts: total=2085,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:59.362 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:59.362 00:31:59.362 Run status group 0 (all jobs): 00:31:59.362 READ: bw=73.6MiB/s (77.2MB/s), 23.1MiB/s-25.9MiB/s (24.2MB/s-27.2MB/s), io=740MiB (776MB), run=10045-10046msec 00:31:59.621 01:53:12 -- target/dif.sh@132 -- # destroy_subsystems 0 00:31:59.621 01:53:12 -- target/dif.sh@43 -- # local sub 00:31:59.621 01:53:12 -- target/dif.sh@45 -- # for sub in "$@" 00:31:59.621 01:53:12 -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:59.621 01:53:12 -- target/dif.sh@36 -- # local sub_id=0 00:31:59.621 01:53:12 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:59.621 01:53:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:59.621 01:53:12 -- common/autotest_common.sh@10 -- # set +x 00:31:59.621 01:53:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:59.621 01:53:12 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:59.621 01:53:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:59.621 01:53:12 -- common/autotest_common.sh@10 -- # set +x 00:31:59.621 01:53:12 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:59.621 00:31:59.621 real 0m11.062s 00:31:59.621 user 0m28.788s 00:31:59.621 sys 0m2.407s 00:31:59.621 01:53:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:59.621 01:53:12 -- common/autotest_common.sh@10 -- # set +x 00:31:59.621 ************************************ 00:31:59.621 END TEST fio_dif_digest 00:31:59.621 ************************************ 00:31:59.621 01:53:12 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:31:59.621 01:53:12 -- target/dif.sh@147 -- # nvmftestfini 00:31:59.621 01:53:12 -- nvmf/common.sh@476 -- # nvmfcleanup 00:31:59.621 01:53:12 -- nvmf/common.sh@116 -- # sync 00:31:59.621 01:53:12 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:31:59.621 01:53:12 -- nvmf/common.sh@119 -- # set +e 00:31:59.621 01:53:12 -- nvmf/common.sh@120 -- # for i in {1..20} 00:31:59.621 01:53:12 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:31:59.621 rmmod nvme_tcp 00:31:59.621 rmmod nvme_fabrics 00:31:59.621 rmmod nvme_keyring 00:31:59.621 01:53:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:31:59.621 01:53:12 -- nvmf/common.sh@123 -- # set -e 00:31:59.621 01:53:12 -- nvmf/common.sh@124 -- # return 0 00:31:59.621 01:53:12 -- nvmf/common.sh@477 -- # '[' -n 3917604 ']' 00:31:59.621 01:53:12 -- nvmf/common.sh@478 -- # killprocess 3917604 00:31:59.621 01:53:12 -- common/autotest_common.sh@926 -- # '[' -z 3917604 ']' 00:31:59.621 01:53:12 -- common/autotest_common.sh@930 -- # kill -0 3917604 00:31:59.621 01:53:12 -- common/autotest_common.sh@931 -- # uname 00:31:59.621 01:53:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:59.621 01:53:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3917604 00:31:59.879 01:53:12 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:31:59.879 01:53:12 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:31:59.879 01:53:12 -- common/autotest_common.sh@944 -- # echo 'killing 
process with pid 3917604' 00:31:59.879 killing process with pid 3917604 00:31:59.879 01:53:12 -- common/autotest_common.sh@945 -- # kill 3917604 00:31:59.879 01:53:12 -- common/autotest_common.sh@950 -- # wait 3917604 00:31:59.879 01:53:12 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:31:59.879 01:53:12 -- nvmf/common.sh@481 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:01.257 Waiting for block devices as requested 00:32:01.257 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:32:01.257 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:01.257 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:01.516 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:01.516 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:01.516 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:01.516 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:01.774 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:01.774 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:01.774 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:01.774 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:02.032 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:02.032 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:02.032 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:02.032 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:02.292 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:02.292 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:02.292 01:53:15 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:32:02.292 01:53:15 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:32:02.292 01:53:15 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:02.292 01:53:15 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:32:02.292 01:53:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:02.292 01:53:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:02.292 01:53:15 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:32:04.861 01:53:17 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:32:04.861 00:32:04.861 real 1m7.426s 00:32:04.861 user 6m28.252s 00:32:04.861 sys 0m19.309s 00:32:04.861 01:53:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:04.861 01:53:17 -- common/autotest_common.sh@10 -- # set +x 00:32:04.861 ************************************ 00:32:04.861 END TEST nvmf_dif 00:32:04.861 ************************************ 00:32:04.861 01:53:17 -- spdk/autotest.sh@301 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:32:04.861 01:53:17 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:32:04.861 01:53:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:04.861 01:53:17 -- common/autotest_common.sh@10 -- # set +x 00:32:04.861 ************************************ 00:32:04.861 START TEST nvmf_abort_qd_sizes 00:32:04.861 ************************************ 00:32:04.861 01:53:17 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:32:04.861 * Looking for test storage... 
00:32:04.861 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:04.861 01:53:17 -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:04.861 01:53:17 -- nvmf/common.sh@7 -- # uname -s 00:32:04.861 01:53:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:04.861 01:53:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:04.861 01:53:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:04.861 01:53:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:04.861 01:53:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:04.861 01:53:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:04.861 01:53:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:04.861 01:53:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:04.861 01:53:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:04.861 01:53:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:04.861 01:53:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:04.861 01:53:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:04.861 01:53:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:04.861 01:53:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:04.861 01:53:17 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:04.861 01:53:17 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:04.861 01:53:17 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:04.861 01:53:17 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:04.861 01:53:17 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:04.861 01:53:17 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:04.861 01:53:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:04.861 01:53:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:04.861 01:53:17 -- paths/export.sh@5 -- # export PATH 00:32:04.861 01:53:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:04.861 01:53:17 -- nvmf/common.sh@46 -- # : 0 00:32:04.861 01:53:17 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:32:04.861 01:53:17 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:32:04.861 
01:53:17 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:32:04.861 01:53:17 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:04.861 01:53:17 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:04.861 01:53:17 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:32:04.861 01:53:17 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:32:04.861 01:53:17 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:32:04.861 01:53:17 -- target/abort_qd_sizes.sh@73 -- # nvmftestinit 00:32:04.861 01:53:17 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:32:04.861 01:53:17 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:04.861 01:53:17 -- nvmf/common.sh@436 -- # prepare_net_devs 00:32:04.861 01:53:17 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:32:04.861 01:53:17 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:32:04.861 01:53:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:04.861 01:53:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:04.861 01:53:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:04.861 01:53:17 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:32:04.861 01:53:17 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:32:04.861 01:53:17 -- nvmf/common.sh@284 -- # xtrace_disable 00:32:04.861 01:53:17 -- common/autotest_common.sh@10 -- # set +x 00:32:06.235 01:53:19 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:32:06.235 01:53:19 -- nvmf/common.sh@290 -- # pci_devs=() 00:32:06.235 01:53:19 -- nvmf/common.sh@290 -- # local -a pci_devs 00:32:06.235 01:53:19 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:32:06.235 01:53:19 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:32:06.235 01:53:19 -- nvmf/common.sh@292 -- # pci_drivers=() 00:32:06.235 01:53:19 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:32:06.235 01:53:19 -- nvmf/common.sh@294 -- # net_devs=() 00:32:06.235 01:53:19 -- nvmf/common.sh@294 -- # local -ga net_devs 00:32:06.235 
01:53:19 -- nvmf/common.sh@295 -- # e810=() 00:32:06.235 01:53:19 -- nvmf/common.sh@295 -- # local -ga e810 00:32:06.235 01:53:19 -- nvmf/common.sh@296 -- # x722=() 00:32:06.235 01:53:19 -- nvmf/common.sh@296 -- # local -ga x722 00:32:06.235 01:53:19 -- nvmf/common.sh@297 -- # mlx=() 00:32:06.235 01:53:19 -- nvmf/common.sh@297 -- # local -ga mlx 00:32:06.235 01:53:19 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:06.235 01:53:19 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:06.235 01:53:19 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:06.235 01:53:19 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:06.235 01:53:19 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:06.235 01:53:19 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:06.235 01:53:19 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:06.235 01:53:19 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:06.235 01:53:19 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:06.235 01:53:19 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:06.235 01:53:19 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:06.235 01:53:19 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:32:06.235 01:53:19 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:32:06.235 01:53:19 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:32:06.235 01:53:19 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:32:06.235 01:53:19 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:32:06.235 01:53:19 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:32:06.235 01:53:19 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:32:06.235 01:53:19 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:06.235 Found 0000:0a:00.0 (0x8086 - 0x159b) 
00:32:06.235 01:53:19 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:32:06.235 01:53:19 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:32:06.235 01:53:19 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:06.235 01:53:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:06.235 01:53:19 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:32:06.235 01:53:19 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:32:06.235 01:53:19 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:06.235 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:06.235 01:53:19 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:32:06.235 01:53:19 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:32:06.235 01:53:19 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:06.235 01:53:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:06.235 01:53:19 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:32:06.235 01:53:19 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:32:06.235 01:53:19 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:32:06.235 01:53:19 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:32:06.235 01:53:19 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:32:06.494 01:53:19 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:06.495 01:53:19 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:32:06.495 01:53:19 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:06.495 01:53:19 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:06.495 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:06.495 01:53:19 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:32:06.495 01:53:19 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:32:06.495 01:53:19 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:06.495 01:53:19 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:32:06.495 01:53:19 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:06.495 01:53:19 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:06.495 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:06.495 01:53:19 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:32:06.495 01:53:19 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:32:06.495 01:53:19 -- nvmf/common.sh@402 -- # is_hw=yes 00:32:06.495 01:53:19 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:32:06.495 01:53:19 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:32:06.495 01:53:19 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:32:06.495 01:53:19 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:06.495 01:53:19 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:06.495 01:53:19 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:06.495 01:53:19 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:32:06.495 01:53:19 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:06.495 01:53:19 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:06.495 01:53:19 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:32:06.495 01:53:19 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:06.495 01:53:19 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:06.495 01:53:19 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:32:06.495 01:53:19 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:32:06.495 01:53:19 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:32:06.495 01:53:19 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:06.495 01:53:19 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:06.495 01:53:19 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:06.495 01:53:19 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:32:06.495 01:53:19 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk 
ip link set cvl_0_0 up 00:32:06.495 01:53:19 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:06.495 01:53:19 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:06.495 01:53:19 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:32:06.495 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:06.495 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.129 ms 00:32:06.495 00:32:06.495 --- 10.0.0.2 ping statistics --- 00:32:06.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:06.495 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:32:06.495 01:53:19 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:06.495 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:06.495 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:32:06.495 00:32:06.495 --- 10.0.0.1 ping statistics --- 00:32:06.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:06.495 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:32:06.495 01:53:19 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:06.495 01:53:19 -- nvmf/common.sh@410 -- # return 0 00:32:06.495 01:53:19 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:32:06.495 01:53:19 -- nvmf/common.sh@439 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:07.868 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:07.868 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:07.868 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:07.868 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:07.868 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:07.868 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:07.868 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:07.868 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:07.868 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:07.868 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:07.868 
0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:07.868 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:07.868 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:07.868 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:07.868 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:07.868 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:08.804 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:32:08.804 01:53:21 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:08.804 01:53:21 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:32:08.804 01:53:21 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:32:08.804 01:53:21 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:08.804 01:53:21 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:32:08.804 01:53:21 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:32:08.804 01:53:21 -- target/abort_qd_sizes.sh@74 -- # nvmfappstart -m 0xf 00:32:08.804 01:53:21 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:32:08.804 01:53:21 -- common/autotest_common.sh@712 -- # xtrace_disable 00:32:08.804 01:53:21 -- common/autotest_common.sh@10 -- # set +x 00:32:08.804 01:53:21 -- nvmf/common.sh@469 -- # nvmfpid=3928842 00:32:08.804 01:53:21 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:32:08.804 01:53:21 -- nvmf/common.sh@470 -- # waitforlisten 3928842 00:32:08.804 01:53:21 -- common/autotest_common.sh@819 -- # '[' -z 3928842 ']' 00:32:08.804 01:53:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:08.804 01:53:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:08.804 01:53:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:08.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:32:08.804 01:53:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:08.804 01:53:21 -- common/autotest_common.sh@10 -- # set +x 00:32:08.804 [2024-07-23 01:53:21.877492] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:32:08.804 [2024-07-23 01:53:21.877563] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:09.062 EAL: No free 2048 kB hugepages reported on node 1 00:32:09.062 [2024-07-23 01:53:21.942945] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:09.062 [2024-07-23 01:53:22.029834] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:32:09.062 [2024-07-23 01:53:22.029985] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:09.062 [2024-07-23 01:53:22.030002] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:09.062 [2024-07-23 01:53:22.030013] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:32:09.062 [2024-07-23 01:53:22.030064] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:09.062 [2024-07-23 01:53:22.030124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:32:09.062 [2024-07-23 01:53:22.030190] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:32:09.062 [2024-07-23 01:53:22.030192] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:09.994 01:53:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:09.994 01:53:22 -- common/autotest_common.sh@852 -- # return 0 00:32:09.994 01:53:22 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:32:09.994 01:53:22 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:09.994 01:53:22 -- common/autotest_common.sh@10 -- # set +x 00:32:09.994 01:53:22 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:09.994 01:53:22 -- target/abort_qd_sizes.sh@76 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:32:09.994 01:53:22 -- target/abort_qd_sizes.sh@78 -- # mapfile -t nvmes 00:32:09.994 01:53:22 -- target/abort_qd_sizes.sh@78 -- # nvme_in_userspace 00:32:09.994 01:53:22 -- scripts/common.sh@311 -- # local bdf bdfs 00:32:09.994 01:53:22 -- scripts/common.sh@312 -- # local nvmes 00:32:09.994 01:53:22 -- scripts/common.sh@314 -- # [[ -n 0000:88:00.0 ]] 00:32:09.994 01:53:22 -- scripts/common.sh@315 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:32:09.994 01:53:22 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:32:09.994 01:53:22 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 00:32:09.994 01:53:22 -- scripts/common.sh@322 -- # uname -s 00:32:09.994 01:53:22 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:32:09.994 01:53:22 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:32:09.994 01:53:22 -- scripts/common.sh@327 -- # (( 1 )) 00:32:09.994 01:53:22 -- 
scripts/common.sh@328 -- # printf '%s\n' 0000:88:00.0 00:32:09.994 01:53:22 -- target/abort_qd_sizes.sh@79 -- # (( 1 > 0 )) 00:32:09.994 01:53:22 -- target/abort_qd_sizes.sh@81 -- # nvme=0000:88:00.0 00:32:09.994 01:53:22 -- target/abort_qd_sizes.sh@83 -- # run_test spdk_target_abort spdk_target 00:32:09.994 01:53:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:32:09.994 01:53:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:09.994 01:53:22 -- common/autotest_common.sh@10 -- # set +x 00:32:09.994 ************************************ 00:32:09.994 START TEST spdk_target_abort 00:32:09.994 ************************************ 00:32:09.994 01:53:22 -- common/autotest_common.sh@1104 -- # spdk_target 00:32:09.994 01:53:22 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:32:09.994 01:53:22 -- target/abort_qd_sizes.sh@44 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:32:09.994 01:53:22 -- target/abort_qd_sizes.sh@46 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:32:09.995 01:53:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:09.995 01:53:22 -- common/autotest_common.sh@10 -- # set +x 00:32:13.269 spdk_targetn1 00:32:13.269 01:53:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:13.269 01:53:25 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:13.269 01:53:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:13.269 01:53:25 -- common/autotest_common.sh@10 -- # set +x 00:32:13.269 [2024-07-23 01:53:25.680534] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:13.269 01:53:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:13.269 01:53:25 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME 00:32:13.269 01:53:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:13.269 01:53:25 -- common/autotest_common.sh@10 -- # 
set +x 00:32:13.269 01:53:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:13.269 01:53:25 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1 00:32:13.269 01:53:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:13.269 01:53:25 -- common/autotest_common.sh@10 -- # set +x 00:32:13.269 01:53:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:13.269 01:53:25 -- target/abort_qd_sizes.sh@51 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420 00:32:13.269 01:53:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:13.269 01:53:25 -- common/autotest_common.sh@10 -- # set +x 00:32:13.269 [2024-07-23 01:53:25.712851] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:13.269 01:53:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:13.269 01:53:25 -- target/abort_qd_sizes.sh@53 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:spdk_target 00:32:13.269 01:53:25 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:32:13.269 01:53:25 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:32:13.269 01:53:25 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:32:13.269 01:53:25 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:32:13.269 01:53:25 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:32:13.269 01:53:25 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:32:13.269 01:53:25 -- target/abort_qd_sizes.sh@24 -- # local target r 00:32:13.269 01:53:25 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:32:13.269 01:53:25 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:13.269 01:53:25 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:32:13.269 01:53:25 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:13.269 01:53:25 -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:32:13.269 01:53:25 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:13.269 01:53:25 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:32:13.269 01:53:25 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:13.269 01:53:25 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:13.269 01:53:25 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:13.269 01:53:25 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:32:13.269 01:53:25 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:13.270 01:53:25 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:32:13.270 EAL: No free 2048 kB hugepages reported on node 1 00:32:16.549 Initializing NVMe Controllers 00:32:16.549 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:32:16.549 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:32:16.549 Initialization complete. Launching workers. 
00:32:16.549 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 10698, failed: 0 00:32:16.549 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1289, failed to submit 9409 00:32:16.549 success 809, unsuccess 480, failed 0 00:32:16.549 01:53:29 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:16.549 01:53:29 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:32:16.549 EAL: No free 2048 kB hugepages reported on node 1 00:32:19.829 Initializing NVMe Controllers 00:32:19.829 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:32:19.829 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:32:19.829 Initialization complete. Launching workers. 00:32:19.829 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 8583, failed: 0 00:32:19.829 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1229, failed to submit 7354 00:32:19.829 success 353, unsuccess 876, failed 0 00:32:19.829 01:53:32 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:19.829 01:53:32 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:32:19.829 EAL: No free 2048 kB hugepages reported on node 1 00:32:23.107 Initializing NVMe Controllers 00:32:23.107 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:32:23.107 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:32:23.107 Initialization complete. Launching workers. 
00:32:23.107 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 32046, failed: 0 00:32:23.107 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 2724, failed to submit 29322 00:32:23.107 success 547, unsuccess 2177, failed 0 00:32:23.107 01:53:35 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:spdk_target 00:32:23.107 01:53:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:23.107 01:53:35 -- common/autotest_common.sh@10 -- # set +x 00:32:23.107 01:53:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:23.107 01:53:35 -- target/abort_qd_sizes.sh@56 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:32:23.107 01:53:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:23.107 01:53:35 -- common/autotest_common.sh@10 -- # set +x 00:32:24.040 01:53:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:24.040 01:53:36 -- target/abort_qd_sizes.sh@62 -- # killprocess 3928842 00:32:24.040 01:53:36 -- common/autotest_common.sh@926 -- # '[' -z 3928842 ']' 00:32:24.040 01:53:36 -- common/autotest_common.sh@930 -- # kill -0 3928842 00:32:24.040 01:53:36 -- common/autotest_common.sh@931 -- # uname 00:32:24.040 01:53:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:24.040 01:53:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3928842 00:32:24.040 01:53:36 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:32:24.040 01:53:36 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:32:24.040 01:53:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3928842' 00:32:24.040 killing process with pid 3928842 00:32:24.040 01:53:36 -- common/autotest_common.sh@945 -- # kill 3928842 00:32:24.040 01:53:36 -- common/autotest_common.sh@950 -- # wait 3928842 00:32:24.040 00:32:24.040 real 0m14.233s 00:32:24.040 user 0m56.248s 00:32:24.040 sys 0m2.765s 00:32:24.040 01:53:37 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:32:24.040 01:53:37 -- common/autotest_common.sh@10 -- # set +x 00:32:24.040 ************************************ 00:32:24.040 END TEST spdk_target_abort 00:32:24.040 ************************************ 00:32:24.040 01:53:37 -- target/abort_qd_sizes.sh@84 -- # run_test kernel_target_abort kernel_target 00:32:24.040 01:53:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:32:24.040 01:53:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:24.040 01:53:37 -- common/autotest_common.sh@10 -- # set +x 00:32:24.040 ************************************ 00:32:24.040 START TEST kernel_target_abort 00:32:24.040 ************************************ 00:32:24.040 01:53:37 -- common/autotest_common.sh@1104 -- # kernel_target 00:32:24.040 01:53:37 -- target/abort_qd_sizes.sh@66 -- # local name=kernel_target 00:32:24.040 01:53:37 -- target/abort_qd_sizes.sh@68 -- # configure_kernel_target kernel_target 00:32:24.040 01:53:37 -- nvmf/common.sh@621 -- # kernel_name=kernel_target 00:32:24.040 01:53:37 -- nvmf/common.sh@622 -- # nvmet=/sys/kernel/config/nvmet 00:32:24.040 01:53:37 -- nvmf/common.sh@623 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/kernel_target 00:32:24.040 01:53:37 -- nvmf/common.sh@624 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:32:24.040 01:53:37 -- nvmf/common.sh@625 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:24.040 01:53:37 -- nvmf/common.sh@627 -- # local block nvme 00:32:24.040 01:53:37 -- nvmf/common.sh@629 -- # [[ ! 
-e /sys/module/nvmet ]] 00:32:24.040 01:53:37 -- nvmf/common.sh@630 -- # modprobe nvmet 00:32:24.040 01:53:37 -- nvmf/common.sh@633 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:24.040 01:53:37 -- nvmf/common.sh@635 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:25.414 Waiting for block devices as requested 00:32:25.414 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:32:25.414 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:25.414 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:25.414 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:25.672 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:25.672 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:25.672 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:25.672 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:25.930 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:25.930 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:25.930 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:25.930 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:26.189 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:26.189 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:26.189 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:26.478 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:26.478 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:26.478 01:53:39 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:32:26.478 01:53:39 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:26.478 01:53:39 -- nvmf/common.sh@640 -- # block_in_use nvme0n1 00:32:26.478 01:53:39 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:32:26.478 01:53:39 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:26.478 No valid GPT data, bailing 00:32:26.478 01:53:39 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:26.741 01:53:39 -- scripts/common.sh@393 -- # pt= 00:32:26.741 01:53:39 -- 
scripts/common.sh@394 -- # return 1 00:32:26.741 01:53:39 -- nvmf/common.sh@640 -- # nvme=/dev/nvme0n1 00:32:26.741 01:53:39 -- nvmf/common.sh@643 -- # [[ -b /dev/nvme0n1 ]] 00:32:26.741 01:53:39 -- nvmf/common.sh@645 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:32:26.741 01:53:39 -- nvmf/common.sh@646 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:32:26.741 01:53:39 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:26.741 01:53:39 -- nvmf/common.sh@652 -- # echo SPDK-kernel_target 00:32:26.741 01:53:39 -- nvmf/common.sh@654 -- # echo 1 00:32:26.741 01:53:39 -- nvmf/common.sh@655 -- # echo /dev/nvme0n1 00:32:26.741 01:53:39 -- nvmf/common.sh@656 -- # echo 1 00:32:26.741 01:53:39 -- nvmf/common.sh@662 -- # echo 10.0.0.1 00:32:26.741 01:53:39 -- nvmf/common.sh@663 -- # echo tcp 00:32:26.741 01:53:39 -- nvmf/common.sh@664 -- # echo 4420 00:32:26.741 01:53:39 -- nvmf/common.sh@665 -- # echo ipv4 00:32:26.741 01:53:39 -- nvmf/common.sh@668 -- # ln -s /sys/kernel/config/nvmet/subsystems/kernel_target /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:26.741 01:53:39 -- nvmf/common.sh@671 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:32:26.741 00:32:26.741 Discovery Log Number of Records 2, Generation counter 2 00:32:26.741 =====Discovery Log Entry 0====== 00:32:26.741 trtype: tcp 00:32:26.741 adrfam: ipv4 00:32:26.741 subtype: current discovery subsystem 00:32:26.741 treq: not specified, sq flow control disable supported 00:32:26.741 portid: 1 00:32:26.741 trsvcid: 4420 00:32:26.741 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:26.741 traddr: 10.0.0.1 00:32:26.741 eflags: none 00:32:26.741 sectype: none 00:32:26.741 =====Discovery Log Entry 1====== 00:32:26.741 trtype: tcp 00:32:26.741 adrfam: ipv4 00:32:26.741 subtype: nvme subsystem 00:32:26.741 treq: not specified, sq 
flow control disable supported 00:32:26.741 portid: 1 00:32:26.741 trsvcid: 4420 00:32:26.741 subnqn: kernel_target 00:32:26.741 traddr: 10.0.0.1 00:32:26.741 eflags: none 00:32:26.741 sectype: none 00:32:26.741 01:53:39 -- target/abort_qd_sizes.sh@69 -- # rabort tcp IPv4 10.0.0.1 4420 kernel_target 00:32:26.741 01:53:39 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:32:26.741 01:53:39 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:32:26.741 01:53:39 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:32:26.741 01:53:39 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:32:26.741 01:53:39 -- target/abort_qd_sizes.sh@21 -- # local subnqn=kernel_target 00:32:26.741 01:53:39 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:32:26.741 01:53:39 -- target/abort_qd_sizes.sh@24 -- # local target r 00:32:26.741 01:53:39 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:32:26.741 01:53:39 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:26.741 01:53:39 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:32:26.741 01:53:39 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:26.741 01:53:39 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:32:26.741 01:53:39 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:26.741 01:53:39 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:32:26.741 01:53:39 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:26.741 01:53:39 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:32:26.741 01:53:39 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:26.741 01:53:39 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:32:26.741 01:53:39 -- 
target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:26.741 01:53:39 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:32:26.741 EAL: No free 2048 kB hugepages reported on node 1 00:32:30.019 Initializing NVMe Controllers 00:32:30.019 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:32:30.019 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:32:30.019 Initialization complete. Launching workers. 00:32:30.019 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 29709, failed: 0 00:32:30.019 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 29709, failed to submit 0 00:32:30.019 success 0, unsuccess 29709, failed 0 00:32:30.019 01:53:42 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:30.019 01:53:42 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:32:30.019 EAL: No free 2048 kB hugepages reported on node 1 00:32:33.295 Initializing NVMe Controllers 00:32:33.295 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:32:33.295 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:32:33.295 Initialization complete. Launching workers. 
00:32:33.295 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 60159, failed: 0 00:32:33.295 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 15142, failed to submit 45017 00:32:33.295 success 0, unsuccess 15142, failed 0 00:32:33.295 01:53:45 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:33.295 01:53:45 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:32:33.295 EAL: No free 2048 kB hugepages reported on node 1 00:32:36.575 Initializing NVMe Controllers 00:32:36.575 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:32:36.575 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:32:36.575 Initialization complete. Launching workers. 00:32:36.575 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 58895, failed: 0 00:32:36.575 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 14686, failed to submit 44209 00:32:36.575 success 0, unsuccess 14686, failed 0 00:32:36.575 01:53:48 -- target/abort_qd_sizes.sh@70 -- # clean_kernel_target 00:32:36.575 01:53:48 -- nvmf/common.sh@675 -- # [[ -e /sys/kernel/config/nvmet/subsystems/kernel_target ]] 00:32:36.575 01:53:48 -- nvmf/common.sh@677 -- # echo 0 00:32:36.575 01:53:49 -- nvmf/common.sh@679 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/kernel_target 00:32:36.575 01:53:49 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:32:36.575 01:53:49 -- nvmf/common.sh@681 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:36.575 01:53:49 -- nvmf/common.sh@682 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:32:36.575 01:53:49 -- nvmf/common.sh@684 -- # modules=(/sys/module/nvmet/holders/*) 00:32:36.575 01:53:49 -- nvmf/common.sh@686 -- # modprobe -r nvmet_tcp nvmet 00:32:36.575 
00:32:36.575 real 0m11.950s 00:32:36.575 user 0m4.190s 00:32:36.575 sys 0m2.566s 00:32:36.575 01:53:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:36.575 01:53:49 -- common/autotest_common.sh@10 -- # set +x 00:32:36.575 ************************************ 00:32:36.575 END TEST kernel_target_abort 00:32:36.575 ************************************ 00:32:36.575 01:53:49 -- target/abort_qd_sizes.sh@86 -- # trap - SIGINT SIGTERM EXIT 00:32:36.575 01:53:49 -- target/abort_qd_sizes.sh@87 -- # nvmftestfini 00:32:36.575 01:53:49 -- nvmf/common.sh@476 -- # nvmfcleanup 00:32:36.575 01:53:49 -- nvmf/common.sh@116 -- # sync 00:32:36.575 01:53:49 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:32:36.575 01:53:49 -- nvmf/common.sh@119 -- # set +e 00:32:36.575 01:53:49 -- nvmf/common.sh@120 -- # for i in {1..20} 00:32:36.575 01:53:49 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:32:36.575 rmmod nvme_tcp 00:32:36.575 rmmod nvme_fabrics 00:32:36.575 rmmod nvme_keyring 00:32:36.575 01:53:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:32:36.575 01:53:49 -- nvmf/common.sh@123 -- # set -e 00:32:36.575 01:53:49 -- nvmf/common.sh@124 -- # return 0 00:32:36.575 01:53:49 -- nvmf/common.sh@477 -- # '[' -n 3928842 ']' 00:32:36.575 01:53:49 -- nvmf/common.sh@478 -- # killprocess 3928842 00:32:36.575 01:53:49 -- common/autotest_common.sh@926 -- # '[' -z 3928842 ']' 00:32:36.575 01:53:49 -- common/autotest_common.sh@930 -- # kill -0 3928842 00:32:36.575 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (3928842) - No such process 00:32:36.575 01:53:49 -- common/autotest_common.sh@953 -- # echo 'Process with pid 3928842 is not found' 00:32:36.575 Process with pid 3928842 is not found 00:32:36.575 01:53:49 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:32:36.575 01:53:49 -- nvmf/common.sh@481 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:37.511 0000:88:00.0 (8086 0a54): 
Already using the nvme driver 00:32:37.511 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:32:37.511 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:32:37.511 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:32:37.511 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:32:37.511 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:32:37.511 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:32:37.511 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:32:37.511 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:32:37.511 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:32:37.511 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:32:37.511 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:32:37.511 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:32:37.511 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:32:37.511 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:32:37.511 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:32:37.511 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:32:37.511 01:53:50 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:32:37.511 01:53:50 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:32:37.511 01:53:50 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:37.511 01:53:50 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:32:37.511 01:53:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:37.511 01:53:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:37.511 01:53:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:40.054 01:53:52 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:32:40.054 00:32:40.054 real 0m35.225s 00:32:40.054 user 1m2.719s 00:32:40.054 sys 0m8.665s 00:32:40.054 01:53:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:40.054 01:53:52 
-- common/autotest_common.sh@10 -- # set +x 00:32:40.054 ************************************ 00:32:40.054 END TEST nvmf_abort_qd_sizes 00:32:40.054 ************************************ 00:32:40.054 01:53:52 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:32:40.054 01:53:52 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:32:40.054 01:53:52 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:32:40.054 01:53:52 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:32:40.054 01:53:52 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:32:40.054 01:53:52 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:32:40.054 01:53:52 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:32:40.054 01:53:52 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:32:40.054 01:53:52 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:32:40.054 01:53:52 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:32:40.054 01:53:52 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:32:40.054 01:53:52 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:32:40.054 01:53:52 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:32:40.054 01:53:52 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:32:40.054 01:53:52 -- spdk/autotest.sh@378 -- # [[ 0 -eq 1 ]] 00:32:40.054 01:53:52 -- spdk/autotest.sh@383 -- # trap - SIGINT SIGTERM EXIT 00:32:40.054 01:53:52 -- spdk/autotest.sh@385 -- # timing_enter post_cleanup 00:32:40.054 01:53:52 -- common/autotest_common.sh@712 -- # xtrace_disable 00:32:40.054 01:53:52 -- common/autotest_common.sh@10 -- # set +x 00:32:40.054 01:53:52 -- spdk/autotest.sh@386 -- # autotest_cleanup 00:32:40.054 01:53:52 -- common/autotest_common.sh@1371 -- # local autotest_es=0 00:32:40.054 01:53:52 -- common/autotest_common.sh@1372 -- # xtrace_disable 00:32:40.054 01:53:52 -- common/autotest_common.sh@10 -- # set +x 00:32:41.431 INFO: APP EXITING 00:32:41.431 INFO: killing all VMs 00:32:41.431 INFO: killing vhost app 00:32:41.431 INFO: EXIT DONE 00:32:42.804 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:32:42.804 0000:00:04.7 
(8086 0e27): Already using the ioatdma driver 00:32:42.804 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:32:42.804 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:32:42.804 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:32:42.804 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:32:42.804 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:32:42.804 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:32:42.804 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:32:42.804 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:32:42.804 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:32:42.804 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:32:42.805 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:32:42.805 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:32:42.805 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:32:42.805 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:32:42.805 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:32:44.178 Cleaning 00:32:44.178 Removing: /var/run/dpdk/spdk0/config 00:32:44.178 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:32:44.178 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:32:44.178 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:32:44.178 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:32:44.178 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:32:44.178 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:32:44.178 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:32:44.178 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:32:44.178 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:32:44.178 Removing: /var/run/dpdk/spdk0/hugepage_info 00:32:44.178 Removing: /var/run/dpdk/spdk1/config 00:32:44.178 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:32:44.178 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1
00:32:44.178 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2
00:32:44.178 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3
00:32:44.178 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0
00:32:44.178 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1
00:32:44.178 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2
00:32:44.178 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3
00:32:44.178 Removing: /var/run/dpdk/spdk1/fbarray_memzone
00:32:44.178 Removing: /var/run/dpdk/spdk1/hugepage_info
00:32:44.178 Removing: /var/run/dpdk/spdk1/mp_socket
00:32:44.178 Removing: /var/run/dpdk/spdk2/config
00:32:44.178 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0
00:32:44.178 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:32:44.178 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:32:44.178 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:32:44.178 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0
00:32:44.178 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1
00:32:44.178 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2
00:32:44.178 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3
00:32:44.178 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:32:44.178 Removing: /var/run/dpdk/spdk2/hugepage_info
00:32:44.178 Removing: /var/run/dpdk/spdk3/config
00:32:44.178 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:32:44.178 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:32:44.178 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:32:44.178 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:32:44.178 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0
00:32:44.178 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1
00:32:44.178 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2
00:32:44.178 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3
00:32:44.178 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:32:44.178 Removing: /var/run/dpdk/spdk3/hugepage_info
00:32:44.178 Removing: /var/run/dpdk/spdk4/config
00:32:44.178 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:32:44.178 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:32:44.178 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:32:44.178 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:32:44.178 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0
00:32:44.178 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1
00:32:44.178 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2
00:32:44.178 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3
00:32:44.178 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:32:44.178 Removing: /var/run/dpdk/spdk4/hugepage_info
00:32:44.178 Removing: /dev/shm/bdev_svc_trace.1
00:32:44.178 Removing: /dev/shm/nvmf_trace.0
00:32:44.178 Removing: /dev/shm/spdk_tgt_trace.pid3653875
00:32:44.178 Removing: /var/run/dpdk/spdk0
00:32:44.178 Removing: /var/run/dpdk/spdk1
00:32:44.178 Removing: /var/run/dpdk/spdk2
00:32:44.178 Removing: /var/run/dpdk/spdk3
00:32:44.178 Removing: /var/run/dpdk/spdk4
00:32:44.178 Removing: /var/run/dpdk/spdk_pid3652176
00:32:44.178 Removing: /var/run/dpdk/spdk_pid3652920
00:32:44.178 Removing: /var/run/dpdk/spdk_pid3653875
00:32:44.178 Removing: /var/run/dpdk/spdk_pid3654353
00:32:44.178 Removing: /var/run/dpdk/spdk_pid3656051
00:32:44.178 Removing: /var/run/dpdk/spdk_pid3657138
00:32:44.178 Removing: /var/run/dpdk/spdk_pid3657393
00:32:44.178 Removing: /var/run/dpdk/spdk_pid3657641
00:32:44.178 Removing: /var/run/dpdk/spdk_pid3657976
00:32:44.178 Removing: /var/run/dpdk/spdk_pid3658170
00:32:44.438 Removing: /var/run/dpdk/spdk_pid3658329
00:32:44.438 Removing: /var/run/dpdk/spdk_pid3658493
00:32:44.438 Removing: /var/run/dpdk/spdk_pid3658675
00:32:44.438 Removing: /var/run/dpdk/spdk_pid3659256
00:32:44.438 Removing: /var/run/dpdk/spdk_pid3661675
00:32:44.438 Removing: /var/run/dpdk/spdk_pid3661972
00:32:44.438 Removing: /var/run/dpdk/spdk_pid3662143
00:32:44.438 Removing: /var/run/dpdk/spdk_pid3662285
00:32:44.438 Removing: /var/run/dpdk/spdk_pid3662595
00:32:44.438 Removing: /var/run/dpdk/spdk_pid3662737
00:32:44.438 Removing: /var/run/dpdk/spdk_pid3663174
00:32:44.438 Removing: /var/run/dpdk/spdk_pid3663308
00:32:44.438 Removing: /var/run/dpdk/spdk_pid3663484
00:32:44.438 Removing: /var/run/dpdk/spdk_pid3663622
00:32:44.438 Removing: /var/run/dpdk/spdk_pid3663789
00:32:44.438 Removing: /var/run/dpdk/spdk_pid3663930
00:32:44.438 Removing: /var/run/dpdk/spdk_pid3664306
00:32:44.438 Removing: /var/run/dpdk/spdk_pid3664462
00:32:44.438 Removing: /var/run/dpdk/spdk_pid3664654
00:32:44.438 Removing: /var/run/dpdk/spdk_pid3664951
00:32:44.438 Removing: /var/run/dpdk/spdk_pid3664979
00:32:44.438 Removing: /var/run/dpdk/spdk_pid3665072
00:32:44.438 Removing: /var/run/dpdk/spdk_pid3665300
00:32:44.438 Removing: /var/run/dpdk/spdk_pid3665466
00:32:44.438 Removing: /var/run/dpdk/spdk_pid3665609
00:32:44.438 Removing: /var/run/dpdk/spdk_pid3665773
00:32:44.438 Removing: /var/run/dpdk/spdk_pid3666025
00:32:44.438 Removing: /var/run/dpdk/spdk_pid3666187
00:32:44.438 Removing: /var/run/dpdk/spdk_pid3666335
00:32:44.438 Removing: /var/run/dpdk/spdk_pid3666491
00:32:44.438 Removing: /var/run/dpdk/spdk_pid3666748
00:32:44.438 Removing: /var/run/dpdk/spdk_pid3666914
00:32:44.438 Removing: /var/run/dpdk/spdk_pid3667052
00:32:44.438 Removing: /var/run/dpdk/spdk_pid3667218
00:32:44.438 Removing: /var/run/dpdk/spdk_pid3667475
00:32:44.438 Removing: /var/run/dpdk/spdk_pid3667634
00:32:44.438 Removing: /var/run/dpdk/spdk_pid3667779
00:32:44.438 Removing: /var/run/dpdk/spdk_pid3667938
00:32:44.438 Removing: /var/run/dpdk/spdk_pid3668196
00:32:44.438 Removing: /var/run/dpdk/spdk_pid3668361
00:32:44.438 Removing: /var/run/dpdk/spdk_pid3668503
00:32:44.438 Removing: /var/run/dpdk/spdk_pid3668666
00:32:44.438 Removing: /var/run/dpdk/spdk_pid3668926
00:32:44.438 Removing: /var/run/dpdk/spdk_pid3669083
00:32:44.438 Removing: /var/run/dpdk/spdk_pid3669230
00:32:44.438 Removing: /var/run/dpdk/spdk_pid3669388
00:32:44.438 Removing: /var/run/dpdk/spdk_pid3669655
00:32:44.438 Removing: /var/run/dpdk/spdk_pid3669812
00:32:44.438 Removing: /var/run/dpdk/spdk_pid3669953
00:32:44.438 Removing: /var/run/dpdk/spdk_pid3670132
00:32:44.438 Removing: /var/run/dpdk/spdk_pid3670376
00:32:44.438 Removing: /var/run/dpdk/spdk_pid3670539
00:32:44.438 Removing: /var/run/dpdk/spdk_pid3670680
00:32:44.438 Removing: /var/run/dpdk/spdk_pid3670908
00:32:44.438 Removing: /var/run/dpdk/spdk_pid3671106
00:32:44.438 Removing: /var/run/dpdk/spdk_pid3671266
00:32:44.438 Removing: /var/run/dpdk/spdk_pid3671416
00:32:44.438 Removing: /var/run/dpdk/spdk_pid3671662
00:32:44.438 Removing: /var/run/dpdk/spdk_pid3671844
00:32:44.438 Removing: /var/run/dpdk/spdk_pid3672001
00:32:44.438 Removing: /var/run/dpdk/spdk_pid3672142
00:32:44.438 Removing: /var/run/dpdk/spdk_pid3672415
00:32:44.438 Removing: /var/run/dpdk/spdk_pid3672489
00:32:44.438 Removing: /var/run/dpdk/spdk_pid3672695
00:32:44.438 Removing: /var/run/dpdk/spdk_pid3674892
00:32:44.438 Removing: /var/run/dpdk/spdk_pid3730313
00:32:44.438 Removing: /var/run/dpdk/spdk_pid3732891
00:32:44.438 Removing: /var/run/dpdk/spdk_pid3740084
00:32:44.438 Removing: /var/run/dpdk/spdk_pid3743587
00:32:44.438 Removing: /var/run/dpdk/spdk_pid3746527
00:32:44.438 Removing: /var/run/dpdk/spdk_pid3747019
00:32:44.438 Removing: /var/run/dpdk/spdk_pid3750768
00:32:44.438 Removing: /var/run/dpdk/spdk_pid3750775
00:32:44.438 Removing: /var/run/dpdk/spdk_pid3751453
00:32:44.438 Removing: /var/run/dpdk/spdk_pid3752129
00:32:44.438 Removing: /var/run/dpdk/spdk_pid3752680
00:32:44.438 Removing: /var/run/dpdk/spdk_pid3753092
00:32:44.438 Removing: /var/run/dpdk/spdk_pid3753224
00:32:44.438 Removing: /var/run/dpdk/spdk_pid3753363
00:32:44.438 Removing: /var/run/dpdk/spdk_pid3753504
00:32:44.438 Removing: /var/run/dpdk/spdk_pid3753507
00:32:44.438 Removing: /var/run/dpdk/spdk_pid3754177
00:32:44.438 Removing: /var/run/dpdk/spdk_pid3754739
00:32:44.438 Removing: /var/run/dpdk/spdk_pid3755418
00:32:44.438 Removing: /var/run/dpdk/spdk_pid3755825
00:32:44.438 Removing: /var/run/dpdk/spdk_pid3755834
00:32:44.438 Removing: /var/run/dpdk/spdk_pid3756093
00:32:44.438 Removing: /var/run/dpdk/spdk_pid3757140
00:32:44.438 Removing: /var/run/dpdk/spdk_pid3757888
00:32:44.438 Removing: /var/run/dpdk/spdk_pid3763488
00:32:44.438 Removing: /var/run/dpdk/spdk_pid3763772
00:32:44.438 Removing: /var/run/dpdk/spdk_pid3766330
00:32:44.438 Removing: /var/run/dpdk/spdk_pid3770215
00:32:44.438 Removing: /var/run/dpdk/spdk_pid3772322
00:32:44.438 Removing: /var/run/dpdk/spdk_pid3779483
00:32:44.438 Removing: /var/run/dpdk/spdk_pid3784958
00:32:44.438 Removing: /var/run/dpdk/spdk_pid3786192
00:32:44.438 Removing: /var/run/dpdk/spdk_pid3786891
00:32:44.438 Removing: /var/run/dpdk/spdk_pid3797401
00:32:44.438 Removing: /var/run/dpdk/spdk_pid3799619
00:32:44.438 Removing: /var/run/dpdk/spdk_pid3802449
00:32:44.697 Removing: /var/run/dpdk/spdk_pid3803666
00:32:44.697 Removing: /var/run/dpdk/spdk_pid3805012
00:32:44.697 Removing: /var/run/dpdk/spdk_pid3805172
00:32:44.697 Removing: /var/run/dpdk/spdk_pid3805350
00:32:44.697 Removing: /var/run/dpdk/spdk_pid3805592
00:32:44.697 Removing: /var/run/dpdk/spdk_pid3806159
00:32:44.697 Removing: /var/run/dpdk/spdk_pid3807552
00:32:44.697 Removing: /var/run/dpdk/spdk_pid3808444
00:32:44.697 Removing: /var/run/dpdk/spdk_pid3808888
00:32:44.697 Removing: /var/run/dpdk/spdk_pid3813120
00:32:44.697 Removing: /var/run/dpdk/spdk_pid3816445
00:32:44.697 Removing: /var/run/dpdk/spdk_pid3820085
00:32:44.697 Removing: /var/run/dpdk/spdk_pid3844226
00:32:44.697 Removing: /var/run/dpdk/spdk_pid3847060
00:32:44.697 Removing: /var/run/dpdk/spdk_pid3850852
00:32:44.697 Removing: /var/run/dpdk/spdk_pid3851865
00:32:44.697 Removing: /var/run/dpdk/spdk_pid3852989
00:32:44.697 Removing: /var/run/dpdk/spdk_pid3855689
00:32:44.697 Removing: /var/run/dpdk/spdk_pid3858095
00:32:44.697 Removing: /var/run/dpdk/spdk_pid3862464
00:32:44.697 Removing: /var/run/dpdk/spdk_pid3862471
00:32:44.697 Removing: /var/run/dpdk/spdk_pid3865404
00:32:44.697 Removing: /var/run/dpdk/spdk_pid3865544
00:32:44.697 Removing: /var/run/dpdk/spdk_pid3865678
00:32:44.697 Removing: /var/run/dpdk/spdk_pid3866084
00:32:44.697 Removing: /var/run/dpdk/spdk_pid3866089
00:32:44.697 Removing: /var/run/dpdk/spdk_pid3867196
00:32:44.697 Removing: /var/run/dpdk/spdk_pid3868430
00:32:44.697 Removing: /var/run/dpdk/spdk_pid3869645
00:32:44.697 Removing: /var/run/dpdk/spdk_pid3870865
00:32:44.697 Removing: /var/run/dpdk/spdk_pid3872195
00:32:44.697 Removing: /var/run/dpdk/spdk_pid3873932
00:32:44.697 Removing: /var/run/dpdk/spdk_pid3877921
00:32:44.697 Removing: /var/run/dpdk/spdk_pid3878264
00:32:44.697 Removing: /var/run/dpdk/spdk_pid3879579
00:32:44.697 Removing: /var/run/dpdk/spdk_pid3880340
00:32:44.697 Removing: /var/run/dpdk/spdk_pid3884121
00:32:44.697 Removing: /var/run/dpdk/spdk_pid3886167
00:32:44.697 Removing: /var/run/dpdk/spdk_pid3889771
00:32:44.697 Removing: /var/run/dpdk/spdk_pid3893411
00:32:44.697 Removing: /var/run/dpdk/spdk_pid3897075
00:32:44.697 Removing: /var/run/dpdk/spdk_pid3897494
00:32:44.697 Removing: /var/run/dpdk/spdk_pid3897915
00:32:44.697 Removing: /var/run/dpdk/spdk_pid3898344
00:32:44.697 Removing: /var/run/dpdk/spdk_pid3898929
00:32:44.697 Removing: /var/run/dpdk/spdk_pid3899480
00:32:44.697 Removing: /var/run/dpdk/spdk_pid3900031
00:32:44.697 Removing: /var/run/dpdk/spdk_pid3900455
00:32:44.697 Removing: /var/run/dpdk/spdk_pid3903121
00:32:44.697 Removing: /var/run/dpdk/spdk_pid3903342
00:32:44.697 Removing: /var/run/dpdk/spdk_pid3907740
00:32:44.697 Removing: /var/run/dpdk/spdk_pid3907926
00:32:44.697 Removing: /var/run/dpdk/spdk_pid3909608
00:32:44.697 Removing: /var/run/dpdk/spdk_pid3914734
00:32:44.697 Removing: /var/run/dpdk/spdk_pid3914840
00:32:44.697 Removing: /var/run/dpdk/spdk_pid3917786
00:32:44.697 Removing: /var/run/dpdk/spdk_pid3919224
00:32:44.697 Removing: /var/run/dpdk/spdk_pid3920668
00:32:44.697 Removing: /var/run/dpdk/spdk_pid3921549
00:32:44.697 Removing: /var/run/dpdk/spdk_pid3922995
00:32:44.697 Removing: /var/run/dpdk/spdk_pid3923897
00:32:44.697 Removing: /var/run/dpdk/spdk_pid3929284
00:32:44.697 Removing: /var/run/dpdk/spdk_pid3929688
00:32:44.697 Removing: /var/run/dpdk/spdk_pid3930090
00:32:44.697 Removing: /var/run/dpdk/spdk_pid3931566
00:32:44.697 Removing: /var/run/dpdk/spdk_pid3931976
00:32:44.697 Removing: /var/run/dpdk/spdk_pid3932386
00:32:44.697 Clean
00:32:44.698 killing process with pid 3624601
00:32:52.809 killing process with pid 3624598
00:32:52.809 killing process with pid 3624600
00:32:53.068 killing process with pid 3624599
00:32:53.068 01:54:05 -- common/autotest_common.sh@1436 -- # return 0
00:32:53.068 01:54:05 -- spdk/autotest.sh@387 -- # timing_exit post_cleanup
00:32:53.068 01:54:05 -- common/autotest_common.sh@718 -- # xtrace_disable
00:32:53.068 01:54:05 -- common/autotest_common.sh@10 -- # set +x
00:32:53.068 01:54:05 -- spdk/autotest.sh@389 -- # timing_exit autotest
00:32:53.068 01:54:05 -- common/autotest_common.sh@718 -- # xtrace_disable
00:32:53.068 01:54:05 -- common/autotest_common.sh@10 -- # set +x
00:32:53.068 01:54:05 -- spdk/autotest.sh@390 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:32:53.068 01:54:05 -- spdk/autotest.sh@392 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:32:53.068 01:54:05 -- spdk/autotest.sh@392 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:32:53.068 01:54:05 -- spdk/autotest.sh@394 -- # hash lcov
00:32:53.068 01:54:05 -- spdk/autotest.sh@394 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:32:53.068 01:54:05 -- spdk/autotest.sh@396 -- # hostname
00:32:53.068 01:54:05 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:32:53.068 geninfo: WARNING: invalid characters removed from testname!
00:33:19.648 01:54:31 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:33:22.940 01:54:35 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:33:25.479 01:54:38 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:33:28.772 01:54:41 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:33:31.304 01:54:44 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:33:33.834 01:54:46 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:33:37.117 01:54:49 -- spdk/autotest.sh@403 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:33:37.117 01:54:49 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:33:37.117 01:54:49 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]]
00:33:37.117 01:54:49 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:33:37.117 01:54:49 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:33:37.117 01:54:49 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:37.117 01:54:49 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:37.117 01:54:49 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:37.117 01:54:49 -- paths/export.sh@5 -- $ export PATH
00:33:37.117 01:54:49 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:37.117 01:54:49 -- common/autobuild_common.sh@437 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:33:37.117 01:54:49 -- common/autobuild_common.sh@438 -- $ date +%s
00:33:37.117 01:54:49 -- common/autobuild_common.sh@438 -- $ mktemp -dt spdk_1721692489.XXXXXX
00:33:37.117 01:54:49 -- common/autobuild_common.sh@438 -- $ SPDK_WORKSPACE=/tmp/spdk_1721692489.HuiSv2
00:33:37.117 01:54:49 -- common/autobuild_common.sh@440 -- $ [[ -n '' ]]
00:33:37.117 01:54:49 -- common/autobuild_common.sh@444 -- $ '[' -n v23.11 ']'
00:33:37.117 01:54:49 -- common/autobuild_common.sh@445 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:33:37.117 01:54:49 -- common/autobuild_common.sh@445 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk'
00:33:37.117 01:54:49 -- common/autobuild_common.sh@451 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:33:37.117 01:54:49 -- common/autobuild_common.sh@453 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:33:37.117 01:54:49 -- common/autobuild_common.sh@454 -- $ get_config_params
00:33:37.117 01:54:49 -- common/autotest_common.sh@387 -- $ xtrace_disable
00:33:37.117 01:54:49 -- common/autotest_common.sh@10 -- $ set +x
00:33:37.118 01:54:49 -- common/autobuild_common.sh@454 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build'
00:33:37.118 01:54:49 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48
00:33:37.118 01:54:49 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:33:37.118 01:54:49 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:33:37.118 01:54:49 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]]
00:33:37.118 01:54:49 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:33:37.118 01:54:49 -- spdk/autopackage.sh@19 -- $ timing_finish
00:33:37.118 01:54:49 -- common/autotest_common.sh@724 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:33:37.118 01:54:49 -- common/autotest_common.sh@725 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:33:37.118 01:54:49 -- common/autotest_common.sh@727 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:33:37.118 01:54:49 -- spdk/autopackage.sh@20 -- $ exit 0
00:33:37.118 + [[ -n 3569485 ]]
00:33:37.118 + sudo kill 3569485
00:33:37.127 [Pipeline] }
00:33:37.146 [Pipeline] // stage
00:33:37.152 [Pipeline] }
00:33:37.169 [Pipeline] // timeout
00:33:37.175 [Pipeline] }
00:33:37.192 [Pipeline] // catchError
00:33:37.197 [Pipeline] }
00:33:37.217 [Pipeline] // wrap
00:33:37.224 [Pipeline] }
00:33:37.239 [Pipeline] // catchError
00:33:37.249 [Pipeline] stage
00:33:37.251 [Pipeline] { (Epilogue)
00:33:37.265 [Pipeline] catchError
00:33:37.267 [Pipeline] {
00:33:37.281 [Pipeline] echo
00:33:37.283 Cleanup processes
00:33:37.289 [Pipeline] sh
00:33:37.589 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:33:37.589 3945128 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:33:37.622 [Pipeline] sh
00:33:37.915 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:33:37.915 ++ grep -v 'sudo pgrep'
00:33:37.915 ++ awk '{print $1}'
00:33:37.915 + sudo kill -9
00:33:37.915 + true
00:33:37.927 [Pipeline] sh
00:33:38.211 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:33:48.191 [Pipeline] sh
00:33:48.478 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:33:48.479 Artifacts sizes are good
00:33:48.494 [Pipeline] archiveArtifacts
00:33:48.501 Archiving artifacts
00:33:48.731 [Pipeline] sh
00:33:49.016 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:33:49.030 [Pipeline] cleanWs
00:33:49.041 [WS-CLEANUP] Deleting project workspace...
00:33:49.041 [WS-CLEANUP] Deferred wipeout is used...
00:33:49.049 [WS-CLEANUP] done
00:33:49.050 [Pipeline] }
00:33:49.070 [Pipeline] // catchError
00:33:49.083 [Pipeline] sh
00:33:49.364 + logger -p user.info -t JENKINS-CI
00:33:49.372 [Pipeline] }
00:33:49.388 [Pipeline] // stage
00:33:49.394 [Pipeline] }
00:33:49.410 [Pipeline] // node
00:33:49.416 [Pipeline] End of Pipeline
00:33:49.456 Finished: SUCCESS